Enhancing Sparse DNN Inference on GPUs: Adaptive Tile Pruning and Split-Tiled Sparse Matrix Multiplication
https://ipsj.ixsq.nii.ac.jp/records/237576
Name / File | License
---|---
Available for download from August 1, 2026. | Copyright (c) 2024 by the Information Processing Society of Japan

Price: Non-member: ¥660; IPSJ member: ¥330; HPC member: ¥0; DLIB member: ¥0
Item type | SIG Technical Reports(1)
---|---
Publication date | 2024-08-01
Title | Enhancing Sparse DNN Inference on GPUs: Adaptive Tile Pruning and Split-Tiled Sparse Matrix Multiplication (language: en)
Language | eng
Keyword | Deep learning (subject scheme: Other)
Resource type | technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliation | Graduate School of Information Science and Technology, Osaka University
Author affiliation | Graduate School of Information Science and Technology, Osaka University
Author(s) | Yanchen Li; Fumihiko Ino
Abstract (description type: Other) | Deep neural network (DNN) pruning is a popular method for accelerating DNN computation by removing unimportant parameters. Among pruning methods, tile-wise pruning (TWP) with sparse matrix multiplication achieves significant acceleration with minimal pruning loss. However, TWP-based sparse matrix multiplication suffers from load imbalance when the important weight elements in a DNN's matrices are unevenly distributed. To address this issue, we propose Adaptive Tile Pruning (ATP) and Split-Tiled Sparse Matrix Multiplication (STSpMM). ATP constructs sparse matrices with flexibly balanced workloads while preserving DNN model accuracy. Meanwhile, STSpMM efficiently handles ATP-generated sparse matrices on GPUs by splitting and redistributing large workloads. We evaluated our approach on a pruned ResNet-34 model using ImageNet and on a pruned BERT-Small model on the QNLI task. The results demonstrate that ATP-pruned models processed via STSpMM achieve greater acceleration than previous methods while maintaining accuracy.
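The abstract describes tile-wise pruning and workload splitting only at a high level. As a rough illustration of the general idea — not the ATP/STSpMM algorithms themselves, which are defined in the full report — the NumPy sketch below scores fixed-size tiles of a weight matrix by magnitude, keeps the top fraction of tiles, and then splits each row's surviving tiles into bounded-size chunks, mimicking how a large workload might be redistributed across GPU thread blocks. All function names and parameters here are hypothetical.

```python
import numpy as np

def tile_prune(W, tile=4, keep_ratio=0.5):
    """Keep the highest-magnitude tiles of W; zero out the rest."""
    rows, cols = W.shape
    assert rows % tile == 0 and cols % tile == 0
    nt_r, nt_c = rows // tile, cols // tile
    # Score each tile by the sum of absolute weights it contains.
    scores = np.abs(W).reshape(nt_r, tile, nt_c, tile).sum(axis=(1, 3))
    k = max(1, int(keep_ratio * scores.size))
    thresh = np.sort(scores.ravel())[-k]
    mask = scores >= thresh  # tile-level keep/drop decision
    # Expand the tile mask back to element granularity.
    full_mask = np.kron(mask, np.ones((tile, tile), dtype=bool))
    return W * full_mask, mask

def split_rows(mask, max_tiles_per_chunk=2):
    """Split each tile-row's surviving tiles into chunks of bounded size,
    a crude stand-in for redistributing heavy rows across thread blocks."""
    chunks = []
    for r in range(mask.shape[0]):
        cols = np.flatnonzero(mask[r])
        for i in range(0, len(cols), max_tiles_per_chunk):
            chunks.append((r, cols[i:i + max_tiles_per_chunk].tolist()))
    return chunks
```

In a real kernel the per-chunk work would be an SpMM over the retained tiles; here the point is only that tile scoring and row splitting bound the largest unit of work, which is the load-balance concern the abstract raises.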
Bibliographic record ID | NCID: AN10463942
Bibliographic information | IPSJ SIG Technical Report: High Performance Computing (HPC), Vol. 2024-HPC-195, No. 15, pp. 1-8, published 2024-08-01
ISSN | 2188-8841
Notice | SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher | Information Processing Society of Japan (情報処理学会)