

Enhancing Sparse DNN Inference on GPUs: Adaptive Tile Pruning and Split-Tiled Sparse Matrix Multiplication

https://ipsj.ixsq.nii.ac.jp/records/237576
568946bf-51f3-423c-b489-a5d6c8683c9c
Name / File: IPSJ-HPC24195015.pdf (2.1 MB)
Available for download from August 1, 2026.
Copyright (c) 2024 by the Information Processing Society of Japan
Fee: Non-member: ¥660; IPSJ member: ¥330; SIG-HPC member: ¥0; DLIB member: ¥0
Item type: SIG Technical Reports (1)
Release date: 2024-08-01
Title: Enhancing Sparse DNN Inference on GPUs: Adaptive Tile Pruning and Split-Tiled Sparse Matrix Multiplication
Language: eng
Keyword (subject scheme: Other): deep learning (深層学習)
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliation (both authors): Graduate School of Information Science and Technology, Osaka University
Authors: Yanchen, Li; Fumihiko, Ino
Abstract (description type: Other):
Deep neural network (DNN) pruning is a popular method for accelerating computations in DNNs by removing unimportant parameters. Among pruning methods, tile-wise pruning (TWP) with sparse matrix multiplication achieves significant acceleration with minimal pruning loss. However, sparse matrix multiplication based on TWP suffers from load imbalance when important weight elements in the matrices of the DNN are unevenly distributed. To address this issue, we propose Adaptive Tile Pruning (ATP) and Split-Tiled Sparse Matrix Multiplication (STSpMM). ATP constructs sparse matrices with flexibly balanced workloads while preserving DNN model accuracy. Meanwhile, STSpMM efficiently handles ATP-generated sparse matrices on GPUs by splitting and redistributing large workloads. We evaluated our approach on a pruned ResNet-34 model using ImageNet and on BERT-Small using the QNLI task. Results demonstrate that ATP-pruned models processed via STSpMM achieve greater acceleration than previous methods while maintaining accuracy.
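The abstract describes tile-wise pruning, where entire tiles of a weight matrix are zeroed by importance rather than individual elements. The following is a minimal NumPy sketch of generic tile-wise pruning under simple assumptions (importance = per-tile L1 norm, keep a fixed fraction of tiles); the function name and scoring rule are illustrative, not the paper's ATP algorithm, which additionally balances workloads across tiles.

```python
import numpy as np

def tile_prune(weights, tile_shape=(4, 4), keep_ratio=0.5):
    """Zero out the least-important tiles of a 2-D weight matrix.

    A generic tile-wise pruning sketch (not the paper's ATP):
    each tile is scored by its L1 norm, and only the top
    `keep_ratio` fraction of tiles is kept.
    """
    th, tw = tile_shape
    rows, cols = weights.shape
    assert rows % th == 0 and cols % tw == 0, "matrix must tile evenly"

    # View the matrix as a grid of tiles: shape (rows//th, cols//tw, th, tw)
    tiles = weights.reshape(rows // th, th, cols // tw, tw).transpose(0, 2, 1, 3)

    scores = np.abs(tiles).sum(axis=(2, 3))        # L1 norm per tile
    n_keep = max(1, int(round(keep_ratio * scores.size)))
    cutoff = np.sort(scores, axis=None)[-n_keep]   # score of weakest kept tile
    keep = scores >= cutoff                        # boolean tile mask

    # Broadcast the tile mask over each tile's entries and reassemble
    pruned = (tiles * keep[:, :, None, None]).transpose(0, 2, 1, 3)
    return pruned.reshape(rows, cols), keep
```

When the kept tiles cluster in a few rows, a tiled SpMM kernel assigning one row of tiles per thread block becomes load-imbalanced; the paper's STSpMM addresses this on the GPU side by splitting and redistributing oversized per-row workloads.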
Bibliographic record ID (NCID): AN10463942
Bibliographic information: IPSJ SIG Technical Report: High Performance Computing (研究報告ハイパフォーマンスコンピューティング(HPC))
Vol. 2024-HPC-195, No. 15, pp. 1-8, issued 2024-08-01
ISSN: 2188-8841
Notice
SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher: 情報処理学会 (Information Processing Society of Japan)
Version: Ver.1, 2025-01-19 08:50:20


Powered by WEKO3

