WEKO3



Sparsity-Gradient-Based Pruning and the Vitis-AI Implementation for Compacting Deep Learning

https://ipsj.ixsq.nii.ac.jp/records/214015
5c7c5fb3-1504-486e-82a0-006f1cef6ec3
Name / File | License | Action
IPSJ-SLDM21196006.pdf (2.4 MB)
Copyright (c) 2021 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG.
SLDM: Members: ¥0 / DLIB: Members: ¥0
Item type: SIG Technical Reports(1)
Release date: 2021-11-24
Title: Sparsity-Gradient を用いた深層学習モデルの圧縮とVitis-AI への実装
Title (en): Sparsity-Gradient-Based Pruning and the Vitis-AI Implementation for Compacting Deep Learning
Language: eng
Keyword: machine learning (subject scheme: Other)
Resource type: technical report (http://purl.org/coar/resource_type/c_18gh)
Author affiliation (all authors): Department of Electronic and Computer Engineering, College of Science and Engineering, Ritsumeikan University (立命館大学理工学部電子情報工学科)
Authors: 李, 恒毅; 岳, 学彬; 孟, 林
Authors (en): Hengyi, Li; Xuebin, Yue; Lin, Meng
Abstract
Description type: Other
Description: The paper proposes a Sparsity-Gradient-Based layer-wise pruning technique for compacting deep neural networks and accelerates the network with Vitis AI on the Xilinx FPGA platform. The experimental results show that nearly 99.67% of parameters and 97.91% of floating-point operations are pruned with only a 1.2% decrease in accuracy. With the support of Vitis AI, which offers a solution for adaptable and real-time AI inference acceleration, the pruned model is quantized and implemented on FPGA. For VGG13BN, the inference process achieves a throughput of 237.80 floating-point operations per second and a running time of 4.21 ms, about a 10× speedup over the original model in single-thread mode. The paper also makes an in-depth analysis of the efficiency and utilization of the inference implementation, including the layer-wise workloads, running time, and memory consumption. Building on this comprehensive analysis of the model deployed on FPGA, we plan to design an acceleration engine at the hardware level that exploits the potential of the FPGA.
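The report itself contains no code, but the layer-wise magnitude pruning that underlies the abstract can be sketched in plain Python. This is a minimal illustrative sketch only: the fixed per-layer ratios below are hypothetical, standing in for the paper's actual sparsity-gradient criterion, which selects each layer's pruning ratio and is not reproduced here.

```python
def prune_layer(weights, ratio):
    """Zero out the fraction `ratio` of weights with the smallest magnitude."""
    k = int(round(ratio * len(weights)))
    if k == 0:
        return list(weights)
    # Magnitude of the k-th smallest weight becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def overall_sparsity(layers):
    """Fraction of zeroed parameters across all layers of the model."""
    total = sum(len(layer) for layer in layers)
    zeros = sum(1 for layer in layers for w in layer if w == 0.0)
    return zeros / total

# Toy two-layer "model"; in the paper, the per-layer ratios would be
# chosen from each layer's sparsity gradient rather than fixed by hand.
model = [[0.9, -0.1, 0.3, -0.8], [0.05, 1.2, -0.02, 0.7]]
ratios = [0.5, 0.5]
pruned = [prune_layer(layer, r) for layer, r in zip(model, ratios)]
print(overall_sparsity(pruned))  # 0.5
```

After pruning, the surviving weights would be quantized and deployed; that Vitis AI quantization and DPU deployment step is toolchain-specific and is not sketched here.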
Bibliographic record ID (NCID): AA11451459
Bibliographic information: 研究報告システムとLSIの設計技術(SLDM) (IPSJ SIG Technical Reports on System and LSI Design Methodology)

Vol. 2021-SLDM-196, No. 6, pp. 1-6, issued 2021-11-24
ISSN: 2188-8639
Notice: SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher: 情報処理学会 (Information Processing Society of Japan)
Versions

Ver.1 2025-01-19 16:56:04.598436

Powered by WEKO3

