Behavior-based DNN Compression: Pruning and Facilitation Methods

https://ipsj.ixsq.nii.ac.jp/records/211157
Name / File: IPSJ-CVIM21226010.pdf (1.8 MB)
License: Copyright (c) 2021 by the Information Processing Society of Japan
Access: Open access
Item type: SIG Technical Reports (1)
Release date: 2021-05-13
Title: Behavior-based DNN Compression: Pruning and Facilitation Methods (title language: en)
Language: eng
Keyword (subject scheme: Other): Doctoral thesis session (D論セッション)
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliation: Wakayama University (both authors)
Authors: Koji Kamma; Toshikazu Wada
Abstract (description type: Other): In this paper, we present two pruning methods. Pruning is a technique for reducing the computational cost of Deep Neural Networks (DNNs) by removing redundant neurons. The proposed pruning methods are Neuro-Unification (NU) and Reconstruction Error Aware Pruning (REAP). These methods not only prune but also perform reconstruction to prevent accuracy degradation: in the reconstruction step, we update the weights connected to the remaining neurons to compensate for the error caused by pruning, and the pruned models therefore suffer less accuracy degradation. Because REAP requires a significant amount of computation to select the neurons to be pruned, we developed a biorthogonal-system-based algorithm that reduces the computational order of neuron selection from O(n^4) to O(n^3), where n denotes the number of neurons. We also propose two methods for facilitating pruning: Pruning Ratio Optimizer (PRO) and Serialized Residual Network (SRN). Since REAP prunes each layer separately, the pruning ratio (the ratio of neurons to be pruned) in each layer must be tuned properly to preserve model accuracy. PRO is a method that can be combined with REAP to tune the pruning ratios based on the error in the final layer of the pruned DNN. SRN facilitates pruning for ResNet: because of its identity shortcuts, some layers of ResNet cannot be pruned, so we first convert ResNet into an equivalent serial DNN model, which we call SRN, so that pruning can be performed in any layer.
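
The reconstruction step that NU and REAP share can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: it assumes a fully connected layer computing X @ W over sampled input activations X, treats pruning as removing input neurons (columns of X and the corresponding rows of W), and re-fits the surviving weights by least squares so the layer reproduces its original output.

```python
import numpy as np

def reconstruct_after_pruning(W, X, pruned):
    """Least-squares reconstruction sketch (illustrative, not the
    paper's exact algorithm).

    W      : (n_in, n_out) weights of the layer being repaired
    X      : (samples, n_in) input activations sampled on some data
    pruned : indices of the removed input neurons

    Returns the kept indices and re-fitted weights, chosen so the layer
    reproduces its original output Y = X @ W as closely as possible.
    """
    Y = X @ W                                    # output before pruning
    keep = np.setdiff1d(np.arange(W.shape[0]), pruned)
    # Re-fit surviving weights: minimize || X[:, keep] @ W_kept - Y ||_F
    W_kept, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
    return keep, W_kept
```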
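
The O(n^4) figure in the abstract can be read off a naive selection loop: each of the n pruning candidates needs its own O(n^3) least-squares solve. The sketch below shows only this naive baseline, in the same illustrative setting as above; the paper's biorthogonal-system algorithm, which brings selection down to O(n^3), is not reproduced here.

```python
import numpy as np

def select_neuron_naive(W, X):
    """Naive REAP-style neuron selection: the O(n^4) baseline that the
    paper's biorthogonal-system algorithm improves to O(n^3). For each
    candidate, re-fit the remaining weights and measure the residual
    reconstruction error; prune the cheapest-to-compensate neuron."""
    Y = X @ W
    n = W.shape[0]
    errors = np.empty(n)
    for i in range(n):                           # n candidates ...
        keep = np.delete(np.arange(n), i)
        # ... each requiring an O(n^3) least-squares solve
        W_kept, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
        errors[i] = np.linalg.norm(X[:, keep] @ W_kept - Y)
    return int(np.argmin(errors))                # next neuron to prune
```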
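
The abstract describes PRO only as tuning per-layer pruning ratios from the error at the final layer. One plausible reading, sketched below from that description alone, is a greedy loop; the callable final_layer_error and the whole procedure are assumptions for illustration, not the paper's stated algorithm.

```python
def tune_pruning_ratios(layer_sizes, final_layer_error, total_to_prune):
    """Hypothetical greedy sketch in the spirit of PRO: repeatedly prune
    one more neuron from whichever layer increases the final-layer error
    the least, so per-layer ratios are driven by end-to-end error.

    layer_sizes       : prunable neurons per layer
    final_layer_error : callable(counts) -> error at the network's final
                        layer when counts[l] neurons are pruned (with
                        reconstruction) in layer l; supplied by the caller
    total_to_prune    : total number of neurons to remove network-wide
    """
    counts = [0] * len(layer_sizes)
    for _ in range(total_to_prune):
        best_layer, best_err = None, float("inf")
        for layer, size in enumerate(layer_sizes):
            if counts[layer] + 1 >= size:        # keep >= 1 neuron/layer
                continue
            trial = counts.copy()
            trial[layer] += 1
            err = final_layer_error(trial)
            if err < best_err:
                best_layer, best_err = layer, err
        if best_layer is None:                   # nothing left to prune
            break
        counts[best_layer] += 1
    return [c / s for c, s in zip(counts, layer_sizes)]  # pruning ratios
```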
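
Finally, the SRN idea of removing identity shortcuts so that every layer becomes prunable can be shown on a toy residual block y = x + W2 @ relu(W1 @ x). The sketch assumes the block input is non-negative (as it is right after a ReLU), so an identity channel passes through the activation unchanged; this assumption and the linear-layer setting are for illustration, and the paper's construction for convolutional ResNet is not reproduced.

```python
import numpy as np

def serialize_residual_block(W1, W2):
    """Toy SRN-style serialization of y = x + W2 @ relu(W1 @ x).

    Augment the first layer with identity rows so the input x rides
    along as extra channels, then fold the shortcut back in at the
    second layer. Assumes x >= 0 elementwise so relu(x) == x on the
    carried copy. After the rewrite the block is a plain serial
    network, so any of its (now wider) hidden neurons can be pruned.
    """
    d = W1.shape[1]                         # input dimension
    W1_serial = np.vstack([W1, np.eye(d)])  # h = relu([W1 @ x ; x])
    W2_serial = np.hstack([W2, np.eye(d)])  # y = W2 @ relu(W1 @ x) + x
    return W1_serial, W2_serial

# Quick equivalence check on a non-negative input:
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 8))
x = np.abs(rng.normal(size=4))              # non-negative, as assumed
relu = lambda v: np.maximum(v, 0.0)
W1s, W2s = serialize_residual_block(W1, W2)
assert np.allclose(x + W2 @ relu(W1 @ x), W2s @ relu(W1s @ x))
```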
Bibliographic record ID (NCID): AA11131797
Bibliographic information: IPSJ SIG Technical Reports: Computer Vision and Image Media (CVIM), Vol. 2021-CVIM-226, No. 10, pp. 1-16, issued 2021-05-13
ISSN: 2188-8701
Notice: SIG Technical Reports are non-refereed and hence may later appear in journals, conference proceedings, symposia, etc.
Publisher (ja): Information Processing Society of Japan (情報処理学会)