Behavior-based DNN Compression: Pruning and Facilitation Methods

https://ipsj.ixsq.nii.ac.jp/records/211110
File: IPSJ-NL21248010.pdf (1.8 MB)
License: Copyright (c) 2021 by the Information Processing Society of Japan
Access: Open access
Item type: SIG Technical Reports (1)
Publication date: 2021-05-13
Title: Behavior-based DNN Compression: Pruning and Facilitation Methods
Language: eng
Keywords
Subject scheme: Other
Subject: Doctoral thesis session (D論セッション)
Resource type
Resource type identifier: http://purl.org/coar/resource_type/c_18gh
Resource type: technical report
Author affiliation: Wakayama University
Author affiliation: Wakayama University
Authors: Koji Kamma; Toshikazu Wada
Abstract
Description type: Other
Description: In this paper, we present two pruning methods. Pruning is a technique for reducing the computational cost of Deep Neural Networks (DNNs) by removing redundant neurons. The proposed pruning methods are Neuro-Unification (NU) and Reconstruction Error Aware Pruning (REAP). These methods not only prune but also perform reconstruction to prevent accuracy degradation: in the reconstruction step, the weights connected to the remaining neurons are updated to compensate for the error caused by pruning. Therefore, models pruned by these methods suffer less accuracy degradation. Because REAP requires a significant amount of computation to select the neurons to be pruned, we developed a biorthogonal-system-based algorithm that reduces the computational order of neuron selection from O(n^4) to O(n^3), where n denotes the number of neurons. We also propose two methods for facilitating pruning: Pruning Ratio Optimizer (PRO) and Serialized Residual Network (SRN). Because REAP prunes each layer separately, the pruning ratio (the ratio of neurons to be pruned) in each layer must be tuned properly to preserve model accuracy. PRO is a method that can be combined with REAP to tune the pruning ratios based on the error in the final layer of the pruned DNN. SRN facilitates pruning for ResNet: because of its identity shortcuts, some layers of ResNet cannot be pruned directly, so we first convert ResNet into an equivalent serial DNN model, which we call SRN, so that pruning can be performed in any layer.
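The reconstruction idea described in the abstract can be illustrated with a small sketch. The code below is not the authors' implementation; it is a minimal NumPy example, with illustrative names (prune_one_neuron, X, W), of reconstruction-aware pruning for a single fully connected layer: each candidate neuron is removed in turn, the surviving weights are re-fit by least squares so that the layer's output is preserved, and the candidate with the smallest reconstruction error is chosen. This exhaustive search corresponds to the expensive selection step that the paper's biorthogonal-system-based algorithm is designed to speed up.

import numpy as np

def prune_one_neuron(X, W):
    """Remove the neuron whose loss is easiest to repair by reconstruction.

    X: (n_samples, n_neurons) input activations collected on sample data.
    W: (n_neurons, n_outputs) outgoing weights of the layer.
    Returns the indices of the kept neurons, their re-fitted weights,
    and the resulting reconstruction error.
    """
    n_neurons = X.shape[1]
    target = X @ W                      # original layer output to reconstruct
    best = None
    for i in range(n_neurons):          # candidate neuron to remove
        keep = [j for j in range(n_neurons) if j != i]
        X_keep = X[:, keep]
        # Least-squares re-fit: update the weights of the surviving neurons
        # so that X_keep @ W_new approximates the original output.
        W_new, _, _, _ = np.linalg.lstsq(X_keep, target, rcond=None)
        err = np.linalg.norm(X_keep @ W_new - target)
        if best is None or err < best[0]:
            best = (err, keep, W_new)
    err, keep, W_new = best
    return keep, W_new, err

# Purely illustrative usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))          # activations of 32 neurons
W = rng.normal(size=(32, 16))           # weights to 16 output units
keep, W_new, err = prune_one_neuron(X, W)
print(len(keep), "neurons kept; reconstruction error:", err)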
Bibliographic record ID
Source identifier type: NCID
Source identifier: AN10115061
Bibliographic information: IPSJ SIG Technical Report, Natural Language Processing (NL)

Vol. 2021-NL-248, No. 10, pp. 1-16, issue date 2021-05-13
ISSN
Source identifier type: ISSN
Source identifier: 2188-8779
Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher
Language: ja
Publisher: Information Processing Society of Japan (情報処理学会)