Item type |
SIG Technical Reports(1) |
Publication date |
2021-05-13 |
Title |
Behavior-based DNN Compression: Pruning and Facilitation Methods |
Language |
en |
Keyword |
|
|
Subject Scheme |
Other |
|
Subject |
Doctoral Dissertation Session |
Resource type |
|
|
Resource type identifier |
http://purl.org/coar/resource_type/c_18gh |
|
Resource type |
technical report |
Author affiliation |
|
|
|
Wakayama University |
Author affiliation |
|
|
|
Wakayama University |
Author name |
Koji, Kamma
Toshikazu, Wada
|
Abstract |
|
|
Description type |
Other |
|
Description |
In this paper, we present two pruning methods. Pruning is a technique for reducing the computational cost of Deep Neural Networks (DNNs) by removing redundant neurons. The proposed pruning methods are Neuro-Unification (NU) and Reconstruction Error Aware Pruning (REAP). These methods not only prune but also perform reconstruction to prevent accuracy degradation: in the reconstruction step, the weights connected to the remaining neurons are updated so as to compensate for the error caused by pruning. Consequently, models pruned by these methods suffer smaller accuracy degradation. As REAP requires a significant amount of computation to select the neurons to be pruned, we developed a biorthogonal system-based algorithm that reduces the computational order of neuron selection from O(n^4) to O(n^3), where n denotes the number of neurons. We also propose two methods for facilitating pruning: Pruning Ratio Optimizer (PRO) and Serialized Residual Network (SRN). As REAP prunes each layer separately, the pruning ratio (the ratio of neurons to be pruned) in each layer must be tuned properly to preserve the model accuracy. PRO is a method that can be combined with REAP to tune the pruning ratios based on the error in the final layer of the pruned DNN. SRN facilitates pruning for ResNet: because of its identity shortcuts, some layers of ResNet cannot be pruned, so we first convert ResNet into an equivalent serial DNN model, which we call SRN, so that pruning can be performed in any layer. |
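The following is a minimal sketch, in Python/NumPy, of the reconstruction step described in the abstract: after pruning, the weights connected to the remaining neurons are refit so that the layer's output approximates its original, unpruned output. The function name, shapes, and the plain least-squares solve are illustrative assumptions; the paper's actual NU/REAP procedures, including the O(n^3) biorthogonal system-based neuron selection, are not reproduced here.

import numpy as np

def prune_and_reconstruct(Z, W, keep):
    # Hypothetical helper (not from the paper): given activations Z of shape
    # (samples, n) feeding a layer with weights W of shape (n, m), and the
    # indices `keep` of the surviving neurons, refit the surviving weights so
    # that Z[:, keep] @ W_new approximates the original output Z @ W,
    # compensating for the error caused by pruning.
    Y = Z @ W                          # original (unpruned) layer output
    Z_kept = Z[:, keep]                # activations of the remaining neurons
    # Least-squares reconstruction: minimize ||Z_kept @ W_new - Y||_F
    W_new, *_ = np.linalg.lstsq(Z_kept, Y, rcond=None)
    return W_new

# Usage: prune 2 of 8 neurons, then measure how well the refit weights
# reproduce the original output.
rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 8))
W = rng.normal(size=(8, 4))
keep = [0, 1, 2, 3, 5, 7]              # neurons 4 and 6 are pruned
W_new = prune_and_reconstruct(Z, W, keep)
err = np.linalg.norm(Z[:, keep] @ W_new - Z @ W) / np.linalg.norm(Z @ W)
print(f"relative reconstruction error: {err:.4f}")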
Bibliographic record ID |
|
|
Source identifier type |
NCID |
|
Source identifier |
AN10115061 |
Bibliographic information |
IPSJ SIG Technical Report: Natural Language Processing (NL),
Vol. 2021-NL-248,
No. 10,
pp. 1-16,
Issued: 2021-05-13
|
ISSN |
|
|
Source identifier type |
ISSN |
|
Source identifier |
2188-8779 |
Notice |
|
|
|
SIG Technical Reports are non-refereed and hence their content may later appear in journals, conferences, symposia, etc. |
Publisher |
|
|
Language |
ja |
|
Publisher |
Information Processing Society of Japan |