
A Study on Sparsity Learning for Neural Network Acceleration

https://ipsj.ixsq.nii.ac.jp/records/203412
Name / File: IPSJ-AVM20108010.pdf (801.8 kB)
License: Copyright (c) 2020 by the Information Processing Society of Japan
Open access
Item type: SIG Technical Reports(1)
Release date: 2020-02-20
Title: A Study on Sparsity Learning for Neural Network Acceleration (en)
Language: eng
Resource type identifier: http://purl.org/coar/resource_type/c_18gh
Resource type: technical report
Author affiliation: KDDI Research, Inc.
Author affiliation: KDDI Research, Inc.
Authors: Jianfeng, Xu; Kazuyuki, Tasaka
Abstract
Description type: Other
Description: A sparsity learning framework is effective as it learns and prunes the models in an end-to-end, data-driven manner. However, existing works impose the same sparsity regularization on all filters indiscriminately, which can hardly result in an optimal structure-sparse network. In this paper, we propose a Saliency-Adaptive Sparsity Learning (SASL) approach for further optimization. The saliency of each filter is measured from two aspects: its importance for prediction performance and the computational resources it consumes. During sparsity learning, the regularization is adjusted according to the saliency, so our optimized format can better preserve prediction performance while zeroing out more computation-heavy filters. The saliency calculation introduces minimal overhead to the training process, which means our SASL is very efficient. During the pruning phase, in order to optimize the proposed data-dependent criterion, a hard sample mining strategy is utilized, which shows higher effectiveness and efficiency. Extensive experiments demonstrate the superior performance of our method.
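
The record carries no code, so the following is a minimal PyTorch sketch of the general idea of saliency-adaptive sparsity regularization described in the abstract, assuming the common setup where each convolutional filter is paired with a BatchNorm scaling factor. The names (filter_saliency, sasl_penalty, base_lambda) and the exact saliency formula are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch, NOT the paper's implementation: per-filter L1 sparsity
# regularization whose strength adapts to a saliency combining the two
# aspects named in the abstract (prediction importance and compute cost).
import torch

def filter_saliency(gamma, flops_per_filter, eps=1e-8):
    # gamma: BatchNorm scale per filter (importance proxy, an assumption);
    # flops_per_filter: 1-D tensor of FLOPs attributable to each filter.
    # Low importance combined with high cost => high saliency to prune.
    importance = gamma.abs() / (gamma.abs().max() + eps)      # in [0, 1]
    cost = flops_per_filter / (flops_per_filter.max() + eps)  # in [0, 1]
    return (1.0 - importance) * cost

def sasl_penalty(bn_layers, flops_per_layer, base_lambda=1e-4):
    # L1 penalty on each gamma, scaled per filter by its saliency, so that
    # computation-heavy, low-importance filters are driven to zero hardest.
    penalty = 0.0
    for bn, flops in zip(bn_layers, flops_per_layer):
        s = filter_saliency(bn.weight.detach(), flops)  # saliency carries no grad
        penalty = penalty + (base_lambda * s * bn.weight.abs()).sum()
    return penalty

# Usage inside a training step (task_loss is the usual cross-entropy):
#   loss = task_loss + sasl_penalty(bn_layers, flops_per_layer)
#   loss.backward()
```

Because the saliency is detached, it only rescales the penalty per filter; the gradient of the regularizer itself stays the ordinary (sign-of-gamma) L1 gradient, which keeps the training overhead minimal, consistent with the efficiency claim in the abstract.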
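For the pruning phase, the abstract mentions a hard sample mining strategy for optimizing the data-dependent criterion. One plausible reading, sketched below under the assumption that "hard samples" means the highest-loss examples in a batch (hard_sample_subset and keep_ratio are hypothetical names):

```python
# Hedged sketch of hard sample mining for a data-dependent pruning criterion.
import torch
import torch.nn.functional as F

def hard_sample_subset(model, images, labels, keep_ratio=0.25):
    # Keep only the hardest fraction of the batch, i.e. the samples with
    # the largest per-sample loss; a data-dependent filter-importance
    # criterion would then be accumulated on these samples instead of
    # the full dataset, trading a small bias for much less compute.
    with torch.no_grad():
        losses = F.cross_entropy(model(images), labels, reduction="none")
    k = max(1, int(keep_ratio * images.size(0)))
    hard_idx = losses.topk(k).indices
    return images[hard_idx], labels[hard_idx]
```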
Bibliographic record ID
Source identifier type: NCID
Source identifier: AN10438399
Bibliographic information: IPSJ SIG Technical Reports: Audio Visual and Multimedia Information Processing (AVM)
Vol. 2020-AVM-108, No. 10, pp. 1-6, issued 2020-02-20
ISSN
Source identifier type: ISSN
Source identifier: 2188-8582
Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)

Cite as

Jianfeng, Xu; Kazuyuki, Tasaka, 2020: A Study on Sparsity Learning for Neural Network Acceleration. 情報処理学会, pp. 1–6.
