Scaling Private Deep Learning with Low-rank and Sparse Gradients

https://ipsj.ixsq.nii.ac.jp/records/228571
Name / File: IPSJ-TOD1604003.pdf (1.1 MB)
Available for download from October 19, 2025.
License: Copyright (c) 2023 by the Information Processing Society of Japan
Price: Non-member: ¥0, IPSJ member: ¥0, DBS member: ¥0, IFAT member: ¥0, DLIB member: ¥0
Item type: Trans(1)
Release date: 2023-10-19
Title (en): Scaling Private Deep Learning with Low-rank and Sparse Gradients
Language: eng
Keywords (Subject scheme: Other): [Research Paper] deep learning, differential privacy, stochastic gradient descent
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Resource type: journal article
Author affiliations (en):
Graduate School of Information Sciences and Technology, Osaka University
LINE Corporation
LINE Corporation
Graduate School of Information Sciences and Technology, Osaka University
Graduate School of Information Sciences and Technology, Osaka University
Authors (en): Ryuichi Ito; Seng Pei Liew; Tsubasa Takahashi; Yuya Sasaki; Makoto Onizuka
Abstract (Description type: Other): Applying Differentially Private Stochastic Gradient Descent (DPSGD) to training modern, large-scale neural networks such as transformer-based models is a challenging task, as the magnitude of noise added to the gradients at each iteration scales with the model dimension, hindering the learning capability significantly. We propose a unified framework, LSG, that fully exploits the low-rank and sparse structure of neural networks to reduce the dimension of gradient updates, and hence alleviate the negative impacts of DPSGD. The gradient updates are first approximated with a pair of low-rank matrices. Then, a novel strategy is utilized to sparsify the gradients, resulting in low-dimensional, less noisy updates that are yet capable of retaining the performance of neural networks. Empirical evaluation on natural language processing and computer vision tasks shows that our method outperforms other state-of-the-art baselines.
------------------------------
This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.31 (2023) (online).
------------------------------
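To make the idea in the abstract concrete, here is a minimal, self-contained sketch of the kind of update it describes: a per-example gradient matrix is first approximated by a pair of low-rank factors, the factors are sparsified, and only this reduced representation is clipped and perturbed with Gaussian noise, DPSGD-style. This is not the authors' LSG implementation; the function and parameter names (`lsg_private_update`, `rank`, `sparsity`, `clip_norm`, `noise_multiplier`) and all algorithmic details are illustrative assumptions.

```python
# Illustrative sketch of a DPSGD-style update with low-rank + sparse gradient
# compression, loosely following the idea described in the abstract.
# NOT the paper's LSG implementation; names and details are assumptions.
import numpy as np

def lsg_private_update(grad, rank=4, sparsity=0.1, clip_norm=1.0,
                       noise_multiplier=1.0, rng=None):
    """Return a privatized, low-dimensional surrogate of a gradient matrix.

    grad : (m, n) gradient of one weight matrix for a single example.
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1) Low-rank approximation: grad ~= U @ Vt with U: (m, r), Vt: (r, n).
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    U, Vt = U[:, :rank] * s[:rank], Vt[:rank, :]

    # 2) Sparsify the factors: keep only the largest-magnitude entries.
    def sparsify(x, frac):
        k = max(1, int(frac * x.size))
        thresh = np.partition(np.abs(x), x.size - k, axis=None)[x.size - k]
        return np.where(np.abs(x) >= thresh, x, 0.0)

    U, Vt = sparsify(U, sparsity), sparsify(Vt, sparsity)

    # 3) Clip the (much smaller) update so its L2 norm is at most clip_norm.
    flat = np.concatenate([U.ravel(), Vt.ravel()])
    flat *= min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))

    # 4) Add Gaussian noise calibrated to the clipped, reduced representation.
    flat += rng.normal(0.0, noise_multiplier * clip_norm, size=flat.shape)

    # Reassemble the noisy low-rank factors and return a dense update.
    U_noisy = flat[:U.size].reshape(U.shape)
    Vt_noisy = flat[U.size:].reshape(Vt.shape)
    return U_noisy @ Vt_noisy
```

The point of the low-rank step is that the noise is added to roughly r(m+n) coordinates instead of m·n, which is why such a scheme can scale DPSGD to large models; how the rank, sparsity pattern, and privacy accounting are actually chosen is specified in the paper, not here.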
Bibliographic record ID (NCID): AA11464847
Bibliographic information: IPSJ Transactions on Databases (TOD), Vol. 16, No. 4, issued 2023-10-19
ISSN: 1882-7799
Publisher (ja): Information Processing Society of Japan (情報処理学会)