Measuring Local and Shuffled Privacy of Gradient Randomized Response

https://ipsj.ixsq.nii.ac.jp/records/240290
File: IPSJ-TOD1704004.pdf (931.0 kB)
Available for download from October 22, 2026.
License: Copyright (c) 2024 by the Information Processing Society of Japan
Price: Non-member: ¥0, IPSJ member: ¥0, DBS member: ¥0, IFAT member: ¥0, DLIB member: ¥0
Item type: Trans(1)
Release date: 2024-10-22
Title (en): Measuring Local and Shuffled Privacy of Gradient Randomized Response
Language: eng
Keywords (subject scheme: Other): [Research Paper] local differential privacy, federated learning, privacy measurement
Resource type: journal article (identifier: http://purl.org/coar/resource_type/c_6501)
Author affiliations: Ochanomizu University; LY Corporation; LY Corporation; Ochanomizu University
Author names: Marin Matsumoto, Tsubasa Takahashi, Seng Pei Liew, Masato Oguchi
Abstract (description type: Other): Local differential privacy (LDP) provides a strong privacy guarantee in a distributed setting such as federated learning (FL). When a central curator deploys local randomizers satisfying ε0-LDP, how can we confirm and measure the given privacy guarantees at clients? To answer this question, we introduce an empirical privacy test for FL clients that measures lower bounds of LDP, which gives us an empirical ε0 and the probability that two gradients can be distinguished. To audit the given privacy guarantees (i.e., ε0), we first discover a worst-case scenario that reaches the theoretical upper bound of LDP, which is essential to empirically materialize the given privacy guarantees. We further instantiate several adversaries in FL under LDP to observe empirical LDP at various attack surfaces. The empirical privacy test with those adversary instantiations enables FL clients to understand more intuitively how the given privacy level protects them, and to verify that mechanisms claiming ε0-LDP provide equivalent privacy protection. We also present numerical observations of the measured privacy in these adversarial settings, and show that the randomization algorithm LDP-SGD is vulnerable to gradient manipulation and to a maliciously well-manipulated model. We further discuss employing a shuffler to measure empirical privacy in a collaborative way, as well as measuring the privacy of the shuffled model. Our observations suggest that the theoretical ε in the shuffle model has room for improvement.
------------------------------
This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.32 (2024) (online).
------------------------------
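To make the abstract concrete for readers unfamiliar with DP auditing, the following is a minimal, self-contained sketch of the kind of empirical LDP lower-bound test it describes. It is not the paper's algorithm: the binary randomized-response mechanism, the simple distinguishing attack, and the ln((1 - FNR) / FPR) estimator used here are generic auditing ingredients assumed purely for illustration.

# Hedged illustration (not the paper's method): estimate an empirical lower
# bound on epsilon_0 by attacking a binary randomized-response mechanism.
import math
import random

def randomized_response(bit: int, eps0: float) -> int:
    # eps0-LDP binary randomized response: keep the bit w.p. e^eps0 / (e^eps0 + 1).
    p_keep = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def empirical_eps(eps0: float, trials: int = 200_000) -> float:
    # Attack: guess input 1 iff the released output is 1.
    # FPR = Pr[guess 1 | input 0], FNR = Pr[guess 0 | input 1];
    # a standard DP-auditing lower bound is ln((1 - FNR) / FPR).
    fp = sum(randomized_response(0, eps0) == 1 for _ in range(trials))
    fn = sum(randomized_response(1, eps0) == 0 for _ in range(trials))
    fpr, fnr = fp / trials, fn / trials
    return math.log((1.0 - fnr) / fpr)

if __name__ == "__main__":
    for eps0 in (0.5, 1.0, 2.0):
        print(f"theoretical eps0 = {eps0:.1f}, empirical lower bound ~ {empirical_eps(eps0):.3f}")

For this toy mechanism the released bit is itself the worst-case distinguisher, so the empirical estimate converges to the theoretical ε0; the "worst-case scenario" discussed in the abstract plays the analogous role for the gradient randomizers used in FL.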
Bibliographic record ID (NCID): AA11464847
Bibliographic information: 情報処理学会論文誌データベース(TOD) (IPSJ Transactions on Databases), Vol. 17, No. 4, issue date 2024-10-22
ISSN: 1882-7799
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)
Versions: Ver.1 (2025-01-19)