Measuring Local and Shuffled Privacy of Gradient Randomized Response
https://ipsj.ixsq.nii.ac.jp/records/240290
| Name / File | License | Action |
|---|---|---|
| Available for download from October 22, 2026 | Copyright (c) 2024 by the Information Processing Society of Japan | Non-members: ¥0; IPSJ members: ¥0; DBS members: ¥0; IFAT members: ¥0; DLIB members: ¥0 |
| Item type | Trans(1) |
|---|---|
| Release date | 2024-10-22 |
| Title | Measuring Local and Shuffled Privacy of Gradient Randomized Response |
| Title language | en |
| Language | eng |
| Keywords | [Research Paper] local differential privacy, federated learning, privacy measurement |
| Resource type identifier | http://purl.org/coar/resource_type/c_6501 |
| Resource type | journal article |
| Author affiliations | Ochanomizu University; LY Corporation; LY Corporation; Ochanomizu University |
| Authors | Marin Matsumoto; Tsubasa Takahashi; Seng Pei Liew; Masato Oguchi |
| Abstract | Local differential privacy (LDP) provides a strong privacy guarantee in a distributed setting such as federated learning (FL). When a central curator deploys local randomizers satisfying ε0-LDP, how can we confirm and measure the given privacy guarantees at the clients? To answer this question, we introduce an empirical privacy test for FL clients that measures lower bounds of LDP, which gives us an empirical ε0 and the probability that two gradients can be distinguished. To audit the given privacy guarantees (i.e., ε0), we first discover a worst-case scenario that reaches the theoretical upper bound of LDP, which is essential to empirically materialize the given privacy guarantees. We further instantiate several adversaries in FL under LDP to observe empirical LDP at various attack surfaces. The empirical privacy test with those adversary instantiations enables FL clients to understand more intuitively how the given privacy level protects them and to verify that mechanisms claiming ε0-LDP provide equivalent privacy protection. We also present numerical observations of the measured privacy in these adversarial settings and show that the randomization algorithm LDP-SGD is vulnerable to gradient manipulation and to a maliciously well-manipulated model. We further discuss employing a shuffler to measure empirical privacy collaboratively, as well as measuring the privacy of the shuffled model. Our observations suggest that the theoretical ε in the shuffle model has room for improvement. |
| Note | This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing, Vol. 32 (2024) (online). |
| Bibliographic record ID (NCID) | AA11464847 |
| Bibliographic information | 情報処理学会論文誌データベース (IPSJ Transactions on Databases, TOD), Vol. 17, No. 4, published 2024-10-22 |
| ISSN | 1882-7799 |
| Publisher | Information Processing Society of Japan |
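
The abstract above describes auditing a claimed ε0-LDP guarantee by measuring how reliably two inputs can be distinguished from the randomizer's output, which yields an empirical lower bound on ε0. The sketch below illustrates that general idea only, under simplifying assumptions: it uses a plain binary randomized response mechanism and a fixed distinguishing rule rather than the paper's gradient mechanisms, adversary instantiations, or worst-case construction, and all function names and parameters are illustrative.

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): estimate an empirical
# lower bound on epsilon_0 for a binary eps-LDP randomized response mechanism
# by trying to distinguish the two fixed inputs x = 0 and x = 1.

def rr_outputs(x: int, eps: float, n: int, rng: np.random.Generator) -> np.ndarray:
    """Vectorized eps-LDP randomized response over {0, 1}: report the true bit
    with probability e^eps / (1 + e^eps), otherwise flip it."""
    p_true = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(n) < p_true
    return np.where(keep, x, 1 - x)

def empirical_epsilon(eps: float, n: int = 200_000, seed: int = 0) -> float:
    """Run the mechanism on x0 = 0 and x1 = 1 and compare output distributions.

    The distinguishing attack guesses x1 whenever the output is 1.  For any
    eps-LDP mechanism the ratio of true-positive rate to false-positive rate is
    at most e^eps, so log(TPR / FPR) is an empirical lower bound on eps.
    """
    rng = np.random.default_rng(seed)
    out0 = rr_outputs(0, eps, n, rng)
    out1 = rr_outputs(1, eps, n, rng)
    fpr = max(out0.mean(), 1.0 / n)  # P[guess x1 | true input x0], clipped away from 0
    tpr = max(out1.mean(), 1.0 / n)  # P[guess x1 | true input x1]
    return float(np.log(tpr / fpr))

if __name__ == "__main__":
    for eps in (0.5, 1.0, 2.0):
        print(f"claimed eps0 = {eps:.1f}, empirical lower bound ~ {empirical_epsilon(eps):.3f}")
```

With enough trials the estimate approaches the claimed ε0 for this mechanism because the distinguishing rule is optimal for binary randomized response; in general, obtaining a tight empirical bound requires testing worst-case inputs rather than arbitrary ones, which is the role of the worst-case scenario discussed in the abstract.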