Item type | SIG Technical Reports(1)
Publication date | 2024-02-25
Title (ja) | 閾値操作による公平な分類アルゴリズムの公平性に対する毒攻撃
Title (en) | Poisoning Attack on Fairness of Fair Classification Algorithm through Threshold Control
Resource type | technical report
Resource type identifier | http://purl.org/coar/resource_type/c_18gh
Author affiliation (ja) | 筑波大学
Author affiliation (ja) | 筑波大学 / 理化学研究所
Author affiliation (ja) | 東京工業大学 / 理化学研究所
Author affiliation (ja) | 筑波大学 / 理化学研究所
Author affiliation (en) | University of Tsukuba
Author affiliation (en) | University of Tsukuba / RIKEN
Author affiliation (en) | Tokyo Institute of Technology / RIKEN
Author affiliation (en) | University of Tsukuba / RIKEN
Author name (ja) | ダイ, ショウテン
Author name (ja) | 秋本, 洋平
Author name (ja) | 佐久間, 淳
Author name (ja) | 福地, 一斗
Author name (en) | Shengtian Dai
Author name (en) | Youhei Akimoto
Author name (en) | Jun Sakuma
Author name (en) | Kazuto Fukuchi
Abstract (description type: Other) | The ethical issues of artificial intelligence have become more severe as machine learning is widely used in many fields. Recent developments in machine learning enable learning algorithms to mitigate the ethical issue of fairness, i.e., the problem of output that discriminates against sensitive attributes such as gender and race. However, a poisoning attack, originally designed to harm the accuracy of models, can also introduce unfair bias into them. Our research investigates attacks that worsen the fairness of fair learning models and examines how such models behave under a fairness attack. Specifically, we construct an attack strategy, TAF, that targets the fairness of a fair learning model by controlling the thresholds involved in the model, and we elucidate its behavior. The experimental results demonstrate that TAF harms the fairness of the fair learning model more than the attack methods proposed in existing studies.
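The record gives no detail of TAF beyond the phrase "controlling the thresholds involved in the model," so the following is only a minimal sketch of that general idea, not the paper's method: it assumes a score-based binary classifier with group-specific decision thresholds and measures fairness by the demographic parity difference; demographic_parity_gap and every name in it are hypothetical.

import numpy as np

def demographic_parity_gap(scores, groups, thresholds):
    # Absolute difference in positive-prediction rates between the two
    # sensitive groups (demographic parity difference).
    #   scores:     model scores in [0, 1]
    #   groups:     0/1 sensitive-attribute values
    #   thresholds: dict mapping group -> decision threshold
    rates = []
    for g in (0, 1):
        mask = groups == g
        rates.append((scores[mask] >= thresholds[g]).mean())
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)

# Equal thresholds: both groups are accepted at roughly the same rate,
# so the gap is near zero.
print(demographic_parity_gap(scores, groups, {0: 0.5, 1: 0.5}))

# Raising one group's threshold widens the gap, i.e. worsens fairness;
# a poisoning attack that shifts such thresholds degrades fairness in
# the same sense.
print(demographic_parity_gap(scores, groups, {0: 0.5, 1: 0.7}))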
Bibliographic record ID (NCID) | AA11131797
Bibliographic information | IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), Vol. 2024-CVIM-237, No. 39, pp. 1-8, issued 2024-02-25
ISSN | 2188-8701
Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher (ja) | 情報処理学会 (Information Processing Society of Japan)