

Poisoning Attack on Fairness of Fair Classification Algorithm through Threshold Control

https://ipsj.ixsq.nii.ac.jp/records/232730
1ce8714e-b7ce-49dc-8a07-d87cd00bb39d
Name / File: IPSJ-CVIM24237039.pdf (1.1 MB)
License: Copyright (c) 2024 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG.
Fee: CVIM: Member: ¥0 / DLIB: Member: ¥0
Item type: SIG Technical Reports(1)
Publication date: 2024-02-25
Title (ja): 閾値操作による公平な分類アルゴリズムの公平性に対する毒攻撃
Title (en): Poisoning Attack on Fairness of Fair Classification Algorithm through Threshold Control.
Language: eng
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliations:
  1. University of Tsukuba
  2. University of Tsukuba / RIKEN
  3. Tokyo Institute of Technology / RIKEN
  4. University of Tsukuba / RIKEN
Author names (ja): ダイ, ショウテン; 秋本, 洋平; 佐久間, 淳; 福地, 一斗
Author names (en): Shengtian, Dai; Youhei, Akimoto; Jun, Sakuma; Kazuto, Fukuchi
Abstract:
The ethical issues of artificial intelligence have become more severe as machine learning is widely used in many fields. Recent developments in machine learning enable learning algorithms to mitigate the ethical issue of fairness: the problem of output that discriminates with respect to sensitive attributes such as gender and race. However, a poisoning attack, originally designed to harm the accuracy of a model, can also introduce unfair bias into it. Our research investigates attacks that worsen the fairness of fair learning models and examines how these models behave under such attacks. Specifically, we construct an attack strategy, TAF, that targets the fairness of a fair learning model by controlling the thresholds involved in the model, and we elucidate its behavior. The experimental results demonstrate that TAF does more harm to the fairness of the fair learning model than the attack methods proposed in existing studies.
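
The report itself includes no code; the following is a minimal sketch of the threshold mechanism the abstract refers to. It assumes a hypothetical score-based classifier that enforces demographic parity via group-specific decision thresholds, and it emulates only the effect a TAF-style attack aims for (drifted thresholds). The actual TAF strategy, which induces this drift through data poisoning, is described in the paper; all numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: prediction scores for two demographic groups.
n = 10_000
group = rng.integers(0, 2, size=n)                    # sensitive attribute a in {0, 1}
scores = rng.normal(0.5 + 0.1 * group, 0.2, size=n)   # group 1 scores run higher

def dp_gap(scores, group, t0, t1):
    """Demographic-parity gap |P(yhat=1 | a=0) - P(yhat=1 | a=1)|
    for a classifier with group-specific thresholds t0, t1."""
    yhat = scores >= np.where(group == 1, t1, t0)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

# A fair post-processing step would choose thresholds that equalize the
# groups' positive rates; here we match group 1's rate to group 0's.
t0 = 0.5
rate0 = (scores[group == 0] >= t0).mean()
t1 = np.quantile(scores[group == 1], 1 - rate0)
print(f"gap with fair thresholds:    {dp_gap(scores, group, t0, t1):.4f}")

# A TAF-style attack poisons the training data so the learned thresholds
# drift; we only emulate the resulting shift directly.
print(f"gap with shifted thresholds: {dp_gap(scores, group, t0, t1 + 0.15):.4f}")
```

Under these assumptions the first gap is near zero by construction and the second grows with the size of the shift, which is the qualitative behavior a threshold-control attack exploits.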
Bibliographic record ID: NCID AA11131797
Bibliographic information: 研究報告コンピュータビジョンとイメージメディア(CVIM) (IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)), Vol. 2024-CVIM-237, No. 39, pp. 1-8, issued 2024-02-25
ISSN: 2188-8701
Notice
SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc.
Publisher: 情報処理学会 (Information Processing Society of Japan)

Versions

Ver.1 2025-01-19 10:20:24.049735


Export

The record can be exported in the formats below; a harvesting sketch follows the list.

OAI-PMH
  • OAI-PMH JPCOAR
  • OAI-PMH DublinCore
  • OAI-PMH DDI
Other Formats
  • JSON
  • BIBTEX
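
The OAI-PMH formats listed above can be harvested over plain HTTP. A minimal sketch follows, assuming the conventional WEKO3 endpoint path (/oai) and a hypothetical OAI identifier derived from the record number in the URL above; neither is confirmed by this page, so verify both before relying on them.

```python
import urllib.parse
import urllib.request

# Assumed endpoint: WEKO3 repositories conventionally expose OAI-PMH at /oai.
BASE = "https://ipsj.ixsq.nii.ac.jp/oai"

# Hypothetical OAI identifier built from the record number (232730);
# confirm the real one with a ListIdentifiers request first.
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",  # Dublin Core; the JPCOAR prefix name varies by instance
    "identifier": "oai:ipsj.ixsq.nii.ac.jp:00232730",
}
url = BASE + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8")[:800])  # first part of the returned XML
```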
