Towards Adversarial Robustness of Learning in the Frequency Domain

https://ipsj.ixsq.nii.ac.jp/records/209847
File: IPSJ-CVIM21225049.pdf (2.1 MB)
License: Copyright (c) 2021 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG.
Price: CVIM members: ¥0 / DLIB members: ¥0
Item type: SIG Technical Reports (1)
Release date: 2021-02-25
Title: Towards Adversarial Robustness of Learning in the Frequency Domain (language: en)
Language: eng
Keywords: subject scheme: Other; subject: Session 6-2
Resource type: technical report (resource type identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliation: Department of Information and Communication Engineering, The University of Tokyo (both authors)
Authors: Subhajit Chaudhury; Toshihiko Yamasaki
Abstract (description type: Other):
Adversarial attacks study the effect of noise on the robustness of Convolutional Neural Networks (CNNs). Typically, these works have shown that CNNs can be easily fooled simply by adding small, imperceptible noise in the RGB color space. In this paper, we study the effect of adversarial attacks in the frequency domain and show that such attacks are rendered weaker by frequency-domain transformations. We argue that learning CNNs in the frequency domain disentangles the frequencies corresponding to semantic and adversarial features. Due to this property, CNNs learned in the frequency domain can selectively put less focus on the adversarial features, resulting in robust performance in the presence of adversarial noise. We performed experiments on multiple datasets and show that CNNs trained on Discrete Cosine Transform (DCT) inputs are significantly more robust to many varieties of adversarial noise than standard CNNs trained on RGB/grayscale input. From this result, we urge the research community to explore frequency-domain learning as a potentially novel way to improve neural network robustness to test-time noise.
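
The record itself contains no code, but the core idea in the abstract, transforming images to the frequency domain before training an otherwise standard CNN, can be illustrated. The minimal Python sketch below (NumPy and SciPy assumed) applies a per-channel 2-D DCT to each image; the function name to_dct_input and the choice of a full-image orthonormal DCT-II are illustrative assumptions, since the report may use a different layout (e.g., blockwise DCT as in JPEG).

    # Minimal sketch, NOT the authors' exact pipeline: replace each image
    # channel with its 2-D DCT-II coefficients before feeding it to a CNN.
    import numpy as np
    from scipy.fft import dctn

    def to_dct_input(image: np.ndarray) -> np.ndarray:
        """Map an HxWxC image in [0, 1] to per-channel 2-D DCT-II coefficients.

        The orthonormal DCT is a linear, invertible change of basis, so no
        information is lost; the CNN simply sees the image in a basis where
        semantic content concentrates in the low frequencies.
        """
        coeffs = np.stack(
            [dctn(image[..., c], type=2, norm="ortho")
             for c in range(image.shape[-1])],
            axis=-1,
        )
        return coeffs.astype(np.float32)

    # Usage: preprocess a batch, then train any standard CNN on the result.
    batch = np.random.rand(16, 32, 32, 3)              # stand-in for CIFAR-like images
    dct_batch = np.stack([to_dct_input(x) for x in batch])
    print(dct_batch.shape)                             # (16, 32, 32, 3)

Because the orthonormal DCT is invertible, this preprocessing changes the basis the network learns in rather than the information it receives; the robustness claimed in the abstract rests on how a CNN trained in that basis allocates its attention across frequency bands.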
Bibliographic record ID: NCID AA11131797
Bibliographic information: IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), Vol. 2021-CVIM-225, No. 49, pp. 1-5, issued 2021-02-25
ISSN: 2188-8701
Notice: SIG Technical Reports are nonrefereed and hence may later appear in journals, conferences, symposia, etc.
Publisher: 情報処理学会 (Information Processing Society of Japan) (language: ja)