| Item type | SIG Technical Reports(1) |
| Date of issue | 2021-02-25 |
| Title | Towards Adversarial Robustness of Learning in the Frequency Domain |
| Language | en |
| Keyword | Session 6-2 (subject scheme: Other) |
| Resource type | technical report (http://purl.org/coar/resource_type/c_18gh) |
| Author affiliation | Department of Information and Communication Engineering, The University of Tokyo |
| Author affiliation | Department of Information and Communication Engineering, The University of Tokyo |
| Author affiliation (en) | Department of Information and Communication Engineering, The University of Tokyo |
| Author affiliation (en) | Department of Information and Communication Engineering, The University of Tokyo |
| Author | Subhajit Chaudhury; Toshihiko Yamasaki |
| Author (en) | Subhajit Chaudhury; Toshihiko Yamasaki |
| Abstract | Adversarial attacks study the effect of noise on the robustness of Convolutional Neural Networks (CNNs). Such work has shown that CNNs can be fooled simply by adding small, imperceptible noise in the RGB color space. In this paper, we study the effect of adversarial attacks in the frequency domain and show that such attacks are weakened by frequency-domain transformations. We argue that learning CNNs in the frequency domain disentangles the frequencies corresponding to semantic and adversarial features. Owing to this property, CNNs learned in the frequency domain can selectively put less focus on the adversarial features, resulting in robust performance in the presence of adversarial noise. We performed experiments on multiple datasets and show that CNNs trained on Discrete Cosine Transform (DCT) inputs exhibit significantly better robustness to many varieties of adversarial noise than standard CNNs trained on RGB/grayscale input. From this result, we urge the research community to explore frequency-domain learning as a potential new direction for improving neural network robustness to test-time noise. |
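The abstract describes training CNNs on DCT inputs rather than raw pixels. As a minimal sketch of what such a frequency-domain input could look like, the snippet below applies a JPEG-style blockwise 2-D DCT to a grayscale image; the paper's exact preprocessing (block size, channel handling, normalization) is not specified in this record, so all of these choices are assumptions for illustration only.

```python
import numpy as np
from scipy.fftpack import dct


def dct2(block):
    # 2-D type-II DCT with orthonormal scaling, applied row- then column-wise.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")


# Hypothetical 32x32 grayscale "image" standing in for a real input.
img = np.random.rand(32, 32).astype(np.float32)

# Split into non-overlapping 8x8 blocks (JPEG-style): a 4x4 grid of blocks.
blocks = img.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3)  # shape (4, 4, 8, 8)

# DCT each block; the result is the frequency-domain tensor a CNN would consume.
dct_input = np.stack([dct2(b) for b in blocks.reshape(-1, 8, 8)]).reshape(4, 4, 8, 8)

print(dct_input.shape)  # (4, 4, 8, 8)
```

With `norm="ortho"` the transform is invertible with the matching inverse DCT, so no image information is lost; the claimed robustness gain comes from how the CNN weights frequencies, not from discarding them.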
| Bibliographic record ID | NCID: AA11131797 |
| Bibliographic information | IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), Vol. 2021-CVIM-225, No. 49, pp. 1-5, issued 2021-02-25 |
| ISSN | 2188-8701 |
| Notice | SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc. |
| Publisher | 情報処理学会 (Information Processing Society of Japan) |