Item type | Symposium(1)
Publication date | 2019-10-14
Title | A Label-Based System for Detecting Adversarial Examples by Using Low Pass Filters
Language | en
Keywords |
Subject scheme | Other
Subject | Deep Neural Networks, Adversarial Examples, Low pass filter
Resource type identifier | http://purl.org/coar/resource_type/c_5794
Resource type | conference paper
Author affiliation | 情報セキュリティ大学院大学 / University of Danang
Author affiliation | 情報セキュリティ大学院大学
Author affiliation | 情報セキュリティ大学院大学
Author affiliation (en) | Institute of Information Security / University of Danang
Author affiliation (en) | Institute of Information Security
Author affiliation (en) | Institute of Information Security
Author names | ダンデユイ, タン; 近藤, 大生; 松井, 俊浩
Author names (en) | Thang, Dang Duy; Taisei, Kondo; Toshihiro, Matsui
Abstract |
Description type | Other
Description | Along with significant improvements in deep neural networks, image classification tasks are solved with extremely high accuracy rates. However, deep neural networks have recently been found vulnerable to well-designed input samples called adversarial examples. Such an issue causes deep neural networks to misclassify adversarial examples whose perturbations are imperceptible to humans. Distinguishing adversarial images from legitimate images is a tough challenge. To address this problem, in this paper we propose a new automatic classification system for adversarial examples. Our proposed system can distinguish almost all adversarial samples from legitimate images in an end-to-end manner without human intervention. We exploit the important role of low frequencies in adversarial samples and, based on this observation, propose a label-based method for detecting malicious samples. We evaluate our method on a variety of standard benchmark datasets including MNIST and ImageNet. Our method achieved detection rates of more than 96% in many settings.
Abstract (en) |
Description type | Other
Description | Along with significant improvements in deep neural networks, image classification tasks are solved with extremely high accuracy rates. However, deep neural networks have recently been found vulnerable to well-designed input samples called adversarial examples. Such an issue causes deep neural networks to misclassify adversarial examples whose perturbations are imperceptible to humans. Distinguishing adversarial images from legitimate images is a tough challenge. To address this problem, in this paper we propose a new automatic classification system for adversarial examples. Our proposed system can distinguish almost all adversarial samples from legitimate images in an end-to-end manner without human intervention. We exploit the important role of low frequencies in adversarial samples and, based on this observation, propose a label-based method for detecting malicious samples. We evaluate our method on a variety of standard benchmark datasets including MNIST and ImageNet. Our method achieved detection rates of more than 96% in many settings.
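Note | The abstract describes a label-based detector built on low-pass filtering. Below is a minimal, hypothetical Python sketch of that general idea, not the authors' exact pipeline: classify an image, classify its low-pass-filtered version, and flag a label mismatch as a likely adversarial example. The choice of a Gaussian blur, the sigma value, and the user-supplied classify callable are assumptions made only for illustration.

# Minimal sketch of label-based adversarial-example detection with a
# low-pass filter (illustrative assumptions, not the paper's exact setup).
import numpy as np
from scipy.ndimage import gaussian_filter

def is_adversarial(image: np.ndarray, classify, sigma: float = 1.0) -> bool:
    """Return True if the predicted label changes after low-pass filtering.

    image    : H x W or H x W x C array of pixel values
    classify : callable mapping an image array to a predicted class label
    sigma    : strength of the Gaussian blur used as the low-pass filter
    """
    # Blur only the two spatial axes; leave a channel axis, if any, untouched.
    per_axis_sigma = (sigma, sigma) + (0,) * (image.ndim - 2)
    filtered = gaussian_filter(image, sigma=per_axis_sigma)

    # Adversarial perturbations tend to be fragile under low-pass filtering,
    # so a label change between raw and filtered inputs signals a suspect input.
    return classify(image) != classify(filtered)

For example, classify could wrap a trained model's argmax prediction (e.g. lambda x: int(model.predict(x[None]).argmax()) for a hypothetical Keras model); because the decision is a simple label comparison, the detector needs no manually tuned score threshold.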
Bibliographic record ID |
Identifier type | NCID
Related identifier | ISSN 1882-0840
Bibliographic information | Computer Security Symposium 2019 Proceedings (コンピュータセキュリティシンポジウム2019論文集), Vol. 2019, pp. 1356-1363, issued 2019-10-14
Publisher language | ja
Publisher | Information Processing Society of Japan (情報処理学会)