Item type |
Symposium(1) |
Publication date |
2021-06-23 |
Title |
Improving The Decision-Based Adversarial Boundary Attack by Square Masked Movement |
Language |
en |
Keywords |
Subject scheme |
Other |
Subject |
AI |
Resource type identifier |
http://purl.org/coar/resource_type/c_5794 |
Resource type |
conference paper |
Author affiliation |
東京大学 (The University of Tokyo) |
Author affiliation |
東京大学 (The University of Tokyo) |
Author affiliation |
東京大学 (The University of Tokyo) |
Author affiliation |
東京大学 (The University of Tokyo) |
Author name |
Van, Sang Tran
Phuong, Thao Tran
山口, 利恵 (Yamaguchi, Rie)
中田, 登志之 (Nakata, Toshiyuki)
|
Abstract |
|
|
Description type |
Other |
|
Description |
An adversarial image attack is a well-known attack methodology in the image recognition field in which input images are purposely modified so that they appear unchanged to human perception but fool image recognition models into classifying them incorrectly. Recently, adversarial attacks have drawn much attention from researchers due to their ability to fool even state-of-the-art and commercial image recognition models. Researching adversarial attacks is crucial for understanding the potential risk and preparing the necessary defenses in advance. In this paper, we investigated an improvement to the Boundary Attack algorithm because of its effectiveness, its flexibility, and the absence of a direct protection mechanism against it. In its randomization step, the original Boundary Attack algorithm samples the movement vector from the whole image space. In this research, we improved the algorithm by applying a square mask to that space. We applied the method to the CIFAR10 dataset and successfully reduced the distance between the adversarial and the original images without increasing the number of queries. Our work suggests a new attack vector that can exploit prior knowledge of the model to reduce the distance without affecting the query count. |
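
As a rough illustration of the square-masked randomization step described in the abstract, the Python sketch below draws the attack's random movement vector over the whole image and then zeroes it outside a randomly placed square region. This is a minimal sketch, not the authors' implementation: the helper name square_masked_step, the Gaussian step distribution, the uniform random mask placement, and the mask_size parameter are all assumptions made for illustration.

import numpy as np

def square_masked_step(image_shape, mask_size, rng=None):
    # Hypothetical helper: the original Boundary Attack samples its random
    # step from the whole image space; this variant keeps the step only
    # inside a randomly placed mask_size x mask_size square.
    rng = rng or np.random.default_rng()
    h, w, c = image_shape
    # Gaussian perturbation over the full image, as in the original attack.
    step = rng.standard_normal((h, w, c))
    # Choose a random position for the square mask.
    top = rng.integers(0, h - mask_size + 1)
    left = rng.integers(0, w - mask_size + 1)
    # Zero out the step everywhere outside the square.
    mask = np.zeros((h, w, 1), dtype=step.dtype)
    mask[top:top + mask_size, left:left + mask_size, :] = 1.0
    return step * mask

# Example: one masked movement vector for a CIFAR10-sized image (32x32x3).
delta = square_masked_step((32, 32, 3), mask_size=8)

In the full attack loop, such a vector would stand in for the unrestricted random step: it is added to the current adversarial image, the result is projected back toward the original image, and the candidate is kept only if the model still misclassifies it, so the query count per step is unchanged while the perturbation is concentrated in a small region.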
Bibliographic information |
Proceedings of the Multimedia, Distributed, Cooperative and Mobile Symposium (DICOMO2021),
Vol. 2021,
No. 1,
p. 466-471,
Issue date 2021-06-23
|
Publisher |
情報処理学会 (Information Processing Society of Japan) |
Language |
ja |