| Item type | Symposium(1) |
| Date of issue | 2024-08-21 |
| Title | Efficient Quantization Methods Against Adversarial Attacks on FPGA |
| Language | en |
| Keyword (scheme: Other) | Security |
| Resource type | conference paper (http://purl.org/coar/resource_type/c_5794) |
| Author affiliations | Waseda University; Yokohama National University; Waseda University |
| Authors | Silu Liu; Heming Sun; Shinji Kimura |
| Abstract (type: Other) | Enhancing the security of neural networks is important, since white-box attacks can fetch the parameters of a neural network and generate adversarial examples that mislead its outputs. As a simple yet efficient defense, adversarial training (AT) improves the robustness of neural networks against adversarial attacks. However, most AT methods are based on floating-point arithmetic and are therefore not friendly to hardware accelerators with integer quantization on FPGA. In this work, hybrid quantization is incorporated into AT methods in two ways. 1) AT is performed with post-training quantization (PTQ) to calibrate the activation range. 2) AT is performed first, quantization-aware training (QAT) with clean and adversarial examples is performed second, and then PTQ is applied. The results show that accuracy can be increased by 10.6% compared with previous work on the image classification task. Moreover, a quantized CNN model produced by the proposed method is suitable for FPGA, and its FPGA accelerator reaches about 7.3× higher throughput per power compared with a GPU. A real-time vehicle detection on FPGA visually demonstrates the practicality and effectiveness of the proposed method. |
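The abstract's first step, calibrating an activation range for post-training quantization, can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the function names and the symmetric int8 scheme are assumptions.

```python
import numpy as np

# Minimal PTQ sketch (illustrative, not the paper's code): pick a scale
# from observed activations so the range maps onto symmetric int8, then
# quantize and dequantize. In the paper's flow, the calibration data
# would include both clean and adversarial activations.

def calibrate_scale(activations, num_bits=8):
    """Scale chosen so the largest observed |activation| maps to qmax."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    return float(np.max(np.abs(activations))) / qmax

def quantize(x, scale, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Stand-in calibration activations (hypothetical values).
acts = np.array([-1.5, -0.3, 0.0, 0.7, 2.0], dtype=np.float32)
scale = calibrate_scale(acts)
q = quantize(acts, scale)
recon = dequantize(q, scale)
```

Each dequantized value then differs from the original by at most half a quantization step, which is the property PTQ calibration is trying to guarantee over the observed activation range.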
| Bibliographic information | DAシンポジウム2024論文集 (Proceedings of DA Symposium 2024), Vol. 2024, pp. 236-242, issued 2024-08-21 |
| Publisher | 情報処理学会 (Information Processing Society of Japan) |