

Efficient Quantization Methods Against Adversarial Attacks on FPGA

https://ipsj.ixsq.nii.ac.jp/records/238261
File: IPSJ-DAS2024037.pdf (3.8 MB)
Available for download from August 21, 2026.
Copyright (c) 2024 by the Information Processing Society of Japan
Price: Non-members: ¥660; IPSJ members: ¥330; SLDM members: ¥0; DLIB members: ¥0
Item type: Symposium(1)
Publication date: 2024-08-21
Title: Efficient Quantization Methods Against Adversarial Attacks on FPGA
Language: eng
Keyword (subject scheme: Other): Security
Resource type: conference paper (identifier: http://purl.org/coar/resource_type/c_5794)
Author affiliations: Waseda University; Yokohama National University; Waseda University
Authors:
  • Silu, Liu
  • Heming, Sun
  • Shinji, Kimura
Abstract
Description type: Other
Description: Enhancing the security of neural networks is important, since white-box attacks can fetch the parameters of a neural network and generate adversarial examples that mislead its outputs. As a simple yet effective defense, adversarial training (AT) improves the robustness of neural networks against adversarial attacks. However, most AT methods are based on floating-point arithmetic, so they are not friendly to hardware accelerators with integer quantization on FPGAs. In this work, hybrid quantization is incorporated into AT methods in two ways: 1) AT is performed with post-training quantization (PTQ) to calibrate the activation range; 2) AT is performed first, quantization-aware training (QAT) with clean and adversarial examples is performed second, and then PTQ is applied. The results show that accuracy can be increased by 10.6% over previous work on the image classification task. In addition, a quantized CNN model produced by the proposed method is well suited to FPGAs, and its FPGA accelerator reaches about 7.3× higher throughput per watt than a GPU. A real-time vehicle detection on an FPGA visually demonstrates the practicality and effectiveness of the proposed method.
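The abstract's first scheme combines AT with PTQ calibration of the activation range. As a rough illustration of that idea, here is a minimal pure-Python sketch of symmetric post-training quantization, where the calibration set includes both clean and (toy) adversarial activations so that perturbed values are not clipped at inference time. The function names and the perturbation are illustrative assumptions, not the authors' implementation:

```python
def calibrate_scale(batches, num_bits=8):
    """Symmetric PTQ calibration: choose a scale so the observed
    activation range maps onto the signed grid [-(2**(b-1)), 2**(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(abs(v) for batch in batches for v in batch)
    return max_abs / qmax

def quantize(xs, scale, num_bits=8):
    # Round to the nearest integer level and clip to the representable range.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return [max(qmin, min(qmax, round(x / scale))) for x in xs]

def dequantize(qs, scale):
    return [q * scale for q in qs]

# Calibrate on clean *and* adversarial activations: the adversarial batch
# widens the observed range, so perturbed inputs fall inside the grid.
clean = [[0.1, -0.4, 0.9]]
adversarial = [[x + 0.3 for x in clean[0]]]  # hypothetical perturbation
scale = calibrate_scale(clean + adversarial)

restored = dequantize(quantize(clean[0], scale), scale)
max_err = max(abs(a - b) for a, b in zip(clean[0], restored))
```

With values inside the calibrated range, the round-trip error is bounded by half a quantization step (scale / 2); calibrating only on clean data would instead clip adversarially perturbed activations that exceed the clean range.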
Bibliographic information: DAシンポジウム2024論文集 (DA Symposium 2024 Proceedings)
Vol. 2024, pp. 236-242, issued 2024-08-21
Publisher (language: ja): 情報処理学会 (Information Processing Society of Japan)
Version: Ver.1, 2025-01-19 08:37:15
Powered by WEKO3

