Item type | Symposium(1)
Publication date | 2024-08-21
Title | Squeezing 8-bit Multiplier Energy with Input Segmentation in DNN Inference Accelerators
Language | en
Keywords | Approximate computing (subject scheme: Other)
Resource type | conference paper
Resource type identifier | http://purl.org/coar/resource_type/c_5794
Author affiliation | Kyoto University
Author affiliation | Kyoto University
Author affiliation | Kyoto University
Author affiliation | SUSTech
Author affiliation | Kyoto University
Authors | Mingtao Zhang; Quan Cheng; Hiromitsu Awano; Longyang Lin; Masanori Hashimoto
Abstract (description type: Other) | Approximate computing is a strategic method to reduce design complexity and energy consumption, especially in error-resilient applications. Multipliers, essential arithmetic units, are crucial in areas like deep neural networks (DNNs). However, current extensively researched 8-bit approximate multipliers often fail to maintain high accuracy across various DNN applications. This paper addresses the key limitations of multipliers with simplified structures or approximated logic and highlights the advantages of an alternative design approach using approximation at the input level. Additionally, we explore an input static segmentation strategy and propose simplified versions of approximate multipliers, aiming to offer more energy-efficient options for DNN accelerators. Experimental results show that existing 8-bit approximate multipliers cannot match the performance of commercial IP multipliers. In contrast, the proposed static segmented multipliers bridge the performance gap between IP multipliers of different quantization bit-widths, providing an improved energy-accuracy trade-off for DNN inference accelerators.
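The record itself contains no code, so as a rough illustration of the input static segmentation idea the abstract refers to, here is a minimal Python sketch of an unsigned static segment multiplier. The segment width m = 4, the zero-test on the upper bits, and the unsigned-only handling are assumptions for illustration, not the authors' proposed design.

```python
# Minimal sketch of static segment multiplication (unsigned, illustrative only).
# Segment width m, the upper-bits zero test, and unsigned-only inputs are
# assumptions; the paper's proposed multipliers may differ in all of these.

def static_segment(x: int, n: int = 8, m: int = 4) -> tuple[int, int]:
    """Select an m-bit segment of an n-bit unsigned operand.

    Returns (segment, shift): the low segment with shift 0 when the
    upper n - m bits are all zero (the low segment is then exact),
    otherwise the m most significant bits with shift n - m.
    """
    if x >> m == 0:
        return x & ((1 << m) - 1), 0
    return x >> (n - m), n - m

def ssm_multiply(a: int, b: int, n: int = 8, m: int = 4) -> int:
    """Approximate n-bit x n-bit product using a single m x m multiplier."""
    sa, shift_a = static_segment(a, n, m)
    sb, shift_b = static_segment(b, n, m)
    return (sa * sb) << (shift_a + shift_b)

if __name__ == "__main__":
    for a, b in [(200, 9), (13, 11), (250, 250)]:
        approx, exact = ssm_multiply(a, b), a * b
        print(f"{a} * {b}: exact={exact}, approx={approx}, "
              f"rel. error={abs(exact - approx) / exact:.2%}")
```

In this sketch, small operands fall back to their exact low segment, so error is confined to large operands and only one narrow multiplier is needed, which is the kind of input-level approximation the abstract contrasts with multipliers that approximate internal logic.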
Bibliographic information | DAシンポジウム2024論文集 (DA Symposium 2024 Proceedings), Vol. 2024, pp. 49-56, issued 2024-08-21
Publisher | 情報処理学会 (Information Processing Society of Japan)