| Item type | SIG Technical Reports(1) |
| Release date | 2022-10-04 |
| Title | Implementation of Edge-Cloud Cooperative CNN Inference on an IoT Platform and Its Performance Analysis |
| Language | en |
| Keyword | IoT / Edge AI |
| Resource type | technical report (http://purl.org/coar/resource_type/c_18gh) |
| Author affiliation | Kyushu University |
| Author affiliation | Kyushu University |
| Author affiliation | Kyushu University |
| Author names | Yuan Wang; Hidetomo Shibamura; Koji Inoue |
| Abstract | Three mainstream computing paradigms can be considered to meet the demands of real-time, low-energy AI execution in the IoT era: edge computing, cloud computing, and edge-cloud cooperative computing. Given the severe energy constraints at the edge, offloading part of the workload to the cloud is a promising solution, but it tends to introduce extra overhead in overall execution latency. This paper focuses on evaluating the performance overhead (computing latency) of edge-cloud cooperative CNN inference toward high-performance edge-cloud AI execution on an IoT platform. We design and implement an edge-cloud cooperative CNN inference framework targeting TensorFlow Lite. Through verification, we confirm the feasibility and accuracy of the provided implementation. For platform optimization, we identify the current bottlenecks of the target platform and its lack of support for edge-cloud CNN inference. This paper also offers suggestions for enhancing the targeted IoT platform. |
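The abstract's trade-off between edge energy savings and offloading latency can be illustrated with a small model. The sketch below is not the paper's implementation; it is a conceptual example of choosing a CNN split point that minimizes end-to-end latency, where the per-layer latencies, tensor sizes, and bandwidth are hypothetical values invented for illustration.

```python
# Conceptual sketch (hypothetical numbers, not the paper's framework):
# pick the CNN layer at which to hand off from edge to cloud so that
# edge compute + network transfer + cloud compute is minimized.

def cooperative_latency(edge_ms, cloud_ms, out_kb, split, bw_kb_per_ms):
    """Total latency when layers [0, split) run on the edge and
    layers [split, N) run in the cloud; the tensor produced at the
    split point is sent over the network."""
    edge = sum(edge_ms[:split])
    cloud = sum(cloud_ms[split:])
    # No transfer if the whole network runs on the edge.
    transfer = out_kb[split - 1] / bw_kb_per_ms if split < len(edge_ms) else 0.0
    return edge + transfer + cloud

def best_split(edge_ms, cloud_ms, out_kb, bw_kb_per_ms):
    """Exhaustively search all split points (1 = offload almost
    everything, N = run everything on the edge)."""
    return min(range(1, len(edge_ms) + 1),
               key=lambda s: cooperative_latency(edge_ms, cloud_ms,
                                                 out_kb, s, bw_kb_per_ms))

# Hypothetical profile: the edge is slow, early feature maps are large.
edge_ms  = [4.0, 6.0, 9.0, 12.0]    # per-layer latency on the edge (ms)
cloud_ms = [0.5, 0.8, 1.0, 1.5]     # per-layer latency in the cloud (ms)
out_kb   = [800.0, 400.0, 50.0, 1.0]  # output tensor size per layer (KB)
bw       = 10.0                      # network bandwidth (KB/ms)

print(best_split(edge_ms, cloud_ms, out_kb, bw))  # → 3
```

With these numbers the best hand-off is after layer 3, once the intermediate tensor has shrunk enough that the transfer cost stops dominating, which is exactly the kind of latency bottleneck the paper's evaluation targets.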
| Bibliographic record ID | NCID: AN10096105 |
| Bibliographic information | SIG Technical Reports on System Architecture (ARC), Vol. 2022-ARC-250, No. 10, pp. 1-7, issued 2022-10-04 |
| ISSN | 2188-8574 |
| Notice | SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc. |
| Publisher | Information Processing Society of Japan (情報処理学会) |