| Item type | Trans(1) |
| Publication date | 2023-06-29 |
| Title | AI Accelerator Support in Onnx-mlir Deep Learning Compiler |
| Title language | en |
| Keywords | 発表概要 (Unrefereed Presentation Abstract); subject scheme: Other |
| Resource type identifier | http://purl.org/coar/resource_type/c_6501 |
| Resource type | journal article |
| Author affiliations | IBM Research - Tokyo; IBM T.J. Watson Research Center; IBM T.J. Watson Research Center; IBM Research - Tokyo; IBM Research - Tokyo; IBM Research - Tokyo; IBM T.J. Watson Research Center; IBM T.J. Watson Research Center |
| Authors | Tung D. Le; Tong Chen; Alexandre E. Eichenberger; Haruki Imai; Kiyokuni Kawachiya; Yasushi Negishi; Kevin O'Brien; Gong Su |
| Abstract | Onnx-mlir is an open-source compiler that compiles artificial intelligence (AI) models in the Open Neural Network Exchange (ONNX) format into native code for architectures such as x86, Power, and Z processors. It is built on the Multi-Level Intermediate Representation (MLIR) infrastructure in the LLVM project and relies on the MLIR concept of dialects to implement its functionality. In this paper, we present our work extending onnx-mlir to generate and optimize code for the IBM Telum on-chip AI accelerator (zAIU) introduced in the IBM z16 mainframe. Specifically, we propose two dialects: (1) the zhigh dialect, which represents high-level functions on the zAIU, and (2) the zlow dialect, which represents low-level computation on the zAIU. Each dialect facilitates its own characteristic set of graph-level and memory-level optimizations, respectively. We explain our extension of onnx-mlir by following several models through the proposed dialects, and we include some early optimization work and performance results. |
| Bibliographic record ID | NCID AA11464814 |
| Bibliographic information | 情報処理学会論文誌プログラミング (IPSJ Transactions on Programming (PRO)), Vol. 16, No. 2, p. 33, published 2023-06-29 |
| ISSN | 1882-7802 |
| Publisher | 情報処理学会 (Information Processing Society of Japan) |
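
Note: the abstract above describes a two-step lowering pipeline in which ONNX operations are first mapped to the zhigh dialect, then to the zlow dialect, before native code generation. As a minimal sketch of how such a compile might be driven, assuming an onnx-mlir build with NNPA (zAIU) support available on the PATH, the fragment below invokes the compiler driver from Python. The `--EmitLib` and `--maccel=NNPA` options reflect the public onnx-mlir CLI, but the `compile_for_zaiu` helper and the `model.onnx` input are illustrative only and should be checked against your installed version.

```python
# Sketch: drive the onnx-mlir compiler for the Telum zAIU (NNPA) path.
# Not from the paper; flag names should be verified against your build.
import subprocess
import sys

def compile_for_zaiu(model_path: str) -> None:
    """Compile an ONNX model into a native shared library, routing
    supported ops through the NNPA (zAIU) accelerator path, where
    ONNX ops are lowered via the zhigh and zlow dialects."""
    cmd = [
        "onnx-mlir",      # onnx-mlir driver binary (must be on PATH)
        "--EmitLib",      # emit a shared library with an inference entry point
        "--maccel=NNPA",  # enable the IBM Telum zAIU code-generation path
        model_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"onnx-mlir failed:\n{result.stderr}")

if __name__ == "__main__":
    compile_for_zaiu("model.onnx")  # hypothetical input model
```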