Item type: SIG Technical Reports
Publication date: 2023-07-27
Title: Scalable Training of Graph Convolutional Networks on Supercomputers
Language: en
Keyword (subject scheme: Other): Machine learning
Resource type identifier: http://purl.org/coar/resource_type/c_18gh
Resource type: technical report
Author affiliations:
  Tokyo Institute of Technology / RIKEN Center for Computational Science
  National Institute of Advanced Industrial Science and Technology (AIST) / RIKEN Center for Computational Science
  National Institute of Advanced Industrial Science and Technology (AIST)
  RIKEN Center for Computational Science
  Tokyo Institute of Technology
  RIKEN Center for Computational Science
Authors: Chen Zhuang, Peng Chen, Xin Liu, Satoshi Matsuoka, Toshio Endo, Mohamed Wahib
Abstract:
Graph Convolutional Networks (GCNs) are widely used across diverse domains. However, distributed full-batch training of GCNs presents challenges due to inefficient memory access patterns and the high communication overhead caused by the graph's irregular structure. In this paper, we propose efficient aggregation operators designed for irregular memory access patterns. Additionally, we employ a pre- and delayed-aggregation approach and leverage half-precision communication to reduce communication costs. Combining these techniques, we develop an efficient and scalable GCN training framework designed specifically for distributed systems. Experimental results on several graph datasets demonstrate that our method achieves a speedup of up to 4.75x over the state-of-the-art method on the ABCI supercomputer.
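The aggregation at the heart of a GCN layer is a sparse-matrix times dense-matrix product whose memory accesses follow the graph structure. As a point of reference for the irregular access pattern the abstract refers to, here is a minimal NumPy sketch of the textbook CSR-based aggregation; it is illustrative only and does not reproduce the paper's optimized operators.

import numpy as np

def aggregate_csr(indptr, indices, values, h):
    # Baseline neighbor aggregation H_out = A_hat @ H over a CSR adjacency.
    # indptr/indices/values: CSR arrays of the normalized adjacency A_hat.
    # h: dense node-feature matrix of shape (num_nodes, feat_dim).
    out = np.zeros_like(h)
    for v in range(len(indptr) - 1):                 # one destination node per CSR row
        for k in range(indptr[v], indptr[v + 1]):
            out[v] += values[k] * h[indices[k]]      # irregular gather from neighbor rows
    return out

# Example: a 3-node path graph (0-1-2) with unit edge weights.
indptr = np.array([0, 1, 3, 4])
indices = np.array([1, 0, 2, 1])
values = np.ones(4)
h = np.arange(6, dtype=np.float64).reshape(3, 2)
print(aggregate_csr(indptr, indices, values, h))     # [[2. 3.] [4. 6.] [2. 3.]]

The scattered reads of h[indices[k]] are exactly the irregular accesses that the proposed operators target.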
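The half-precision communication mentioned in the abstract can be pictured as casting tensors to float16 on the wire and back to float32 afterwards. The sketch below, using torch.distributed, is an assumption about how such compression is typically wrapped around an allreduce; the helper name allreduce_fp16 is hypothetical and this is not the authors' implementation.

import torch
import torch.distributed as dist

def allreduce_fp16(t: torch.Tensor) -> torch.Tensor:
    # Sum-allreduce a float32 tensor via a float16 wire format,
    # halving the bytes moved at the cost of some precision.
    # Requires an initialized process group (dist.init_process_group).
    buf = t.to(torch.float16)                  # compress before communicating
    dist.all_reduce(buf, op=dist.ReduceOp.SUM)
    return buf.to(torch.float32)               # restore working precision

Whether the precision loss is acceptable depends on what is being reduced, which is why such compression is usually applied selectively.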
Bibliographic record ID (NCID): AN10463942
Bibliographic information: IPSJ SIG Technical Report on High Performance Computing (HPC), Vol. 2023-HPC-190, No. 19, pp. 1-10, issued 2023-07-27
ISSN: 2188-8841
Notice: SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher: Information Processing Society of Japan (情報処理学会)