Scalable Training of Graph Convolutional Networks on Supercomputers

https://ipsj.ixsq.nii.ac.jp/records/227130
Name / File: IPSJ-HPC23190019.pdf (863.5 kB)
Available for download from July 27, 2025.
License: Copyright (c) 2023 by the Information Processing Society of Japan
Price: Non-member: ¥660 / IPSJ member: ¥330 / HPC member: ¥0 / DLIB member: ¥0
Item type: SIG Technical Reports(1)
Release date: 2023-07-27
Title (en): Scalable Training of Graph Convolutional Networks on Supercomputers
Language: eng
Keyword (subject scheme: Other): Machine learning
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliations:
  • Tokyo Institute of Technology / RIKEN Center for Computational Science
  • National Institute of Advanced Industrial Science and Technology (AIST) / RIKEN Center for Computational Science
  • National Institute of Advanced Industrial Science and Technology (AIST)
  • RIKEN Center for Computational Science
  • Tokyo Institute of Technology
  • RIKEN Center for Computational Science
Authors:
  • Chen, Zhuang
  • Peng, Chen
  • Xin, Liu
  • Satoshi, Matsuoka
  • Toshio, Endo
  • Mohamed, Wahib
Abstract:
Graph Convolutional Networks (GCNs) are widely used across diverse domains. However, training distributed full-batch GCNs presents challenges due to inefficient memory access patterns and the high communication overhead caused by the graph's irregular structure. In this paper, we propose efficient aggregation operators designed for irregular memory access patterns. Additionally, we employ a pre- and delayed-aggregation approach and leverage half-precision communication to reduce communication costs. By combining these techniques, we have developed an efficient and scalable GCN training framework designed for distributed systems. Experimental results on several graph datasets demonstrate that our method achieves a speedup of up to 4.75x over the state-of-the-art method on the ABCI supercomputer.
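The half-precision communication technique mentioned in the abstract can be illustrated with a short sketch. The snippet below is only a generic illustration, not the framework described in the report: it assumes PyTorch's torch.distributed (gloo backend) and simply casts fp32 node-feature tensors to fp16 around an all-reduce, trading a little precision for roughly half the bytes on the wire.

# Minimal sketch of half-precision communication for distributed
# aggregation. NOT the authors' implementation; the function name and
# the toy all-reduce are illustrative assumptions.
import os
import torch
import torch.distributed as dist

def allreduce_features_fp16(feats: torch.Tensor) -> torch.Tensor:
    """All-reduce fp32 node features in half precision to cut traffic."""
    buf = feats.to(torch.float16)           # compress before communication
    dist.all_reduce(buf, op=dist.ReduceOp.SUM)
    return buf.to(torch.float32)            # restore fp32 for local compute

if __name__ == "__main__":
    # Single-process demo; a real job would launch via torchrun/mpirun,
    # which set RANK and WORLD_SIZE for each worker.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)
    x = torch.randn(8, 4)                   # 8 nodes, 4-dim features
    y = allreduce_features_fp16(x)
    print(torch.allclose(x, y, atol=1e-2))  # fp16 round-trip is approximate
    dist.destroy_process_group()

In a real multi-node run, the same cast-communicate-cast pattern would wrap the neighbor-feature exchange during aggregation rather than a toy all-reduce.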
Bibliographic record ID (NCID): AN10463942
Bibliographic information: 研究報告ハイパフォーマンスコンピューティング(HPC) (IPSJ SIG Technical Reports: High Performance Computing)
Vol. 2023-HPC-190, No. 19, pp. 1-10, issued 2023-07-27
ISSN: 2188-8841
Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)
Cite as

Chen, Zhuang; Peng, Chen; Xin, Liu; Satoshi, Matsuoka; Toshio, Endo; Mohamed, Wahib. 2023. 情報処理学会, pp. 1–10.

