

High-performance Graph Convolutional Networks Training on Fugaku and ABCI Supercomputers

https://ipsj.ixsq.nii.ac.jp/records/237575
Name / File: IPSJ-HPC24195014.pdf (692.6 kB)
Available for download from August 1, 2026.
License: Copyright (c) 2024 by the Information Processing Society of Japan
Price: Non-member: ¥660, IPSJ member: ¥330, HPC member: ¥0, DLIB member: ¥0
Item type: SIG Technical Reports(1)
Publication date: 2024-08-01
Title (en): High-performance Graph Convolutional Networks Training on Fugaku and ABCI Supercomputers
Language: eng
Keyword (subject scheme: Other): 深層学習 (deep learning)
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliations:
  1. Tokyo Institute of Technology / RIKEN Center for Computational Science
  2. National Institute of Advanced Industrial Science and Technology (AIST) / RIKEN Center for Computational Science
  3. National Institute of Advanced Industrial Science and Technology (AIST)
  4. Tokyo Institute of Technology
  5. Tokyo Institute of Technology
  6. Tokyo Institute of Technology / RIKEN Center for Computational Science
  7. RIKEN Center for Computational Science
Authors:
  1. Chen, Zhuang
  2. Peng, Chen
  3. Xin, Liu
  4. Rio, Yokota
  5. Toshio, Endo
  6. Satoshi, Matsuoka
  7. Mohamed, Wahib
Abstract (description type: Other)
Graph Convolutional Networks (GCNs) are widely used in various domains. However, training distributed full-batch GCNs on large-scale graphs poses challenges due to high communication overhead. This paper presents a hybrid pre-post-aggregation approach to reduce communication volume. Additionally, we employ an integer quantization method to compress the communication data, reducing communication costs further. Combining these techniques, we develop an efficient and scalable distributed GCN training framework, SuperGNN, for the CPU-powered supercomputers Fugaku and ABCI. Experimental results on multiple large graph datasets show that our method achieves a speedup of up to 6× over state-of-the-art implementations and scales to thousands of HPC-grade CPUs without sacrificing model convergence or accuracy. Our framework achieves performance on CPU-powered supercomputers comparable to that of GPU-powered supercomputers, at a fraction of the cost and power budget.
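
The abstract names two communication-reduction ideas. The record carries no source code, so the NumPy sketch below is a hypothetical illustration under assumed names (remote_nbrs, quantize_int8), not SuperGNN's actual implementation: it contrasts post-aggregation (ship raw boundary features, aggregate at the receiver) with pre-aggregation (sum locally, ship partial aggregates), then shows symmetric int8 quantization compressing whichever payload is sent.

    import numpy as np

    # Toy setting: vertices 0..3 live on partition P0; vertices 4..5 on P1.
    # For one GCN layer with sum aggregation, P1 needs, for each of its
    # vertices, the sum of the features of its neighbors residing on P0.
    rng = np.random.default_rng(0)
    feat_p0 = rng.standard_normal((4, 8)).astype(np.float32)  # P0's local features
    remote_nbrs = {4: [0, 1, 2], 5: [2, 3]}                   # P0-side neighbor lists

    # Post-aggregation: P0 ships the raw features of every requested
    # boundary vertex (4 rows here); P1 performs the summation itself.
    boundary = sorted({u for ix in remote_nbrs.values() for u in ix})
    payload_post = feat_p0[boundary]                          # 4 x 8 floats on the wire
    row = {u: i for i, u in enumerate(boundary)}
    agg_post = {v: payload_post[[row[u] for u in ix]].sum(axis=0)
                for v, ix in remote_nbrs.items()}

    # Pre-aggregation: P0 sums locally per destination vertex and ships
    # one partial row per destination (2 rows here).
    agg_pre = {v: feat_p0[ix].sum(axis=0) for v, ix in remote_nbrs.items()}

    # Both orders yield identical aggregates; a hybrid scheme picks, per
    # vertex, whichever direction moves fewer rows.
    assert all(np.allclose(agg_pre[v], agg_post[v]) for v in remote_nbrs)

    # Integer quantization (symmetric int8, illustrative) shrinks whatever
    # is sent by 4x relative to float32, at a bounded rounding error.
    def quantize_int8(x):
        scale = max(float(np.abs(x).max()) / 127.0, 1e-12)
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    q, s = quantize_int8(payload_post)
    assert np.allclose(q.astype(np.float32) * s, payload_post, atol=s)

In this toy exchange pre-aggregation moves 2 rows instead of 4 because P1's vertices share several P0-side neighbors; a hybrid scheme would make that choice per vertex, and quantization cuts the surviving volume a further 4× (float32 → int8).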
Bibliographic record ID (NCID): AN10463942
Bibliographic information: IPSJ SIG Technical Report: High Performance Computing (HPC), Vol. 2024-HPC-195, No. 14, pp. 1-8, issued 2024-08-01
ISSN: 2188-8841
Notice: SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)

Versions: Ver.1 2025-01-19 08:50:21.982612
