| Field | Value |
| --- | --- |
| Item type | SIG Technical Reports(1) |
| Publication date | 2024-05-01 |
| Title | An optimization pass for training speed-up and strategy search in 3D parallelism (Unrefereed) |
| Language | en |
| Keyword | Optimization (subject scheme: Other) |
| Resource type | technical report (identifier: http://purl.org/coar/resource_type/c_18gh) |
| Authors | Ryubu Hosoki (Tokyo Institute of Technology); Kento Sato (RIKEN Center for Computational Science); Toshio Endo (Tokyo Institute of Technology); Julien Bigot (CEA); Edouard Audit (CEA) |
| Abstract | Deep learning has achieved significant progress in recent years by scaling up models. However, training large models requires enormous memory capacity and time, so distributed training is essential. 3D parallelism, which combines data parallelism, pipeline parallelism, and tensor parallelism, has attracted attention as a distributed training method, but determining the degree of each form of parallelism is nontrivial and requires expertise. To achieve more efficient automatic 3D parallelization, we analyzed Alpa, an existing 3D parallelism library. We found that in Alpa, unnecessary communication waits occur, and certain communication costs are not taken into account when the parallel strategy is determined. We implemented an optimization pass that improves the timing of communication calls to reduce unnecessary communication waits. Our optimization pass also yields a more accurate profile, which in turn enables a more optimal parallel strategy to be chosen. In our experiments, training GPT2-XL with our optimization was 11.5% faster than with the original Alpa. |
| Bibliographic record ID | NCID: AN10463942 |
| Bibliographic information | IPSJ SIG Technical Reports: High Performance Computing (HPC), Vol. 2024-HPC-194, No. 7, pp. 1-8, published 2024-05-01 |
| ISSN | 2188-8841 |
| Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
| Publisher | Information Processing Society of Japan (情報処理学会) |
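
The abstract describes two technical ideas: choosing how to split the available GPUs among data, pipeline, and tensor parallelism, and moving communication calls earlier so they overlap with communication-independent computation. The sketch below is a minimal, hypothetical Python illustration of both ideas, not Alpa's actual implementation (Alpa's passes operate on XLA HLO); `parallel_configs`, `step_with_overlap`, and `model_fn` are names invented for this example. The overlap half uses `torch.distributed`, whose `all_reduce(..., async_op=True)` returns a work handle that can be waited on later.

```python
# Hypothetical sketch, not Alpa's implementation (Alpa rewrites XLA HLO).
import torch
import torch.distributed as dist


def parallel_configs(n_gpus: int):
    """Enumerate every (data, pipeline, tensor) degree whose product is n_gpus.

    A 3D-parallel strategy search walks candidates like these and picks the
    one whose predicted cost (compute + communication) is lowest.
    """
    for dp in range(1, n_gpus + 1):
        if n_gpus % dp:
            continue
        rest = n_gpus // dp
        for pp in range(1, rest + 1):
            if rest % pp == 0:
                yield dp, pp, rest // pp


def step_with_overlap(grad: torch.Tensor, batch: torch.Tensor, model_fn):
    """Issue a gradient all-reduce early and overlap it with computation.

    Hoisting the communication call ahead of communication-independent work
    (here, a forward pass on the next micro-batch) hides its latency, which
    is the effect the paper's optimization pass aims for.
    """
    work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)
    activations = model_fn(batch)  # runs while the all-reduce is in flight
    work.wait()                    # block only where the result is needed
    return activations


if __name__ == "__main__":
    # E.g. 8 GPUs admit combinations such as (2, 2, 2), (4, 1, 2), (8, 1, 1).
    print(list(parallel_configs(8)))
```

In a real strategy search, each candidate would additionally be filtered by per-GPU memory and ranked by a cost estimate that includes communication; this is where the inaccurate profiling the authors identify could mislead the strategy choice, and why their pass also improves strategy search.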