
Comparison of Parallel STL with C/C++ GPU Programming Models

https://ipsj.ixsq.nii.ac.jp/records/227113
b2962f32-fd3a-4244-930d-f1525c84adc0
Name / File: IPSJ-HPC23190002.pdf (934.0 kB)
Available for download from July 27, 2025.
License: Copyright (c) 2023 by the Information Processing Society of Japan
Price: Non-member: ¥660, IPSJ member: ¥330, HPC SIG member: ¥0, DLIB member: ¥0
Item type: SIG Technical Reports (1)
Release date: 2023-07-27
Title: Comparison of Parallel STL with C/C++ GPU Programming Models (en)
Language: eng
Keyword (subject scheme: Other): Accelerator (アクセラレータ)
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliations:
  • Department of Mechanical and Aerospace Engineering, School of Engineering, Tohoku University
  • Cyberscience Center, Tohoku University / Graduate School of Information Sciences, Tohoku University
  • Cyberscience Center, Tohoku University
  • Cyberscience Center, Tohoku University / Graduate School of Information Sciences, Tohoku University
Authors:
  • Joanna Imada
  • Keichi Takahashi
  • Yoichi Shimomura
  • Hiroyuki Takizawa
Abstract (description type: Other)
The C++17 standard introduced a set of parallel algorithms, referred to as Parallel STL, that is designed to be programmer-friendly and portable across CPUs and GPUs. Several studies have compared the performance of GPU programming models, including Parallel STL, but the reasons behind the performance differences have not been well discussed yet. This study therefore investigates what causes the performance differences among GPU programming models: CUDA, Kokkos, OpenACC, OpenMP, and Parallel STL. Three benchmarks are selected to compare the models: BabelStream, the Himeno benchmark, and CloverLeaf. In BabelStream, Parallel STL achieves performance similar to that of the other models. In the Himeno benchmark, it achieves 12% higher performance than CUDA for the large problem size; however, for the largest problem size, it performs 23% worse than CUDA. Profiling reveals that Parallel STL has a lower cache hit ratio than the other models at the larger problem sizes.
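To make the comparison concrete, here is a minimal sketch of a BabelStream-style triad kernel (a[i] = b[i] + scalar * c[i]) written with the C++17 parallel algorithms. It is not code from the report; the array length, initial values, and scalar are illustrative. With NVIDIA's nvc++ compiler, such code can be offloaded to a GPU by compiling with -stdpar=gpu.

    // BabelStream-style triad expressed with the C++17 parallel algorithms.
    // The execution policy asks the runtime to parallelize (and possibly
    // vectorize) the loop; with nvc++ -stdpar=gpu the same code runs on the GPU.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <execution>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 20;   // illustrative array length
        const double scalar = 0.4;       // illustrative scaling factor
        std::vector<double> a(n, 0.0), b(n, 1.0), c(n, 2.0);

        // a[i] = b[i] + scalar * c[i], as a single parallel transform.
        std::transform(std::execution::par_unseq,
                       b.begin(), b.end(), c.begin(), a.begin(),
                       [=](double bi, double ci) { return bi + scalar * ci; });

        std::printf("a[0] = %f\n", a[0]);  // expected: 1.8
        return 0;
    }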
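For contrast with one of the other models in the study, the same triad can be written directly in CUDA. This is likewise an illustrative sketch rather than code from the report; the kernel name, launch configuration, and use of managed memory are assumptions made to keep the example short.

    // The same triad written directly in CUDA, using unified (managed) memory.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void triad(double* a, const double* b, const double* c,
                          double scalar, size_t n) {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] = b[i] + scalar * c[i];
    }

    int main() {
        const size_t n = 1 << 20;
        const double scalar = 0.4;
        double *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(double));
        cudaMallocManaged(&b, n * sizeof(double));
        cudaMallocManaged(&c, n * sizeof(double));
        for (size_t i = 0; i < n; ++i) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        const int threads = 256;
        const int blocks = (int)((n + threads - 1) / threads);
        triad<<<blocks, threads>>>(a, b, c, scalar, n);
        cudaDeviceSynchronize();

        std::printf("a[0] = %f\n", a[0]);  // expected: 1.8
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }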
Bibliographic record ID (NCID): AN10463942
Bibliographic information: IPSJ SIG Technical Reports: High Performance Computing (HPC), Vol. 2023-HPC-190, No. 2, pp. 1-7, issued 2023-07-27
ISSN: 2188-8841
Notice
SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc.
Publisher (ja): Information Processing Society of Japan (情報処理学会)
Versions: Ver.1 (2025-01-19 12:16:21)
