<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-13T12:21:54Z</responseDate>
  <request identifier="oai:ipsj.ixsq.nii.ac.jp:00082203" metadataPrefix="jpcoar_1.0" verb="GetRecord">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00082203</identifier>
        <datestamp>2025-01-21T19:07:22Z</datestamp>
        <setSpec>6164:6165:6426:6780</setSpec>
      </header>
      <metadata>
        <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/ jpcoar_scm.xsd">
          <dc:title xml:lang="ja">「京」のためのMPI通信機構の設計</dc:title>
          <dc:title xml:lang="en">The Design of MPI Communication Facility for K computer</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>住元, 真司</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Sumimoto, Shinji</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>川島, 崇裕</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Kawashima, Takahiro</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>志田, 直之</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Shida, Naoyuki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>岡本, 高幸</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Okamoto, Takayuki</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>三浦, 健一</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Miura, Kenichi</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>宇野, 篤也</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Uno, Atsuya</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>黒川, 原佳</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Kurokawa, Motoyoshi</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>庄司, 文由</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Shouji, Fumiyoshi</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>横川, 三津夫</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="en">Yokokawa, Mitsuo</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:subject subjectScheme="Other">HPCシステム</jpcoar:subject>
          <datacite:description descriptionType="Other" xml:lang="ja">本論文では82,944ノードの「京」上で使用メモリ量を極小化しながらMPI通信性能を高める通信機構の設計について述べる。「京」が採用したTofuインタコネクトは数十万ノードクラスのシステムで高い性能と耐故障性を実現するため直接網である6次元トーラス・メッシュ網を採用している。しかし、超大規模の直接網システムでは、通信ホップ数増加とネットワーク網でのメッセージ衝突による通信遅延の増加による通信性能の低下、ならびに、ノード数に比例して必要な使用メモリ量の増加が課題となる。この課題を解決するため、RDMA通信を主体とし、通信バッファが必要な通信は隣接通信などよく利用される通信経路に絞る、遅延が大きな場合は省メモリ性を重視する通信方式やアルゴリズムを採用している。これらの設計により、「京」のMPIライブラリにおいては、使用メモリを抑制しながら、MPI通信遅延1.27us、MPIバンド幅4.7GB/sを達成している。集団通信においても9,216ノードのMPI Bcastで10.6GB/sと高い通信性能を実現している。</datacite:description>
          <datacite:description descriptionType="Other" xml:lang="en">This paper describes the design of an MPI communication facility that achieves high communication performance with minimal memory usage on the 82,944-node K computer. The Tofu interconnect of the K computer uses a six-dimensional torus/mesh direct topology to realize high performance and availability on a system of hundreds of thousands of nodes. However, in such an ultra-scale system, communication performance degradation caused by increased hop counts and network congestion, as well as memory consumption that grows with the number of nodes, remain problems to be solved. To address these problems, the MPI communication facility of the K computer uses RDMA-based communication together with a buffer allocation policy that restricts buffer-based communication to frequently used paths such as neighbor communication, while other paths use memory-saving communication methods. As a result of these designs, the MPI communication facility of the K computer achieves 1.27 us MPI communication latency and 4.7 GB/s MPI bandwidth with reduced memory usage, and the MPI Bcast collective achieves 10.6 GB/s on 9,216 nodes of the K computer.</datacite:description>
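          <!-- A minimal illustrative C/MPI sketch (not from this record or the paper's
               implementation) of the buffer-allocation policy the abstract describes:
               buffer-based (eager-style) sends are reserved for neighbor ranks, while
               other destinations fall back to a memory-saving rendezvous-style path.
               is_neighbor and policy_send are hypothetical names; a 1-D ring stands in
               for the real 6-D Tofu torus/mesh, and MPI_Ssend (which forbids buffered
               completion) merely approximates the paper's RDMA rendezvous protocol.

          #include <mpi.h>
          #include <stdio.h>
          #include <stdlib.h>

          /* Hypothetical neighbor test: adjacency on a 1-D ring of ranks. */
          static int is_neighbor(int self, int peer, int nprocs) {
              int d = abs(self - peer);
              return d == 1 || d == nprocs - 1;
          }

          /* Policy sketch: eager-capable send only to neighbors; synchronous send
             elsewhere, forcing rendezvous semantics and avoiding per-destination
             intermediate buffers on distant, rarely used paths. */
          static void policy_send(const void *buf, int count, MPI_Datatype type,
                                  int dest, int tag, MPI_Comm comm) {
              int self, nprocs;
              MPI_Comm_rank(comm, &self);
              MPI_Comm_size(comm, &nprocs);
              if (is_neighbor(self, dest, nprocs))
                  MPI_Send(buf, count, type, dest, tag, comm);   /* may use eager buffers */
              else
                  MPI_Ssend(buf, count, type, dest, tag, comm);  /* no intermediate buffer */
          }

          int main(int argc, char **argv) {
              MPI_Init(&argc, &argv);
              int rank, nprocs;
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
              if (nprocs >= 4) {
                  int payload = 42, distant = nprocs / 2;
                  if (rank == 0) {
                      policy_send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);       /* neighbor */
                      policy_send(&payload, 1, MPI_INT, distant, 0, MPI_COMM_WORLD); /* distant  */
                  } else if (rank == 1 || rank == distant) {
                      int v;
                      MPI_Recv(&v, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                      printf("rank %d received %d\n", rank, v);
                  }
              }
              MPI_Finalize();
              return 0;
          }
          -->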
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2012-05-09</datacite:date>
          <dc:language>jpn</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_5794">conference paper</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/82203</jpcoar:identifier>
          <jpcoar:sourceTitle>先進的計算基盤システムシンポジウム論文集</jpcoar:sourceTitle>
          <jpcoar:volume>2012</jpcoar:volume>
          <jpcoar:pageStart>237</jpcoar:pageStart>
          <jpcoar:pageEnd>244</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI>https://ipsj.ixsq.nii.ac.jp/record/82203/files/IPSJ-SACSIS2012062.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>1.2 MB</jpcoar:extent>
            <datacite:date dateType="Available">2014-05-09</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
