<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-06T00:07:27Z</responseDate>
  <request metadataPrefix="oai_dc" verb="GetRecord" identifier="oai:ipsj.ixsq.nii.ac.jp:00190711">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00190711</identifier>
        <datestamp>2025-01-20T01:06:07Z</datestamp>
        <setSpec>1164:1579:9341:9527</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>Adaptation of Ray, a distributed framework for machine learning, to MPI-based environment</dc:title>
          <dc:creator>Tianlun, Wang</dc:creator>
          <dc:creator>Yusuke, Tanimura</dc:creator>
          <dc:creator>Hidemoto, Nakada</dc:creator>
          <dc:subject>Machine learning / neural networks</dc:subject>
          <dc:description>Ray is a distributed framework for machine learning that targets reinforcement learning using multiple nodes. While it works well on loosely coupled nodes, it does not take into account the "high-performance computing environment" based on MPI. We modified Ray so that 1) it works well with the MPI launch mechanism, and 2) it uses MPI communication for large data transfers. We evaluated the modified version of Ray on a cluster and confirmed the preliminary performance.</dc:description>
          <dc:description>technical report</dc:description>
          <dc:publisher>情報処理学会</dc:publisher>
          <dc:date>2018-07-23</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>研究報告システム・アーキテクチャ（ARC）</dc:identifier>
          <dc:identifier>29</dc:identifier>
          <dc:identifier>2018-ARC-232</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>2188-8574</dc:identifier>
          <dc:identifier>AN10096105</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/190711/files/IPSJ-ARC18232029.pdf</dc:identifier>
          <dc:language>eng</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
