<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-12T20:02:33Z</responseDate>
  <request identifier="oai:ipsj.ixsq.nii.ac.jp:00209715" metadataPrefix="jpcoar_1.0" verb="GetRecord">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00209715</identifier>
        <datestamp>2025-01-19T18:25:39Z</datestamp>
        <setSpec>1164:2735:10526:10527</setSpec>
      </header>
      <metadata>
        <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/ jpcoar_scm.xsd">
          <dc:title>Generating Intrinsic Rewards by Random Recurrent Network Distillation</dc:title>
          <dc:title xml:lang="en">Generating Intrinsic Rewards by Random Recurrent Network Distillation</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>Zefeng, Xu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Koichi, Moriyama</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Tohgoroh, Matsui</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Atsuko, Mutoh</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Nobuhiro, Inuzuka</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Zefeng, Xu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Koichi, Moriyama</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Tohgoroh, Matsui</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Atsuko, Mutoh</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Nobuhiro, Inuzuka</jpcoar:creatorName>
          </jpcoar:creator>
          <datacite:description descriptionType="Other">Exploration in sparse-reward environments poses significant challenges for many reinforcement learning algorithms. Rather than relying solely on extrinsic rewards provided by the environment, many state-of-the-art methods generate intrinsic rewards to encourage the agent to explore. However, we found that existing models fall short in environments where the agent must visit the same state more than once. Thus, we improve an existing model and propose a novel type of intrinsic exploration bonus that rewards the agent when a new sequence is discovered. The intrinsic reward is the error of a recurrent neural network predicting the features of sequences produced by a fixed, randomly initialized recurrent neural network. Our approach performs well in some Atari games in which conditions must be fulfilled to advance the story.</datacite:description>
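          <!--
            The description above specifies the mechanism concretely: a trainable
            recurrent predictor is distilled against a fixed, randomly initialized
            recurrent target, and the per-step prediction error over observation
            sequences serves as the intrinsic reward. Below is a minimal sketch of
            that idea, assuming PyTorch; every name here (RandomRecurrentDistillation,
            obs_dim, feat_dim, hidden) is illustrative and not taken from the paper.

            import torch
            import torch.nn as nn

            class RandomRecurrentDistillation(nn.Module):
                def __init__(self, obs_dim, feat_dim=128, hidden=256):
                    super().__init__()
                    # Fixed, randomly initialized recurrent target network.
                    self.target = nn.GRU(obs_dim, hidden, batch_first=True)
                    self.target_head = nn.Linear(hidden, feat_dim)
                    for p in list(self.target.parameters()) + list(self.target_head.parameters()):
                        p.requires_grad_(False)
                    # Trainable recurrent predictor distilled toward the target.
                    self.predictor = nn.GRU(obs_dim, hidden, batch_first=True)
                    self.predictor_head = nn.Linear(hidden, feat_dim)

                def forward(self, obs_seq):
                    # obs_seq: (batch, time, obs_dim) observation sequence.
                    with torch.no_grad():
                        tgt, _ = self.target(obs_seq)
                        tgt = self.target_head(tgt)
                    pred, _ = self.predictor(obs_seq)
                    pred = self.predictor_head(pred)
                    # Per-step prediction error is the intrinsic reward: it is
                    # large for sequences the predictor has not yet learned.
                    intrinsic = (pred - tgt).pow(2).mean(dim=-1)  # (batch, time)
                    loss = intrinsic.mean()  # minimized to train the predictor
                    return intrinsic.detach(), loss
          -->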
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2021-02-22</datacite:date>
          <dc:language>eng</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_18gh">technical report</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/209715</jpcoar:identifier>
          <jpcoar:sourceIdentifier identifierType="ISSN">2188-8833</jpcoar:sourceIdentifier>
          <jpcoar:sourceIdentifier identifierType="NCID">AN10505667</jpcoar:sourceIdentifier>
          <jpcoar:sourceTitle>研究報告数理モデル化と問題解決（MPS）</jpcoar:sourceTitle>
          <jpcoar:volume>2021-MPS-132</jpcoar:volume>
          <jpcoar:issue>15</jpcoar:issue>
          <jpcoar:pageStart>1</jpcoar:pageStart>
          <jpcoar:pageEnd>6</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI label="IPSJ-MPS21132015.pdf">https://ipsj.ixsq.nii.ac.jp/record/209715/files/IPSJ-MPS21132015.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>1.6 MB</jpcoar:extent>
            <datacite:date dateType="Available">2023-02-22</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
