<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-20T19:50:18Z</responseDate>
  <request verb="GetRecord" metadataPrefix="jpcoar_1.0" identifier="oai:ipsj.ixsq.nii.ac.jp:00240076">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00240076</identifier>
        <datestamp>2025-01-19T08:05:41Z</datestamp>
        <setSpec>6164:6165:7006:11799</setSpec>
      </header>
      <metadata>
          <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/1.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/1.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/1.0/ jpcoar_scm.xsd">
          <dc:title>A Case-based Reward Function Design for Reinforcement Learning-based Pure Pursuit Hybrid Controller</dc:title>
          <dc:title xml:lang="en">A Case-based Reward Function Design for Reinforcement Learning-based Pure Pursuit Hybrid Controller</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>Pang, Lixin</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Huang, Jianyu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>Arakawa, Yutaka</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Pang, Lixin</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Huang, Jianyu</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="en">Arakawa, Yutaka</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:subject subjectScheme="Other">Reinforcement learning</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Autonomous driving</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Reward function</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Adaptive pure pursuit</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Path tracking</jpcoar:subject>
          <datacite:description descriptionType="Other">This paper presents an innovative approach to enhancing the Pure Pursuit algorithm for path tracking in autonomous vehicles by integrating Reinforcement Learning (RL) and curvature information. Traditional Pure Pursuit algorithms, while effective in low-speed scenarios, often require extensive manual tuning of the look-ahead distance to maintain tracking accuracy at varying speeds and on complex paths. To address these limitations, we design an RL-based Pure Pursuit controller that incorporates future path curvature into the state space and reward function, facilitating the learning of a proper tracking policy at higher speeds. The controller is trained and evaluated in the CARLA simulator, demonstrating improved path-tracking accuracy and stability across different speeds and path complexities. Comparing the curvature-aware controller with the original one, our results show that the improved method achieves lower lateral deviation and lateral acceleration while maintaining almost the same average speed.</datacite:description>
          <dc:publisher xml:lang="ja">情報処理学会</dc:publisher>
          <datacite:date dateType="Issued">2024-10-23</datacite:date>
          <dc:language>eng</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_5794">conference paper</dc:type>
          <jpcoar:identifier identifierType="URI">https://ipsj.ixsq.nii.ac.jp/records/240076</jpcoar:identifier>
          <jpcoar:sourceTitle>第32回マルチメディア通信と分散処理ワークショップ論文集</jpcoar:sourceTitle>
          <jpcoar:pageStart>71</jpcoar:pageStart>
          <jpcoar:pageEnd>77</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI label="IPSJ-DPSWS20240010.pdf">https://ipsj.ixsq.nii.ac.jp/record/240076/files/IPSJ-DPSWS20240010.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>4.2 MB</jpcoar:extent>
            <datacite:date dateType="Available">2026-10-23</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
