<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-14T10:07:18Z</responseDate>
  <request identifier="oai:ipsj.ixsq.nii.ac.jp:00211705" metadataPrefix="oai_dc" verb="GetRecord">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00211705</identifier>
        <datestamp>2025-01-19T17:42:37Z</datestamp>
        <setSpec>1164:5352:10544:10612</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>Training Neural ODE by Symplectic Integrator</dc:title>
          <dc:creator>松原, 崇</dc:creator>
          <dc:creator>宮武, 勇登</dc:creator>
          <dc:creator>谷口, 隆晴</dc:creator>
          <dc:creator>Graduate School of Engineering Science, Osaka University</dc:creator>
          <dc:creator>Cybermedia Center, Osaka University</dc:creator>
          <dc:creator>Graduate School of System Informatics, Kobe University</dc:creator>
          <dc:subject>Deep learning and matrix factorization</dc:subject>
          <dc:description>The neural ODE, which learns differential equations with a neural network, can model continuous-time dynamical systems and probability distributions with high accuracy. However, because it applies the same neural network repeatedly, training it by backpropagation requires a very large amount of memory. The adjoint method, which carries out backpropagation via numerical integration, is therefore used instead, but it suffers from either large numerical errors or a large computational cost. In this study, we propose a method that achieves both memory efficiency and speed by combining an appropriate checkpointing scheme with a symplectic numerical integrator in the adjoint method.</dc:description>
          <dc:description>A differential equation model using neural networks, the neural ODE, enables us to model continuous-time dynamics and probabilistic models with high accuracy. However, because the neural ODE uses the same neural network repeatedly, training it with the backpropagation algorithm consumes a large amount of memory. Instead of the backpropagation algorithm, the adjoint method is commonly used, which obtains the gradient by numerical integration. The adjoint method needs a small step size, and hence a high computational cost, to suppress the numerical errors. In this study, we combine a checkpointing scheme and a symplectic integrator for the adjoint method, which suppresses the memory consumption and runs faster.</dc:description>
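          <!--
            Editor's note: the abstract above names two ingredients, a checkpointing
            scheme and a symplectic integrator used with the adjoint method. The
            following is a minimal, hypothetical Python/NumPy sketch of those two
            ideas, not the authors' implementation; every name in it is an
            illustrative assumption.

              import numpy as np

              def leapfrog(q, p, grad_V, h):
                  # One step of Stoermer-Verlet (leapfrog), a symplectic integrator
                  # for a separable Hamiltonian H(q, p) = |p|^2 / 2 + V(q).
                  p = p - 0.5 * h * grad_V(q)  # half kick
                  q = q + h * p                # full drift
                  p = p - 0.5 * h * grad_V(q)  # half kick
                  return q, p

              def forward_with_checkpoints(q, p, grad_V, h, n_steps, every):
                  # Store the state only once every `every` steps, so memory falls
                  # from O(n_steps) to O(n_steps / every).
                  checkpoints = [(q.copy(), p.copy())]
                  for i in range(n_steps):
                      q, p = leapfrog(q, p, grad_V, h)
                      if (i + 1) % every == 0:
                          checkpoints.append((q.copy(), p.copy()))
                  return (q, p), checkpoints

              # Example (hypothetical): harmonic oscillator with V(q) = q^2 / 2.
              (qT, pT), ckpts = forward_with_checkpoints(
                  np.array([1.0]), np.array([0.0]),
                  grad_V=lambda q: q, h=0.01, n_steps=1000, every=100)

              # In the backward (adjoint) pass, the states between two stored
              # checkpoints are recomputed by re-running the integrator from the
              # nearest checkpoint, trading a bounded amount of recomputation
              # for the saved memory.
          -->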
          <dc:description>technical report</dc:description>
          <dc:publisher>Information Processing Society of Japan</dc:publisher>
          <dc:date>2021-06-21</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>IPSJ SIG Technical Report: Bioinformatics (BIO)</dc:identifier>
          <dc:identifier>2</dc:identifier>
          <dc:identifier>2021-BIO-66</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>2188-8590</dc:identifier>
          <dc:identifier>AA12055912</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/211705/files/IPSJ-BIO21066002.pdf</dc:identifier>
          <dc:language>jpn</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
