<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-17T03:32:06Z</responseDate>
  <request metadataPrefix="oai_dc" verb="GetRecord" identifier="oai:ipsj.ixsq.nii.ac.jp:00096748">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00096748</identifier>
        <datestamp>2025-01-21T13:10:37Z</datestamp>
        <setSpec>1164:5159:7047:7342</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>話者依存型 Conditional Restricted Boltzmann Machine による声質変換</dc:title>
          <dc:title>Speaker-dependent conditional restricted Boltzmann machine for voice conversion</dc:title>
          <dc:creator>中鹿, 亘</dc:creator>
          <dc:creator>滝口, 哲也</dc:creator>
          <dc:creator>有木, 康雄</dc:creator>
          <dc:creator>Toru, Nakashika</dc:creator>
          <dc:creator>Tetsuya, Takiguchi</dc:creator>
          <dc:creator>Yasuo, Ariki</dc:creator>
          <dc:subject>声質変換</dc:subject>
          <dc:description>本研究では，元の音響特徴量空間よりも音韻性や時間変化性を抑え，話者性を強調させることによって，より入力話者音声の声質を出力話者のものへと変換しやすい話者依存空間を形成することを目的として，話者ごとに conditional restricted Boltzmann machine (CRBM) を用いた声質変換法を提案する．提案手法ではまず初めに，話者ごとに用意した学習データ（パラレルデータである必要は無い）を用いて，入力話者，出力話者の CRBM を独立に学習させる．次に，少量のパラレルデータの音響特徴量を，それぞれの CRBM を通して話者依存高次元空間へ写像（CRBM の前方推論）し，その高次特徴量同士を Neural Network (NN) を用いて変換させる．NN の変換で得られた特徴量は，CRBM の後方推論によって元の音響特徴量へ逆変換することが可能である．評価実験では，従来の GMM や NN，DBN を用いた声質変換法に比べて，主観的にも客観的にも良い精度が得られたことを確認した．</dc:description>
          <dc:description>In this paper, we present a voice conversion (VC) method that utilizes conditional restricted Boltzmann machines (CRBMs) for each speaker to obtain time-invariant speaker-dependent spaces where voice features are converted more easily than those in an original acoustic feature space. First, we train two CRBMs for a source and target speaker independently using speaker-dependent training data (without the need to parallelize the training data). Then, a small number of parallel data are fed into each CRBM, and the high-order features produced by the CRBMs are used to train a concatenating neural network (NN) between the two CRBMs. Finally, the entire network (the two CRBMs and the NN) is fine-tuned using the acoustic parallel data. Through voice-conversion experiments, we confirmed the high performance of our method in terms of objective and subjective evaluations, comparing it with conventional GMM, NN, and speaker-dependent DBN approaches.</dc:description>
          <dc:description>technical report</dc:description>
          <dc:publisher>情報処理学会</dc:publisher>
          <dc:date>2013-12-12</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>研究報告音声言語情報処理（SLP）</dc:identifier>
          <dc:identifier>14</dc:identifier>
          <dc:identifier>2013-SLP-99</dc:identifier>
          <dc:identifier>1</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>AN10442647</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/96748/files/IPSJ-SLP13099014.pdf</dc:identifier>
          <dc:language>jpn</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
