<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-08T14:27:50Z</responseDate>
  <request metadataPrefix="oai_dc" verb="GetRecord" identifier="oai:ipsj.ixsq.nii.ac.jp:00211654">https://ipsj.ixsq.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:ipsj.ixsq.nii.ac.jp:00211654</identifier>
        <datestamp>2025-01-19T17:41:01Z</datestamp>
        <setSpec>581:10433:10439</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>咽喉マイクを用いた大語彙音声認識のための特徴マッピングによるデータ拡張と知識蒸留</dc:title>
          <dc:title>Feature Mapping-based Data Augmentation and Knowledge Distillation for Large Vocabulary Speech Recognition Using Throat Microphone</dc:title>
          <dc:creator>鈴木, 貴仁</dc:creator>
          <dc:creator>緒方, 淳</dc:creator>
          <dc:creator>綱川, 隆司</dc:creator>
          <dc:creator>西田, 昌史</dc:creator>
          <dc:creator>西村, 雅史</dc:creator>
          <dc:creator>Takahito, Suzuki</dc:creator>
          <dc:creator>Jun, Ogata</dc:creator>
          <dc:creator>Takashi, Tsunakawa</dc:creator>
          <dc:creator>Masafumi, Nishida</dc:creator>
          <dc:creator>Masafumi, Nishimura</dc:creator>
          <dc:subject>[一般論文] 咽喉マイク，音声認識，データ拡張，知識蒸留</dc:subject>
          <dc:subject>[Regular Paper] Throat Microphone, Speech Recognition, Data Augmentation, Knowledge Distillation</dc:subject>
          <dc:description>咽喉マイクは接話マイクのような一般的なマイクよりも外部雑音に頑健であるが，一般的なマイクとの音響ミスマッチが大きく，通常の音声認識システムでは認識精度が低下する．また，大量の音声データが利用可能という状況にもない．本研究では接話マイクと咽喉マイクで同時収録した小規模パラレルデータを活用した咽喉マイク音声認識のための学習手法を提案する．提案手法では，まず既存の大規模音声データベースから抽出した接話マイク特徴量を咽喉マイクの特徴量空間にマッピングし，咽喉マイク用音響モデル（DNN-HMM）の学習データを拡張する．このとき特徴マッピングはパラレルデータを用いてLSTMによって学習する．続いて，特徴マッピングによって得た特徴量でDNN-HMMを初期学習し，これを生徒モデルとする．そして，大量の接話マイク特徴量で学習したDNN-HMMを教師モデルとし，知識蒸留に基づき生徒モデルの再学習を行う．読み上げ音声を用いた評価の結果，提案法は咽喉マイク音声のみで学習したDNN-HMMと比べて約36.5%の文字誤り率の削減を達成した．</dc:description>
          <dc:description>Throat microphones are more robust against external noise than conventional acoustic microphones such as close-talk microphones. However, automatic speech recognition (ASR) performance degrades when throat microphone speech signals are fed directly into a general (clean) ASR system, owing to the large acoustic mismatch. Moreover, the amount of available throat microphone speech data is not sufficient to train accurate ASR systems. In this study, we propose a training approach for throat microphone ASR that utilizes a small parallel corpus recorded simultaneously with close-talk and throat microphones. As a data augmentation step, features extracted from an existing large close-talk microphone speech database are transformed into the throat microphone feature space by an LSTM-based feature mapping trained on the parallel corpus. A DNN-HMM is then pre-trained on the mapped features and fine-tuned by knowledge distillation from a teacher DNN-HMM trained on a large amount of close-talk microphone speech data. Experimental results on read speech showed that the proposed approach achieved a 36.5% relative reduction in character error rate compared with a DNN-HMM trained only on throat microphone speech data.</dc:description>
          <dc:description>journal article</dc:description>
          <dc:date>2021-06-15</dc:date>
          <dc:format>application/pdf</dc:format>
          <dc:identifier>情報処理学会論文誌</dc:identifier>
          <dc:identifier>6</dc:identifier>
          <dc:identifier>62</dc:identifier>
          <dc:identifier>1373</dc:identifier>
          <dc:identifier>1381</dc:identifier>
          <dc:identifier>1882-7764</dc:identifier>
          <dc:identifier>AN00116647</dc:identifier>
          <dc:identifier>https://ipsj.ixsq.nii.ac.jp/record/211654/files/IPSJ-JNL6206004.pdf</dc:identifier>
          <dc:language>jpn</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
