Collecting Colloquial and Spontaneous-like Sentences from Web Resources for Constructing Chinese Language Models of Speech Recognition

https://ipsj.ixsq.nii.ac.jp/records/90265
File: IPSJ-JNL5402005.pdf (1.2 MB)
License: Copyright (c) 2013 by the Information Processing Society of Japan
Access: Open Access
Item type: Journal(1)
Release date: 2013-02-15
Title (en): Collecting Colloquial and Spontaneous-like Sentences from Web Resources for Constructing Chinese Language Models of Speech Recognition
Language: eng
Keywords (Subject scheme: Other): [Special Issue: Spoken Document Processing] spontaneous text collection, the Web data, Chinese language model, automatic speech recognition
Resource type: journal article (http://purl.org/coar/resource_type/c_6501)
Author affiliation (all authors): National Institute of Information and Communications Technology (NICT)
Authors:
  • Xinhui, Hu
  • Shigeki, Matsuda
  • Chori, Hori
  • Hideki, Kashioka
Abstract (Description type: Other):
In this paper, we present our work on collecting training texts from the Web for constructing language models in colloquial and spontaneous Chinese automatic speech recognition systems. The selection involves two steps: first, web texts are selected using a perplexity-based approach in which the style-related words are strengthened by omitting infrequent topic words. Second, the selected texts are then clustered based on non-noun part-of-speech words, and optimal clusters are chosen by referring to a set of spontaneous seed sentences. With the proposed method, we selected over 3.80 M sentences. Qualitative analysis of the selected results shows that colloquial and spontaneous-speech-like texts are effectively selected. The effectiveness of the selection is also quantitatively verified by speech recognition experiments. Using a language model obtained by interpolating a model trained on the selected sentences with a baseline model, speech recognition evaluations were conducted on an open-domain colloquial and spontaneous test set. We reduced the character error rate by 4.0% over the baseline model, while word coverage was also greatly increased. We also verified that the proposed method is superior to a conventional perplexity-based approach, with a difference of 1.57% in character error rate.

------------------------------
This is a preprint of an article intended for publication in the Journal of
Information Processing (JIP). This preprint should not be cited. This
article should be cited as: Journal of Information Processing Vol.21 (2013)
No.2 (online).
DOI: http://dx.doi.org/10.2197/ipsjjip.21.168
------------------------------
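
The first selection step described in the abstract scores web sentences by perplexity under a model in which style-related words are emphasized by omitting infrequent topic words. The following Python sketch illustrates that idea only; it assumes sentences are already word-segmented into token lists, and the bigram model, the min_count cutoff, the ppl_threshold value, and all function names are illustrative assumptions rather than the authors' implementation.

import math
from collections import Counter

def build_style_vocab(seed_sentences, min_count=5):
    """Keep only frequent words from the spontaneous seed corpus.
    Infrequent words are mostly topic words; omitting them makes the
    model reflect speaking style rather than topic (hypothetical cutoff)."""
    counts = Counter(w for s in seed_sentences for w in s)
    return {w for w, c in counts.items() if c >= min_count}

def filter_tokens(sentence, vocab):
    # Drop out-of-vocabulary (infrequent, topic-like) words entirely.
    return [w for w in sentence if w in vocab]

def train_bigram(seed_sentences, vocab):
    """Add-one-smoothed bigram model over the style vocabulary."""
    unigram, bigram = Counter(), Counter()
    for s in seed_sentences:
        toks = ["<s>"] + filter_tokens(s, vocab) + ["</s>"]
        unigram.update(toks[:-1])            # context counts only
        bigram.update(zip(toks, toks[1:]))
    v = len(vocab) + 2                       # vocabulary plus <s>, </s>
    def logprob(prev, word):
        return math.log((bigram[(prev, word)] + 1) / (unigram[prev] + v))
    return logprob

def perplexity(sentence, vocab, logprob):
    toks = ["<s>"] + filter_tokens(sentence, vocab) + ["</s>"]
    if len(toks) < 3:                        # nothing left after filtering
        return float("inf")
    lp = sum(logprob(p, w) for p, w in zip(toks, toks[1:]))
    return math.exp(-lp / (len(toks) - 1))

def select_web_sentences(web_sentences, seed_sentences,
                         min_count=5, ppl_threshold=300.0):
    """Keep web sentences whose style-word perplexity under the seed
    model falls below a threshold (hypothetical value)."""
    vocab = build_style_vocab(seed_sentences, min_count)
    logprob = train_bigram(seed_sentences, vocab)
    return [s for s in web_sentences
            if perplexity(s, vocab, logprob) <= ppl_threshold]

Because perplexity is computed over the retained style words only, sentences written in a colloquial, spontaneous register can score well even when their topics differ from those of the seed corpus.
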
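
The second step clusters the perplexity-selected texts on non-noun part-of-speech words and keeps the clusters that best match a set of spontaneous seed sentences. A rough sketch follows, assuming each sentence arrives as (token, POS) pairs from some Chinese POS tagger whose noun tags start with "N"; the use of scikit-learn KMeans, cosine distance to the seed centroid, and the n_clusters and n_keep values are assumptions for illustration, not the paper's settings.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_distances

def non_noun_tokens(tagged_sentence):
    """tagged_sentence: list of (token, pos) pairs. Non-noun words carry
    more style than topic information, so only they are used as features."""
    return [w for w, pos in tagged_sentence if not pos.startswith("N")]

def cluster_and_pick(tagged_web, tagged_seed, n_clusters=20, n_keep=5):
    # Bag-of-words vectors over non-noun tokens only.
    web_docs = [" ".join(non_noun_tokens(s)) for s in tagged_web]
    seed_docs = [" ".join(non_noun_tokens(s)) for s in tagged_seed]
    vectorizer = CountVectorizer(analyzer=str.split)
    X = vectorizer.fit_transform(web_docs)
    S = vectorizer.transform(seed_docs)

    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    # Rank clusters by how close each centroid is to the centroid of the
    # spontaneous seed sentences, and keep the n_keep closest clusters.
    seed_centroid = np.asarray(S.mean(axis=0))
    dists = cosine_distances(km.cluster_centers_, seed_centroid).ravel()
    keep = {int(i) for i in np.argsort(dists)[:n_keep]}

    return [sent for sent, label in zip(tagged_web, km.labels_)
            if int(label) in keep]

As the abstract describes, the sentences surviving both steps are used to train a language model that is then interpolated with the baseline model (roughly P(w | h) = lambda * P_selected(w | h) + (1 - lambda) * P_baseline(w | h)) before the recognition experiments that yielded the reported 4.0% reduction in character error rate.
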
Bibliographic record ID (NCID): AN00116647
Bibliographic information: 情報処理学会論文誌, Vol. 54, No. 2, issued 2013-02-15
ISSN: 1882-7764

Powered by WEKO3