Collecting Colloquial and Spontaneous-like Sentences from Web Resources for Constructing Chinese Language Models of Speech Recognition
https://ipsj.ixsq.nii.ac.jp/records/90265
License | Copyright (c) 2013 by the Information Processing Society of Japan
Access | Open Access
Item type | Journal(1)
Release date | 2013-02-15
Title | Collecting Colloquial and Spontaneous-like Sentences from Web Resources for Constructing Chinese Language Models of Speech Recognition
Title language | en
Language | eng
Keyword scheme | Other
Keywords | [Special Issue: Spoken Document Processing] spontaneous text collection, the Web data, Chinese language model, automatic speech recognition
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Author affiliation | National Institute of Information and Communications Technology (NICT) (all four authors)
Authors | Xinhui Hu, Shigeki Matsuda, Chori Hori, Hideki Kashioka
Abstract | In this paper, we present our work on collecting training texts from the Web for constructing language models for colloquial and spontaneous Chinese automatic speech recognition systems. The selection involves two steps: first, Web texts are selected using a perplexity-based approach in which style-related words are strengthened by omitting infrequent topic words. Second, the selected texts are clustered based on non-noun part-of-speech words, and optimal clusters are chosen by referring to a set of spontaneous seed sentences. With the proposed method, we selected over 3.80 M sentences. Qualitative analysis of the selected results shows that colloquial and spontaneous-speech-like texts are effectively selected. The effectiveness of the selection is also quantitatively verified by speech recognition experiments. Using a language model obtained by interpolating the model trained on these selected sentences with a baseline model, speech recognition evaluations were conducted on an open-domain colloquial and spontaneous test set. We reduced the character error rate by 4.0% over the baseline model, while word coverage was also greatly increased. We also verified that the proposed method is superior to a conventional perplexity-based approach, with a difference of 1.57% in character error rate.
Note | This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.21 (2013) No.2 (online), DOI: http://dx.doi.org/10.2197/ipsjjip.21.168
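The two-step selection described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes tokenized sentences (and, for step 2, pre-existing clusters of (word, POS)-tagged sentences), stands in an add-one-smoothed unigram model for the paper's language model, and uses cosine similarity of non-noun word profiles to pick clusters; all function names, tags, and thresholds are assumptions.

```python
# Hypothetical sketch of the abstract's two-step Web-text selection pipeline.
# Not the authors' code: unigram LM, cosine similarity, and all names are assumptions.

import math
from collections import Counter


def train_unigram(sentences):
    """Add-one-smoothed unigram LM from tokenized seed sentences."""
    counts = Counter(w for s in sentences for w in s)
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve probability mass for unseen words
    return {"counts": counts, "total": total, "vocab": vocab}


def perplexity(model, sentence):
    """Per-word perplexity of a tokenized sentence under the unigram LM."""
    counts, total, vocab = model["counts"], model["total"], model["vocab"]
    logp = sum(math.log((counts.get(w, 0) + 1) / (total + vocab)) for w in sentence)
    return math.exp(-logp / max(len(sentence), 1))


def strengthen_style(sentence, infrequent_topic_words):
    """Omit infrequent topic words so that style-related (function) words dominate."""
    return [w for w in sentence if w not in infrequent_topic_words]


def step1_select(web_sentences, seed_model, infrequent_topic_words, max_ppl):
    """Step 1: keep Web sentences whose style-strengthened perplexity against the
    spontaneous seed LM falls below a threshold."""
    kept = []
    for s in web_sentences:
        reduced = strengthen_style(s, infrequent_topic_words)
        if reduced and perplexity(seed_model, reduced) <= max_ppl:
            kept.append(s)
    return kept


def non_noun_profile(tagged_sentences, noun_tags=("NN",)):
    """Normalized bag of non-noun words from (word, POS)-tagged sentences."""
    counts = Counter(w for s in tagged_sentences for w, pos in s if pos not in noun_tags)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}


def cosine(p, q):
    """Cosine similarity between two sparse word-frequency profiles."""
    dot = sum(v * q.get(w, 0.0) for w, v in p.items())
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0


def step2_choose_clusters(clusters, tagged_seed_sentences, min_sim):
    """Step 2: keep clusters whose non-noun POS word profile is close to that of
    the spontaneous seed sentences."""
    seed_profile = non_noun_profile(tagged_seed_sentences)
    return [c for c in clusters if cosine(non_noun_profile(c), seed_profile) >= min_sim]
```

The sketch only mirrors the flow reported in the abstract: drop infrequent topic words so perplexity against a spontaneous seed LM reflects style rather than topic, filter Web sentences by that perplexity, then keep clusters whose non-noun word distribution resembles the seed sentences. The paper itself would rely on stronger n-gram models and a proper clustering procedure.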
Bibliographic record ID | NCID AN00116647
Bibliographic information | 情報処理学会論文誌 (IPSJ Journal), Vol. 54, No. 2, issue date 2013-02-15
ISSN | 1882-7764