Item type | SIG Technical Reports(1)
Publication date | 2015-05-16
Title | Unsupervised pronunciation disambiguation of language model training corpora
Title language | en
Language | eng
Resource type identifier | http://purl.org/coar/resource_type/c_18gh
Resource type | technical report
Author affiliation | IBM Research - Tokyo
Author affiliation | IBM Research - Tokyo
Author affiliation | IBM Research - Tokyo
Author affiliation | IBM Research - Tokyo / Presently with Shizuoka University
Author affiliation | Nuance Communications Inc.
Author affiliation | Nuance Communications Inc.
Author names | Ryuki Tachibana, Nobuyasu Itoh, Gakuto Kurata, Masafumi Nishimura, Nicola Ueffing, Daniel Willett
Abstract (description type: Other) | It is known that pronunciation probability estimation by an LM can improve the recognition accuracy of an ASR system. However, because training such an LM usually requires manual preparation of corpora with pronunciation information, which is very costly, it is still a standard approach in the ASR research field to assume the same probability for all of the possible pronunciations of each word. In this paper, we avoid this cost by training a context-dependent pronunciation model in an unsupervised manner based on the recognition results of a large amount of user speech data. With this model, we can disambiguate the pronunciations of the sentences in the LM corpus. We also combine the model with a TTS front-end module to compensate for its inaccuracies. We present results on a Japanese LVCSR task with a gain of 3.9% CERR.
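The abstract only outlines the approach at a high level. Below is a minimal, illustrative sketch (not the authors' implementation) of how context-dependent pronunciation counts gathered from ASR recognition results could be used to disambiguate pronunciations in an LM training corpus, with a TTS-front-end-style fallback when the unsupervised model has no evidence; all function and variable names here are hypothetical.

```python
from collections import defaultdict

# Counts estimated from ASR hypotheses on untranscribed user speech, where each
# hypothesized token is a (word, pronunciation) pair. Hypothetical data structures.
context_counts = defaultdict(lambda: defaultdict(int))  # (prev_word, word) -> pron -> count
unigram_counts = defaultdict(lambda: defaultdict(int))  # word -> pron -> count

def train(recognized_sentences):
    """recognized_sentences: iterable of sentences, each a list of (word, pron) pairs."""
    for sent in recognized_sentences:
        prev = "<s>"
        for word, pron in sent:
            context_counts[(prev, word)][pron] += 1
            unigram_counts[word][pron] += 1
            prev = word

def disambiguate(sentence, lexicon, tts_frontend_pron=None):
    """Pick one pronunciation per word of an LM-corpus sentence.

    sentence: list of words; lexicon: word -> list of candidate pronunciations;
    tts_frontend_pron: optional callable (word, prev_word) -> pronunciation, used
    as a fallback when the unsupervised model has no evidence for the word.
    """
    result, prev = [], "<s>"
    for word in sentence:
        candidates = lexicon.get(word, [])
        if len(candidates) <= 1:
            # Unambiguous (or out-of-lexicon) word: nothing to decide.
            pron = candidates[0] if candidates else None
        elif context_counts.get((prev, word)):
            # Context-dependent evidence: most frequent pronunciation after prev.
            pron = max(candidates, key=lambda p: context_counts[(prev, word)].get(p, 0))
        elif unigram_counts.get(word):
            # Back off to context-independent counts.
            pron = max(candidates, key=lambda p: unigram_counts[word].get(p, 0))
        elif tts_frontend_pron is not None:
            # Back off to a TTS-front-end prediction.
            pron = tts_frontend_pron(word, prev)
        else:
            pron = candidates[0]
        result.append((word, pron))
        prev = word
    return result
```

In the setting described by the abstract, the disambiguated (word, pronunciation) corpus would then feed LM training; combining the unsupervised counts with the TTS front-end could equally be done by score interpolation rather than the hard back-off shown here.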
Bibliographic record ID | NCID: AN10438388
Bibliographic information | IPSJ SIG Technical Report on Music and Computer (MUS), Vol. 2015-MUS-107, No. 65, pp. 1-4, issued 2015-05-16
ISSN | 2188-8752
Notice | SIG Technical Reports are non-refereed and may therefore later appear in journals, conferences, symposia, etc.
Publisher (language: ja) | Information Processing Society of Japan (情報処理学会)