Item type |
SIG Technical Reports(1) |
Publication date |
2024-06-07 |
Title |
Beyond Word Count: Exploring Approximated Target Lengths for CIF-RNNT |
Language |
en |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Poster Session 1 |
Resource Type |
|
|
Resource Type Identifier |
http://purl.org/coar/resource_type/c_18gh |
|
Resource Type |
technical report |
Author Affiliation |
|
|
|
The University of Electro-Communications |
Author Affiliation |
|
|
|
The University of Electro-Communications |
Author Name |
Wen, Shen Teo
Yasuhiro, Minami
|
Abstract |
|
|
Description Type |
Other |
|
Description |
Our previous work proposed the CIF-RNNT architecture, a combination of Continuous Integrate-and-Fire (CIF) and RNN-Transducers (RNN-T) that compresses speech into units equivalent to linguistic words to achieve efficient decoding. This work builds on that research by investigating the impact of different target length definitions, approximated from self-information and token count. Our results on LibriSpeech and CSJ show that approximated target length types based on self-information outperform simpler approaches, and CIF-RNNT models even surpass topline models on the CSJ dataset at smaller chunk sizes. Furthermore, our comparisons demonstrate an inherent ability of CIF-RNNT to produce output tokens in groups of words, regardless of the target length type. These results showcase the potential of the CIF-RNNT architecture for efficient and accurate speech recognition. |
Bibliographic Record ID |
|
|
Source Identifier Type |
NCID |
|
Source Identifier |
AN10438388 |
Bibliographic Information |
IPSJ SIG Technical Report: Music and Computer (MUS)
Vol. 2024-MUS-140,
No. 37,
p. 1-5,
Issued 2024-06-07
 |
ISSN |
|
|
Source Identifier Type |
ISSN |
|
Source Identifier |
2188-8752 |
Notice |
|
|
|
SIG Technical Reports are non-refereed and may therefore later appear in journals, conferences, symposia, etc. |
Publisher |
|
|
Language |
ja |
|
Publisher |
Information Processing Society of Japan |