| Item type | SIG Technical Reports(1) |
| Release date | 2024-02-22 |
| Title | Low-resource Speech Recognition using Hierarchical CTC and Large Pre-trained Model |
| Language | eng |
| Keyword | Poster Session 2 SP/SLP (subject scheme: Other) |
| Resource type | technical report (http://purl.org/coar/resource_type/c_18gh) |
| Author affiliation | Graduate School of Informatics, Kyoto University |
| Author affiliation | Graduate School of Informatics, Kyoto University |
| Author name | Jaeyoung Lee |
| Author name | Tatsuya Kawahara |
| Abstract | The performance of automatic speech recognition (ASR) for low-resource languages has seen significant improvement, owing to recent advances in large-scale pre-training and fine-tuning paradigms. This study investigates optimizing fine-tuning for low-resource languages using hierarchical intermediate connectionist temporal classification (CTC). This approach employs target units of varying granularity, from subwords to phonemes, across different CTC losses, taking advantage of the hierarchical linguistic structure of natural languages. We apply this technique to the fine-tuning of a large pre-trained model and investigate the conditions under which it is most effective. (An illustrative sketch of the hierarchical CTC objective follows this record.) |
| Bibliographic record ID | NCID AN10442647 |
| Bibliographic information | IPSJ SIG Technical Report: Spoken Language Processing (SLP), Vol. 2024-SLP-151, No. 64, pp. 1-5, issued 2024-02-22 |
| ISSN | 2188-8663 |
| Notice | SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc. |
| Publisher | Information Processing Society of Japan (IPSJ) |
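
The abstract describes a hierarchical intermediate CTC objective in which CTC losses over targets of different granularity (fine-grained phonemes at a lower encoder layer, coarser subwords at the top) are combined while fine-tuning a large pre-trained encoder. Below is a minimal, hypothetical PyTorch sketch of such an objective, not the authors' implementation: the tapped layer index, vocabulary sizes, loss weight, and the encoder's `output_hidden_states` interface are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the report's code) of hierarchical
# intermediate CTC: a phoneme-level CTC loss is attached to an intermediate
# encoder layer and a subword-level CTC loss to the final layer; the two are
# combined into a single fine-tuning objective.
import torch
import torch.nn as nn

class HierarchicalCTC(nn.Module):
    def __init__(self, encoder, d_model, n_phonemes, n_subwords,
                 phoneme_layer=8, inter_weight=0.3):
        super().__init__()
        self.encoder = encoder              # e.g. a large pre-trained speech encoder
        self.phoneme_layer = phoneme_layer  # assumed index of the tapped layer
        self.inter_weight = inter_weight    # assumed weight of the intermediate loss
        self.phoneme_out = nn.Linear(d_model, n_phonemes + 1)  # +1 for the CTC blank
        self.subword_out = nn.Linear(d_model, n_subwords + 1)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, speech, in_lens, phonemes, ph_lens, subwords, sw_lens):
        # Assumes the encoder exposes all hidden states, each of shape
        # (T, B, d_model) as required by nn.CTCLoss.
        hidden = self.encoder(speech, output_hidden_states=True).hidden_states
        mid, top = hidden[self.phoneme_layer], hidden[-1]
        ph_logp = self.phoneme_out(mid).log_softmax(-1)  # fine-grained targets
        sw_logp = self.subword_out(top).log_softmax(-1)  # coarse-grained targets
        loss_ph = self.ctc(ph_logp, phonemes, in_lens, ph_lens)
        loss_sw = self.ctc(sw_logp, subwords, in_lens, sw_lens)
        # Weighted sum of the subword (primary) and phoneme (intermediate) losses.
        return (1 - self.inter_weight) * loss_sw + self.inter_weight * loss_ph
```

The design point this sketch illustrates is that lower encoder layers supervise short, acoustically grounded units (phonemes) while the top layer supervises longer lexical units (subwords), mirroring the hierarchical linguistic structure the abstract refers to.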