| Item type | SIG Technical Reports(1) |
| Date of publication | 2023-02-21 |
| Title (en) | What Do Self-Supervised Speech Representation Models Know? -A Layer-Wise Analysis- |
| Keywords (subject scheme: Other) | Invited Talk 3 |
| Resource type | technical report (identifier: http://purl.org/coar/resource_type/c_18gh) |
| Author affiliation | Toyota Technological Institute at Chicago (all four authors) |
| Author names | Karen Livescu, Ankita Pasad, Ju-Chieh Chou, Bowen Shi |
| Abstract (en) | Self-supervised speech representations have become ubiquitous in speech processing over the past few years. They have both improved the state of the art and made it feasible to learn speech models with very little labeled data. However, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models. We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral, phonetic, and word-level information. The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks. |
| Bibliographic record ID | NCID: AN10442647 |
| Bibliographic information | 研究報告音声言語情報処理(SLP) (IPSJ SIG Technical Report on Spoken Language Processing), Vol. 2023-SLP-146, No. 58, pp. 1-1, issued 2023-02-21 |
| ISSN | 2188-8663 |
| Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
| Publisher | 情報処理学会 (Information Processing Society of Japan) |
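
The abstract above describes analyzing the layer-wise representations of pre-trained self-supervised speech models. The snippet below is a minimal illustrative sketch (not code from the talk) of the first step of such an analysis: extracting per-layer hidden states from one popular pre-trained model via the Hugging Face transformers API. The checkpoint name and the random placeholder waveform are assumptions; a downstream probe (e.g., a linear classifier over phone or word labels, or a correlation analysis against spectral features) would then be applied to each layer's features.

```python
# Minimal sketch: extract layer-wise representations from a pre-trained
# self-supervised speech model (wav2vec 2.0 here) for probing analyses.
# The checkpoint and the synthetic waveform are illustrative assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-base"  # assumed checkpoint; other models could be swapped in
extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)
model.eval()

# Placeholder: 1 second of 16 kHz audio; replace with real speech.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states holds one (batch, frames, dim) tensor per layer:
# the initial feature encoding plus each transformer layer's output.
for layer_idx, hidden in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: {tuple(hidden.shape)}")
```

Comparing how well each layer's features predict spectral, phonetic, and word-level targets, in the spirit of the analyses the abstract describes, is then a matter of fitting a simple probe per layer on these tensors.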