Item type | SIG Technical Reports(1)
Publication date | 2023-11-25
Title | Improvement of Tacotron2 text-to-speech model based on masking operation and positional attention mechanism
Language | eng
Keywords |
Subject scheme | Other
Subject | Cross-disciplinary(1)
Resource type identifier | http://purl.org/coar/resource_type/c_18gh
Resource type | technical report
Author affiliation | Graduate School of Engineering, The University of Tokyo
Author affiliation | Graduate School of Engineering, The University of Tokyo
Author affiliation | Graduate School of Engineering, The University of Tokyo
Author names | Tong, Ma; Daisuke, Saito; Nobuaki, Minematsu
Abstract |
Description type | Other
Description | Inspired by masking operations in Self-supervised Speech Representation (SSL) models, masking operations were introduced into the improvement of text-to-speech synthesis models. In experiments with traditional multi-stage text-to-speech synthesis models, frame-masking operations on the inputs were found to improve model performance. In an end-to-end model such as Tacotron2 [1], however, hiding state-vector information is complex, and accurate masking is difficult to achieve. To realize accurate masking in such an end-to-end model, this paper introduces a position-based attention mechanism that accurately captures the contextual information of each character and performs precise deletions, achieving effective masking. Empirical studies demonstrate that judicious masking improves the performance of the Tacotron2 model, while excessive masking leads to significant degradation of model performance.
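The character-level masking described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-based illustration, not the authors' implementation: it randomly hides a fraction of character-level encoder states in a Tacotron2-style model, using the true input lengths so that padded positions are never selected; the names mask_encoder_states and mask_ratio, and the choice of zero-filling, are hypothetical.

# Minimal sketch (illustrative assumption, not the paper's code): randomly hide a
# fraction of character-level encoder states before the attention/decoder stage.
import torch

def mask_encoder_states(encoder_out: torch.Tensor,
                        input_lengths: torch.Tensor,
                        mask_ratio: float = 0.15) -> torch.Tensor:
    """encoder_out: (batch, max_len, dim); input_lengths: (batch,)."""
    batch, max_len, _ = encoder_out.shape
    # Positions beyond the true input length are padding and must not be masked.
    positions = torch.arange(max_len, device=encoder_out.device)
    valid = positions.unsqueeze(0) < input_lengths.unsqueeze(1)   # (batch, max_len)
    # Randomly select a subset of the valid character positions to hide.
    drop = (torch.rand(batch, max_len, device=encoder_out.device) < mask_ratio) & valid
    # Zero out the encoder states at the selected positions.
    return encoder_out.masked_fill(drop.unsqueeze(-1), 0.0)

In the paper's setting, the position-based attention mechanism is what makes such per-character deletion precise; the mask ratio and zero-filling above are simply assumed defaults for illustration.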
Bibliographic record ID |
Source identifier type | NCID
Source identifier | AN10442647
Bibliographic information | SIG Technical Reports on Spoken Language Processing (SLP), Vol. 2023-SLP-149, No. 11, pp. 1-6, issued 2023-11-25
ISSN |
Source identifier type | ISSN
Source identifier | 2188-8663
Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher (ja) | 情報処理学会 (Information Processing Society of Japan)