Item type |
SIG Technical Reports(1) |
Publication Date |
2023-11-25 |
Title |


Title |
Self-supervised learning model based emotion transfer and intensity control technology for expressive speech synthesis |
Language |
en |
Keyword |


Subject Scheme |
Other |

Subject |
Poster |
Resource Type |


Resource Type Identifier |
http://purl.org/coar/resource_type/c_18gh |

Resource Type |
technical report |
Author Affiliation |



Graduate School of Engineering, The University of Tokyo |
Author Affiliation |



Graduate School of Engineering, The University of Tokyo |
Author Affiliation |



Graduate School of Engineering, The University of Tokyo |
Author Affiliation (English) |



en |


Graduate School of Engineering, The University of Tokyo |
Author Affiliation (English) |



en |


Graduate School of Engineering, The University of Tokyo |
Author Affiliation (English) |



en |


Graduate School of Engineering, The University of Tokyo |
Author Name |
Wei, Li
Nobuaki, Minematsu
Daisuke, Saito
|
Author Name (English) |
Wei, Li
Nobuaki, Minematsu
Daisuke, Saito
|
Abstract |


Description Type |
Other |

Description |
Emotion transfer techniques, which transfer the speaking style of a reference speech to a target speech, are widely used in speech synthesis. However, previous methods that use an emotion classifier to disentangle the emotion components fail to transfer the correct emotion to the target speech in some contexts. To solve this problem, we introduce a self-supervised learning model to improve the capability of emotion feature extraction. In addition, we utilize the relative attributes method to obtain intensity labels for our emotional speech dataset. Experimental results indicate that our method improves the performance of the emotional speech synthesis model. |
Abstract (English) |


Description Type |
Other |

Description |
Emotion transfer techniques, which transfer the speaking style of a reference speech to a target speech, are widely used in speech synthesis. However, previous methods that use an emotion classifier to disentangle the emotion components fail to transfer the correct emotion to the target speech in some contexts. To solve this problem, we introduce a self-supervised learning model to improve the capability of emotion feature extraction. In addition, we utilize the relative attributes method to obtain intensity labels for our emotional speech dataset. Experimental results indicate that our method improves the performance of the emotional speech synthesis model. |
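The "relative attributes" step named in the abstract can be illustrated with a minimal sketch: learn a linear ranking function w so that w·x_i > w·x_j whenever sample i is annotated as carrying stronger emotion than sample j, then use w·x as a continuous intensity label. This is a toy RankSVM-style illustration under assumed inputs; `learn_ranker`, the features, and the training pairs below are hypothetical, not the paper's actual setup.

```python
import numpy as np

def learn_ranker(X, ordered_pairs, lr=0.1, epochs=200, margin=1.0):
    """Pairwise hinge-loss ranker (RankSVM-style, gradient updates).

    ordered_pairs: list of (i, j) meaning sample i has stronger
    emotion intensity than sample j."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    for _ in range(epochs):
        for i, j in ordered_pairs:
            diff = X[i] - X[j]
            # Hinge condition: nudge w whenever the ranking
            # constraint w.(x_i - x_j) >= margin is violated.
            if w @ diff < margin:
                w += lr * diff
    return w

# Toy data: a single hand-made feature that correlates with intensity
# (in the paper this would be a learned speech representation).
X = np.array([[0.1], [0.4], [0.9]])
pairs = [(2, 1), (1, 0), (2, 0)]   # sample 2 strongest, sample 0 weakest
w = learn_ranker(X, pairs)
scores = X @ w                      # continuous relative-intensity labels
```

After training, `scores` orders the samples consistently with the annotated pairs, giving each utterance a graded intensity value rather than a hard class label.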
Bibliographic Record ID |


Source Identifier Type |
NCID |

Source Identifier |
AN10115061 |
Bibliographic Information |
IPSJ SIG Technical Report: Natural Language Processing (NL)
Vol. 2023-NL-258,
No. 16,
p. 1-6,
Issued 2023-11-25
|
ISSN |


Source Identifier Type |
ISSN |

Source Identifier |
2188-8779 |
Notice |
|
|
|
SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
Publisher |


Language |
ja |

Publisher |
情報処理学会 (Information Processing Society of Japan) |