Target Speaker Extraction based on Conditional Variational Autoencoder and Directional Information in Underdetermined Condition

https://ipsj.ixsq.nii.ac.jp/records/216614
Name / File | License
IPSJ-SLP22140013.pdf (1.8 MB)
Copyright (c) 2022 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG.
SLP: members ¥0, DLIB: members ¥0
Item type: SIG Technical Reports(1)
Release date: 2022-02-22
Title: Target Speaker Extraction based on Conditional Variational Autoencoder and Directional Information in Underdetermined Condition
Title (en): Target Speaker Extraction based on Conditional Variational Autoencoder and Directional Information in Underdetermined Condition
Language: eng
Keywords
Subject scheme: Other
Subject: EA
Resource type
Resource type identifier: http://purl.org/coar/resource_type/c_18gh
Resource type: technical report
Author affiliation: Graduate School of Informatics, Nagoya University
Author affiliation: Information Technology Center, Nagoya University
Author affiliation: Information Technology Center, Nagoya University
Author affiliation (en): Graduate School of Informatics, Nagoya University
Author affiliation (en): Information Technology Center, Nagoya University
Author affiliation (en): Information Technology Center, Nagoya University
Author names: Rui, Wang; Li, Li; Tomoki, Toda
Author names (en): Rui, Wang; Li, Li; Tomoki, Toda
Abstract
Description type: Other
Description: This paper deals with a dual-channel target speaker extraction problem in underdetermined conditions. A blind source separation framework based on demixing matrix estimation with deep source models has achieved reasonably high separation performance in determined conditions, but its performance is still limited in underdetermined conditions. For dual-channel target speaker extraction, additional directional information is expected to be a useful cue, and the choice of the source model is crucial to performance. In this report, we propose a target speaker extraction method that combines geometrical constraint-based target selection, more powerful source modeling, and nonlinear postprocessing. In the demixing matrix estimation, the target directional information is used as a soft constraint, and two conditional variational autoencoders are used to model a single speaker's speech and the interference mixture speech, respectively. As postprocessing, a time-frequency mask estimated from the separated interference mixture speech is used to extract the target speaker's speech. Experimental results demonstrate that the proposed method outperforms baseline methods.
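As a rough illustration of the postprocessing step described in the abstract, the Python sketch below applies a Wiener-like time-frequency mask, computed from an estimate of the separated interference mixture, to a reference channel of the observed mixture. The mask formulation, function name, and array shapes are illustrative assumptions, not the exact procedure used in the report.

import numpy as np

def extract_target_with_tf_mask(mixture_stft, interference_stft, eps=1e-8):
    # Illustrative time-frequency masking step (assumed Wiener-like formulation).
    # mixture_stft:      complex STFT of a reference channel of the observed
    #                    mixture, shape (freq, time).
    # interference_stft: complex STFT of the separated interference mixture
    #                    (output of the demixing stage), shape (freq, time).
    # Returns the masked STFT of the estimated target speaker.
    mixture_power = np.abs(mixture_stft) ** 2
    interference_power = np.abs(interference_stft) ** 2

    # Attribute to the target whatever mixture power the interference
    # estimate does not explain, then form a soft mask in [0, 1].
    target_power = np.maximum(mixture_power - interference_power, 0.0)
    mask = target_power / (target_power + interference_power + eps)

    return mask * mixture_stft

# Toy usage with random complex spectrograms standing in for real STFTs.
rng = np.random.default_rng(0)
mix = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
interf = 0.5 * (rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100)))
target_est = extract_target_with_tf_mask(mix, interf)
print(target_est.shape)  # (257, 100)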
Bibliographic record ID
Journal identifier type: NCID
Journal identifier: AN10442647
Bibliographic information: SIG Technical Reports: Spoken Language Processing (SLP) (研究報告音声言語情報処理(SLP))

Vol. 2022-SLP-140, No. 13, pp. 1-6, issued 2022-02-22
ISSN
Journal identifier type: ISSN
Journal identifier: 2188-8663
Notice
SIG Technical Reports are non-refereed and hence may later appear in journals, conference proceedings, symposia, etc.
Publisher
Language: ja
Publisher: Information Processing Society of Japan (情報処理学会)
