
Effective Integration of Transformer for Network-based Speech Emotion Recognition

https://ipsj.ixsq.nii.ac.jp/records/214101
95b02348-85a2-444f-8515-8472919dd1fa
Name / File: IPSJ-SLP21139007.pdf (1.2 MB)
License: Copyright (c) 2021 by the Information Processing Society of Japan
Access: Open access
Item type: SIG Technical Reports (1)
Publication date: 2021-11-24
Title: Effective Integration of Transformer for Network-based Speech Emotion Recognition
Language: eng
Keywords (subject scheme: Other): speaker and emotion recognition (話者・感情認識)
Resource type: technical report (http://purl.org/coar/resource_type/c_18gh)
Authors: Yurun He, Nobuaki Minematsu, Daisuke Saito
Author affiliation: The University of Tokyo (all three authors)
Abstract (description type: Other)
The performance of a speech emotion recognition (SER) system heavily relies on deep representations learned from training samples. Recently, the transformer has exhibited outstanding properties in learning relevant representations for this task. However, experimental investigation is still needed to better fuse it with conventional models. In this paper, we attempt to take advantage of several integrations of the transformer with the two most widely used deep learning models, CNN and BLSTM. Experiments on the IEMOCAP benchmark dataset demonstrate that the proposed approaches make a promising improvement.
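
The abstract describes fusing a transformer with CNN and BLSTM models but gives no implementation details. Purely as an illustration, and not the authors' architecture, the following PyTorch sketch shows one common way to feed CNN-extracted spectrogram features into a Transformer encoder for utterance-level emotion classification; the input shape, layer sizes, and the four-class output (a typical IEMOCAP setup) are all assumptions.

# Illustrative only: a CNN front-end followed by a Transformer encoder for SER.
# All hyperparameters below are assumptions, not values from the paper.
import torch
import torch.nn as nn

class CNNTransformerSER(nn.Module):
    def __init__(self, n_mels=40, d_model=128, n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        # CNN front-end: local time-frequency patterns from a log-Mel spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.proj = nn.Linear(64 * (n_mels // 4), d_model)
        # Transformer encoder: global temporal context over the CNN frame sequence.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mels, frames)
        h = self.cnn(x)                        # (batch, 64, n_mels//4, frames//4)
        h = h.permute(0, 3, 1, 2).flatten(2)   # (batch, frames//4, 64 * n_mels//4)
        h = self.encoder(self.proj(h))         # (batch, frames//4, d_model)
        return self.classifier(h.mean(dim=1))  # mean-pool over time -> class logits

model = CNNTransformerSER()
print(model(torch.randn(2, 1, 40, 300)).shape)  # torch.Size([2, 4])

An analogous BLSTM-based variant of the kind the abstract mentions could replace or augment the encoder with nn.LSTM(d_model, d_model, bidirectional=True, batch_first=True); which fusion works best is exactly what the paper investigates experimentally.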
Bibliographic record ID (NCID): AN10442647
Bibliographic information: IPSJ SIG Technical Report on Spoken Language Processing (SLP), Vol. 2021-SLP-139, No. 7, pp. 1-6, issued 2021-11-24
ISSN: 2188-8663
Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)

Versions

Ver.1 2025-01-19 16:54:03.783003



Export (see the hypothetical OAI-PMH harvesting sketch after the format lists)

OAI-PMH
  • OAI-PMH JPCOAR
  • OAI-PMH DublinCore
  • OAI-PMH DDI
Other Formats
  • JSON
  • BIBTEX
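
The OAI-PMH links above can also be used programmatically. In the sketch below, the endpoint path, record-identifier format, and metadata prefix are assumptions based on typical WEKO3 deployments, not values confirmed on this page; only the record number 214101 comes from the URL above.

# Hypothetical OAI-PMH harvest of this record. The endpoint URL, identifier format,
# and metadataPrefix are assumptions (typical of WEKO3 sites), not confirmed values.
import urllib.parse
import urllib.request

base = "https://ipsj.ixsq.nii.ac.jp/oai"  # assumed OAI-PMH endpoint
query = urllib.parse.urlencode({
    "verb": "GetRecord",
    "identifier": "oai:ipsj.ixsq.nii.ac.jp:00214101",  # assumed identifier form
    "metadataPrefix": "jpcoar_1.0",                    # JPCOAR, per the export list
})
with urllib.request.urlopen(f"{base}?{query}") as resp:
    print(resp.read(800).decode("utf-8"))              # first bytes of the XML payload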

Powered by WEKO3