A prototype of multi-modal interaction robot based on emotion estimation method using physiological signals

https://ipsj.ixsq.nii.ac.jp/records/222944
File: IPSJ-APRIS2022002.pdf (1.0 MB)
License: Copyright (c) 2022 by the Information Processing Society of Japan
Access: Open Access
Item type: Symposium(1)
Publication date: 2022-12-20
Title: A prototype of multi-modal interaction robot based on emotion estimation method using physiological signals (en)
Language: eng
Resource type: conference paper (http://purl.org/coar/resource_type/c_5794)
Author affiliation: Shibaura Institute of Technology (all four authors)
Authors: Kaoru Suzuki, Takumi Iguchi, Yuri Nakagawa, Midori Sugaya
Abstract
Description type: Other
Description: In recent years, robots that estimate emotions in real time have been proposed and are expected to be introduced into nursing care and home use. Among emotion estimation technologies, methods using physiological signals such as EEG and HRV from wearable sensors have satisfied the requirement of obtaining a human's emotional response in real time. However, current real-time emotion-responsive robots have only a speech function or a facial expression function that reacts to the emotional state of the person in front of the robot, and do not use multiple modalities. With such limited output modalities, we assume it would be difficult to improve the emotional state of the person using the robot. We would also like to know which modalities are effective and what happens when modalities are combined. Therefore, in this research, we propose a multi-modal robot that combines not only speech but also facial expressions and body movements, aiming at the practical application of a robot that responds to a person's emotion in real time. This robot outputs facial expressions, speech, and body movements to improve or maintain the user's emotional state. The experiment confirmed that the robot has a relaxing effect on the user when the output includes speech.
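This record does not include the paper's implementation, but the abstract outlines the pipeline's shape: physiological features (EEG, HRV) from wearable sensors are mapped to an emotional state in real time, and the robot selects a combination of speech, facial expression, and body movement intended to improve or maintain that state. The sketch below is purely illustrative, assuming a simple valence-arousal mapping; the feature names, thresholds, and response labels are all hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class PhysioSample:
    """One window of wearable-sensor features (hypothetical inputs).

    eeg_relax: normalized EEG relaxation index in [0, 1]
    hrv_lf_hf: HRV LF/HF ratio; higher suggests more sympathetic arousal
    """
    eeg_relax: float
    hrv_lf_hf: float


def estimate_emotion(s: PhysioSample) -> tuple[float, float]:
    """Map features to a (valence, arousal) point in [-1, 1]^2.

    Illustrative linear mapping only; the paper's actual estimation
    method is not described in this record.
    """
    valence = 2.0 * s.eeg_relax - 1.0                          # relaxed -> positive valence
    arousal = max(-1.0, min(1.0, (s.hrv_lf_hf - 1.0) / 2.0))   # clip to [-1, 1]
    return valence, arousal


def select_outputs(valence: float, arousal: float) -> dict[str, str]:
    """Choose a multi-modal response (speech + face + body) per quadrant.

    Aim: improve a negative state, maintain a positive one, as the
    abstract describes. Response labels are placeholders.
    """
    if valence >= 0:
        face, speech, body = "smile", "Glad you're feeling good!", "nod"
    elif arousal >= 0:   # negative valence, high arousal: try to calm
        face, speech, body = "calm", "Take a slow, deep breath.", "slow_sway"
    else:                # negative valence, low arousal: try to cheer up
        face, speech, body = "smile", "Shall we chat for a bit?", "wave"
    return {"face": face, "speech": speech, "body": body}


if __name__ == "__main__":
    sample = PhysioSample(eeg_relax=0.2, hrv_lf_hf=2.4)  # a tense user
    v, a = estimate_emotion(sample)
    print(select_outputs(v, a))  # -> calming speech, face, and movement
```

Note that the experiment's key finding (a relaxing effect when output includes speech) is about which of these output channels matter, not about the mapping itself, so any real implementation would need to evaluate modality combinations as the authors did.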
Bibliographic information: Proceedings of Asia Pacific Conference on Robot IoT System Development and Platform, Vol. 2022, pp. 7-12 (issued 2022-12-20)
Publisher: Information Processing Society of Japan (情報処理学会)