Comparing Likert Scale and Pairwise Comparison for Human Evaluation in Rapport-Building Dialogue Systems

https://ipsj.ixsq.nii.ac.jp/records/241663
Record UUID: 0b43b4e7-dd3e-4db1-936e-eea6e47123ea
File: IPSJ-SLP24154043.pdf (2.2 MB)
Availability: downloadable from December 5, 2026
License: Copyright (c) 2024 by the Information Processing Society of Japan
Price: non-member ¥660; IPSJ member ¥330; SLP member ¥0; DLIB member ¥0
Item type: SIG Technical Reports (1)
Release date: 2024-12-05
Title (en): Comparing Likert Scale and Pairwise Comparison for Human Evaluation in Rapport-Building Dialogue Systems
Language: eng
Keywords (scheme: Other): speaker recognition / speech analysis (話者認識・音声分析)
Resource type: technical report (http://purl.org/coar/resource_type/c_18gh)
Author affiliations (en):
  1. Nara Institute of Science and Technology / Guardian Robot Project RIKEN
  2. Guardian Robot Project RIKEN
  3. Guardian Robot Project RIKEN / Nara Institute of Science and Technology
  4. Institute of Science Tokyo / Guardian Robot Project RIKEN / Nara Institute of Science and Technology
Authors (en):
  1. Muhammad, Yeza Baihaqi
  2. Angel, García Contreras
  3. Seiya, Kawano
  4. Koichiro, Yoshino
Abstract (en):
Human evaluation plays a critical role in dialogue systems research, especially in non-task-oriented systems such as rapport-building dialogue systems. Current evaluations often rely on Likert scales to assess user experience, but this method introduces challenges such as inconsistent scale perception, inefficiency, and central tendency bias. Moreover, it is difficult to compare an agent's performance across multiple criteria because participants interpret Likert scale scores unevenly. Pairwise comparison, by contrast, emphasizes direct item-to-item evaluation against defined criteria, producing scores that align more closely with participants' preferences and minimizing these biases. This paper compares an evaluation framework for rapport-building dialogue systems based on pairwise comparison with a conventional Likert scale setup. Both approaches are tested through dialogue experiments involving six participants and four dialogue systems embedded in a conversational robot (CommA, CommI, CommO, and CommE) to measure human-agent rapport. Our experimental results indicate that the pairwise comparison method represented the systems' overall performance better than the Likert scale, and that it showed lower variability, higher reliability, and shorter completion time.
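The abstract's central methodological point is that pairwise judgments can be aggregated into per-system scores that rank the four systems. As an illustration, here is a minimal Python sketch of one standard aggregation, a Bradley-Terry model fitted with the MM updates of Hunter (2004); the paper does not state which aggregation it uses, and the vote counts below are invented for the example, not the paper's data.

    # Aggregate pairwise preferences into system strengths (Bradley-Terry).
    # The win counts are illustrative only, NOT the paper's experimental data.

    SYSTEMS = ["CommA", "CommI", "CommO", "CommE"]

    # wins[(a, b)] = number of times a judge preferred system a over system b.
    wins = {
        ("CommA", "CommI"): 4, ("CommI", "CommA"): 2,
        ("CommA", "CommO"): 3, ("CommO", "CommA"): 3,
        ("CommA", "CommE"): 5, ("CommE", "CommA"): 1,
        ("CommI", "CommO"): 2, ("CommO", "CommI"): 4,
        ("CommI", "CommE"): 3, ("CommE", "CommI"): 3,
        ("CommO", "CommE"): 4, ("CommE", "CommO"): 2,
    }

    def bradley_terry(systems, wins, iters=200):
        """Fit Bradley-Terry strengths via the MM update of Hunter (2004)."""
        p = {s: 1.0 for s in systems}
        for _ in range(iters):
            new_p = {}
            for i in systems:
                total_wins = sum(wins.get((i, j), 0) for j in systems if j != i)
                # Denominator: comparisons involving i, weighted by current strengths.
                denom = sum(
                    (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                    for j in systems if j != i
                )
                new_p[i] = total_wins / denom if denom > 0 else p[i]
            norm = sum(new_p.values())            # normalize so strengths sum to 1
            p = {s: v / norm for s, v in new_p.items()}
        return p

    strengths = bradley_terry(SYSTEMS, wins)
    for s in sorted(SYSTEMS, key=strengths.get, reverse=True):
        print(f"{s}: {strengths[s]:.3f}")

Sorting by fitted strength yields the overall ranking. A plain win-rate average would also work; Bradley-Terry additionally accounts for the strength of the opposition each system happened to face.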
Bibliographic record ID (NCID): AN10442647
Bibliographic information: 研究報告音声言語情報処理(SLP) (IPSJ SIG Technical Reports: Spoken Language Processing (SLP))
Vol. 2024-SLP-154, No. 43, pp. 1-5, issued 2024-12-05
ISSN: 2188-8663

Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conferences, symposia, etc.

Publisher: 情報処理学会 (Information Processing Society of Japan)
Versions: Ver.1 (2025-01-19 07:35:14)

Export

OAI-PMH
  • OAI-PMH JPCOAR
  • OAI-PMH DublinCore
  • OAI-PMH DDI
Other Formats
  • JSON
  • BIBTEX
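
Because the record advertises OAI-PMH access, the metadata above can also be fetched programmatically. Below is a minimal sketch using only the Python standard library; it assumes the usual WEKO3 endpoint path (/oai) and the conventional oai:<host>:<record-id> identifier pattern, neither of which is confirmed by this page.

    # Fetch this record's Dublin Core metadata over OAI-PMH (stdlib only).
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ENDPOINT = "https://ipsj.ixsq.nii.ac.jp/oai"      # assumed endpoint path
    IDENTIFIER = "oai:ipsj.ixsq.nii.ac.jp:00241663"   # assumed identifier form

    params = {
        "verb": "GetRecord",
        "metadataPrefix": "oai_dc",   # Dublin Core, one of the formats listed above
        "identifier": IDENTIFIER,
    }
    url = ENDPOINT + "?" + urllib.parse.urlencode(params)

    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    # Print every Dublin Core element (title, creator, date, ...) in the record.
    DC = "{http://purl.org/dc/elements/1.1/}"
    for elem in tree.iter():
        if elem.tag.startswith(DC):
            print(elem.tag[len(DC):], ":", (elem.text or "").strip())

Swapping the metadataPrefix parameter selects one of the other formats listed above (JPCOAR, DDI); the exact prefix strings a repository accepts are reported by the standard ListMetadataFormats verb.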
