
Automatic Short Answer Grading with Rubric-based Semantic Embedding Optimization

https://ipsj.ixsq.nii.ac.jp/records/216464
File: IPSJ-ICS22205011.pdf (2.1 MB)
License: Copyright (c) 2022 by the Information Processing Society of Japan
Access: Open Access
Item type: SIG Technical Reports
Publication date: 2022-02-14
Title: Automatic Short Answer Grading with Rubric-based Semantic Embedding Optimization
Language: eng
Resource type: technical report (http://purl.org/coar/resource_type/c_18gh)
Author affiliations: Kyushu University; The National Center for University Entrance Examinations; Kyushu University
Authors: Bo Wang, Tsunenori Ishioka, Tsunenori Mine
Abstract: Large-scale encoders such as BERT have been actively used for sentence embedding in automatic scoring. However, the embeddings may not be optimal because of their non-uniform vector distribution. Through fast contrastive learning, methods such as SBERT obtain better semantic embeddings and have been actively applied to textual similarity datasets. However, the cost of obtaining similarity annotations limits their application to automatic grading. In this paper, we propose a method that calculates similarity from the rubric to perform contrastive learning for a better semantic embedding. We conducted extensive experiments on 60,000 answer/question data points for three independent questions. The experimental results show that the proposed method outperforms all baselines in both accuracy and computation time.
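The abstract's core idea, deriving pair similarity labels from the rubric instead of from costly human annotation, can be illustrated with a short SBERT-style fine-tuning sketch using the sentence-transformers library. This is a minimal sketch under assumptions: the binary rubric-item scores, the agreement-ratio similarity in rubric_similarity, and the all-MiniLM-L6-v2 encoder are illustrative stand-ins, not details taken from the paper.

```python
# Minimal sketch of rubric-driven contrastive fine-tuning (SBERT style).
# The rubric-to-similarity formula (agreement ratio over rubric items)
# is an illustrative assumption, not necessarily the paper's definition.
import itertools
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical data: each short answer carries binary rubric-item scores.
answers = [
    ("Photosynthesis converts light into chemical energy.", [1, 1, 0]),
    ("Plants use sunlight to make glucose.",                [1, 1, 1]),
    ("Plants absorb water through their roots.",            [0, 0, 1]),
]

def rubric_similarity(r1, r2):
    """Similarity in [0, 1]: fraction of rubric items two answers agree on."""
    agree = sum(int(a == b) for a, b in zip(r1, r2))
    return agree / len(r1)

# Build training pairs labeled with rubric-derived similarity, replacing
# the human similarity annotation that SBERT-style training usually needs.
examples = [
    InputExample(texts=[a1, a2], label=rubric_similarity(r1, r2))
    for (a1, r1), (a2, r2) in itertools.combinations(answers, 2)
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-style encoder
loader = DataLoader(examples, shuffle=True, batch_size=2)
loss = losses.CosineSimilarityLoss(model)  # pull embedding cosine toward label
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)

# After fine-tuning, embeddings can feed a downstream grader,
# e.g. similarity scoring against reference answers.
emb = model.encode([a for a, _ in answers], convert_to_tensor=True)
print(torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0))
```

The design point is that the label for each pair comes for free from grading data the rubric already produces, which sidesteps the similarity-annotation cost the abstract identifies as the obstacle to applying SBERT-style training to automatic grading.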
Bibliographic record ID (NCID): AA11135936
Bibliographic information: IPSJ SIG Technical Reports, Intelligent Systems (ICS)
Vol. 2022-ICS-205, No. 11, pp. 1-7, issued 2022-02-14
ISSN: 2188-885X
Notice
SIG Technical Reports are non-refereed and hence may later appear in journals, conference proceedings, symposia, etc.
Publisher: Information Processing Society of Japan (情報処理学会)