WEKO3



Application and Evaluation of Language Model Based Methods for Test Item Similarity Calculation

https://ipsj.ixsq.nii.ac.jp/records/222039
6dc721e7-46ad-4be0-8ecf-2a2cc6e679f8
Name / File: IPSJ-CLE22038009.pdf (856.2 kB)
License: Copyright (c) 2022 by the Information Processing Society of Japan
Access: Open Access
Item type: SIG Technical Reports(1)
Release date: 2022-10-28
Title: Application and Evaluation of Language Model Based Methods for Test Item Similarity Calculation
Language: eng
Resource type identifier: http://purl.org/coar/resource_type/c_18gh
Resource type: technical report
Author affiliations:
  1. Classi Corp.
  2. Classi Corp.
  3. Classi Corp.
  4. The University of Electro-Communications
Authors (en): Tianqi, Wang; Teruhiko, Takagi; Tetsuro, Ito; Masanori, Takagi
Abstract
Description type: Other
Description: In education, it is crucial to characterize student learning processes to determine learners' efficiency and success in acquiring new knowledge. Creating a test consisting of different items for a specific unit of target knowledge is necessary to assess and quantify knowledge gain, but doing so can burden educators. In previous research, we proposed a method to calculate similarity from extracted target content to retrieve similar test items from a dataset, thereby helping educators create tests for target knowledge. However, that method ignores semantic features that may be an important clue for similarity. Meanwhile, large-scale language models are now well developed and have recently become proficient in many natural language processing tasks. The performance of large-scale language models in test item similarity tasks remains unexplored. In this paper, we build on previous research to explore the performance of methods based on pretrained language models to calculate the similarity between test items. We apply the methods to a Japanese history question dataset. Experimental results show that pretrained language models help capture semantic similarity between words but do not improve the overall performance as expected.
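The core operation the abstract describes is scoring the similarity between two test items. As a minimal illustrative sketch (not the authors' implementation), the following uses plain term-count vectors and cosine similarity as a stand-in for the pretrained-language-model embeddings the paper evaluates; the tokenized questions are invented toy data.

```python
import math
from collections import Counter

def cosine_similarity(vec_a, vec_b):
    # Dot product over shared terms, divided by the two vector norms.
    dot = sum(vec_a[k] * vec_b.get(k, 0) for k in vec_a)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def embed(tokens):
    # Stand-in for a language-model embedding: a simple term-count vector.
    # A model-based method would instead map the item text to a dense vector.
    return Counter(tokens)

item_a = embed(["meiji", "restoration", "year", "1868"])
item_b = embed(["meiji", "restoration", "leaders"])
print(round(cosine_similarity(item_a, item_b), 3))  # → 0.577
```

In a retrieval setting, each candidate item in the dataset would be scored against the target item this way and the top-ranked items returned; swapping `embed` for a dense sentence encoder is what distinguishes the model-based methods from a purely lexical baseline.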
Bibliographic record ID
Source identifier type: NCID
Source identifier: AA12496725
Bibliographic information: 研究報告教育学習支援情報システム(CLE)

Vol. 2022-CLE-38, No. 9, pp. 1-7, issued 2022-10-28
ISSN
Source identifier type: ISSN
Source identifier: 2188-8620
Notice
SIG Technical Reports are non-refereed and may therefore later appear in journals, conferences, symposia, etc.
Publisher
Language: ja
Publisher: 情報処理学会 (Information Processing Society of Japan)
Versions

Ver.1 2025-01-19 13:54:16.547784
Powered by WEKO3

