Item type | SIG Technical Reports(1)
Publication date | 2022-10-28
Title | Application and Evaluation of Language Model Based Methods for Test Item Similarity Calculation
Language | en
Resource type | technical report
Resource type identifier | http://purl.org/coar/resource_type/c_18gh
Affiliation | Classi Corp.
Affiliation | Classi Corp.
Affiliation | Classi Corp.
Affiliation | The University of Electro-Communications
Author | Tianqi Wang
Author | Teruhiko Takagi
Author | Tetsuro Ito
Author | Masanori Takagi
Abstract |
Description type | Other
Description | In education, it is crucial to characterize student learning processes to determine learners' efficiency and success in acquiring new knowledge. Creating a test consisting of different items for specific target knowledge is necessary to assess and quantify knowledge gain, but doing so can burden educators. In previous research, we proposed a method that calculates similarity from extracted target content to retrieve similar test items from a dataset, thereby helping educators create tests for target knowledge. However, that method ignores semantic features that may be an important clue for similarity. Meanwhile, large-scale language models are now well developed and have recently become proficient in many natural language processing tasks, yet their performance on test item similarity tasks remains unexplored. In this paper, we build on previous research to explore the performance of methods based on pretrained language models for calculating the similarity between test items. We apply the methods to the Japanese history question dataset. Experimental results show that pretrained language models help capture semantic similarity between words but, contrary to expectations, do not improve overall performance.
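The core technique named in the abstract, scoring the similarity of two test items with a pretrained language model, can be illustrated with a minimal Python sketch. This is not the authors' implementation: it assumes the sentence-transformers library and the publicly available paraphrase-multilingual-MiniLM-L12-v2 model, and the two test items are invented examples.

    # Minimal illustrative sketch: embed two test items with a pretrained
    # language model and score their semantic similarity by cosine similarity.
    # NOTE: model choice and the items below are assumptions for illustration,
    # not the method or data evaluated in this report.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    item_a = "Who established the Kamakura shogunate?"   # hypothetical test item
    item_b = "Name the founder of the Kamakura bakufu."  # hypothetical test item

    # encode() returns one embedding per input; cos_sim yields a 1x1 matrix.
    emb = model.encode([item_a, item_b], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()  # value in [-1, 1]
    print(f"cosine similarity = {similarity:.3f}")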
Bibliographic record ID (NCID) | AA12496725
Bibliographic information | 研究報告教育学習支援情報システム(CLE) (IPSJ SIG Technical Report on Collaboration and Learning Environment), Vol. 2022-CLE-38, No. 9, pp. 1-7, published 2022-10-28
ISSN | 2188-8620
Notice | SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc.
Publisher | 情報処理学会 (Information Processing Society of Japan)