Partial and Synchronized Caption Generation to Enhance the Listening Comprehension Skills of Second Language Learners
https://ipsj.ixsq.nii.ac.jp/records/101204
License: Copyright (c) 2014 by the Information Processing Society of Japan
Access rights: Open Access
Item type: SIG Technical Reports (1)
Release date: 2014-05-15
Title: Partial and Synchronized Caption Generation to Enhance the Listening Comprehension Skills of Second Language Learners (en)
Language: eng
Keyword: Spoken language processing (音声言語処理)
Resource type: technical report (http://purl.org/coar/resource_type/c_18gh)
Author affiliation: Graduate School of Informatics, Kyoto University
Author: Maryam Sadat MIRZAEI
Abstract: Captioning is widely used by second language learners as an assistive tool for listening. However, captions often lead to word-by-word decoding and over-reliance on reading skills rather than improvement of listening skills. To encourage learners to listen to the audio instead of merely reading the text, this study introduces a novel captioning technique, partial and synchronized captioning, as an alternative listening tool for language learners. Using TED talks as a medium for training listening skills, the system employs ASR technology to synchronize the text to the speech. It then uses the learner's proficiency level to generate partial captions based on three features that impair comprehension: speech rate, word frequency, and specificity. To evaluate the system, the performance of Kyoto University students in two CALL classes was assessed with a listening comprehension test on TED talks under three conditions: no caption, full caption, and partial-and-synchronized caption. Results revealed that while reducing the textual density of captions to less than 30%, the proposed method achieves comprehension performance on par with the full-caption condition. Moreover, it outperforms the other conditions on a new segment of the same video presented without captions.
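The selection rule described in the abstract — show a word only when one of the three difficulty features (speech rate, word frequency, specificity) makes it hard to catch by ear — can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the `Word` fields, the syllable heuristic, and the thresholds (`max_rate`, `max_rank`, which would in practice be tuned to the learner's proficiency level) are all hypothetical.

```python
# Hypothetical sketch of partial-caption word selection. A word is displayed
# only if at least one difficulty feature exceeds a proficiency-dependent
# threshold; all other words are masked so the learner must rely on listening.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float       # ASR-aligned start time (seconds)
    end: float         # ASR-aligned end time (seconds)
    freq_rank: int     # corpus frequency rank (1 = most frequent)
    specific: bool     # domain-specific / academic term

def syllables(word: str) -> int:
    """Crude syllable estimate from vowel groups (illustrative only)."""
    count, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev_vowel:
            count += 1
        prev_vowel = is_vowel
    return max(count, 1)

def select_caption(words, max_rate=5.0, max_rank=3000):
    """Return the caption line: difficult words shown, easy words masked."""
    shown = []
    for w in words:
        # Local speech rate in syllables per second over this word's span.
        rate = syllables(w.text) / max(w.end - w.start, 1e-3)
        hard = rate > max_rate or w.freq_rank > max_rank or w.specific
        shown.append(w.text if hard else "...")
    return " ".join(shown)
```

Because the ASR alignment supplies per-word timestamps, the same `start`/`end` fields also let the caption be revealed in sync with the speech rather than as a static block.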
Bibliographic record ID: NCID AN10115061
Bibliographic information: IPSJ SIG Technical Report, Natural Language Processing (NL), Vol. 2014-NL-216, No. 15, pp. 1-8, issued 2014-05-15
Notice: SIG Technical Reports are non-refereed and may therefore later appear in journals, conference proceedings, symposia, etc.
Publisher: Information Processing Society of Japan (情報処理学会)