Modeling Spatiotemporal Correlations between Video Saliency and Gaze Dynamics
https://ipsj.ixsq.nii.ac.jp/records/100973
License: Copyright (c) 2014 by the Information Processing Society of Japan
Access: Open Access
Item type | SIG Technical Reports(1)
---|---
Publication date | 2014-05-08
Title | Modeling Spatiotemporal Correlations between Video Saliency and Gaze Dynamics
Title language | en
Language | eng
Resource type identifier | http://purl.org/coar/resource_type/c_18gh
Resource type | technical report
Author affiliation | Presently with The University of Tokyo / Kyoto University
Author affiliation | Kyoto University
Author affiliation | Kyoto University
Author affiliation (en) | Presently with The University of Tokyo / Kyoto University
Author affiliation (en) | Kyoto University
Author affiliation (en) | Kyoto University
Author name | Ryo Yonetani
Author name (en) | Ryo Yonetani
Abstract (type: Other) | In this study, we propose a framework to describe a relationship, termed spatiotemporal correlation, between video content and human gaze dynamics. The spatiotemporal correlation consists of (1) the event-level spatiotemporal gaps between visual events in videos and gaze reactions, and (2) the scene-level correlations between video scene structures and the corresponding gaze dynamics. Our framework describes this twofold relationship simply and efficiently by discovering and combining primitive spatiotemporal patterns of visually salient regions in videos and those of gaze. The effectiveness of the framework is confirmed via several practical tasks of gaze behavior analysis in real environments: attentional target identification, attentive state estimation, and gaze point prediction.
Abstract (en, type: Other) | In this study, we propose a framework to describe a relationship, termed spatiotemporal correlation, between video content and human gaze dynamics. The spatiotemporal correlation consists of (1) the event-level spatiotemporal gaps between visual events in videos and gaze reactions, and (2) the scene-level correlations between video scene structures and the corresponding gaze dynamics. Our framework describes this twofold relationship simply and efficiently by discovering and combining primitive spatiotemporal patterns of visually salient regions in videos and those of gaze. The effectiveness of the framework is confirmed via several practical tasks of gaze behavior analysis in real environments: attentional target identification, attentive state estimation, and gaze point prediction. (See the illustrative sketch after this record.)
Bibliographic record ID | NCID AA11131797
Bibliographic information | IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), Vol. 2014-CVIM-192, No. 32, pp. 1-16, issued 2014-05-08
Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc.
Publisher (ja) | Information Processing Society of Japan
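The abstract above mentions event-level spatiotemporal gaps between visual events in videos and gaze reactions. The Python sketch below is only a generic illustration of that idea, not the model proposed in the report: it estimates the temporal gap between a hypothetical per-frame saliency signal and a per-frame gaze-reaction signal by cross-correlation. The function name, signals, and frame rate are all assumptions made for this example.

```python
# Illustrative only: estimate an event-level temporal gap between a saliency
# signal and a gaze-reaction signal via cross-correlation. This is NOT the
# framework from the report; every name and value here is a made-up assumption.
import numpy as np

def estimate_gaze_delay(saliency, gaze_response, fps=30.0):
    """Return the lag (seconds) at which gaze_response best follows saliency.

    saliency      : 1-D array, e.g. per-frame magnitude of the most salient region
    gaze_response : 1-D array, e.g. per-frame gaze speed or proximity to that region
    """
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    g = (gaze_response - gaze_response.mean()) / (gaze_response.std() + 1e-8)
    corr = np.correlate(g, s, mode="full")      # correlation at every relative lag
    lags = np.arange(-len(s) + 1, len(g))       # lag (in frames) for each corr entry
    return lags[np.argmax(corr)] / fps          # positive value: gaze lags the video

# Hypothetical usage: a saliency burst at frame 100 and a gaze reaction 6 frames
# (0.2 s at 30 fps) later should yield an estimated delay of about 0.2 seconds.
t = np.arange(300)
saliency = np.exp(-0.5 * ((t - 100) / 3.0) ** 2)
gaze = np.exp(-0.5 * ((t - 106) / 3.0) ** 2)
print(estimate_gaze_delay(saliency, gaze))
```

Normalizing both signals before correlating keeps the estimated lag insensitive to their absolute scales; the scene-level correlations and pattern-discovery components described in the abstract are beyond this simple sketch.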