Learning Spatiotemporal Gaps between Where We Look and What We Focus on
https://ipsj.ixsq.nii.ac.jp/records/94702
Name / File | License | Action
---|---|---
 | Copyright (c) 2013 by the Information Processing Society of Japan | Open Access
Item type | Trans(1)
---|---
Date of release | 2013-07-29
Title | Learning Spatiotemporal Gaps between Where We Look and What We Focus on
Title language | en
Language | eng
Keyword scheme | Other
Keywords | [Regular Paper - Express Paper] saliency map, eye movement, spatiotemporal structure
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Author affiliation | Graduate School of Informatics, Kyoto University (all three authors)
Authors | Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama
Abstract | When we watch videos, there are spatiotemporal gaps between where we look (points of gaze) and what we focus on (points of attentional focus), which result from temporally delayed responses or anticipation in eye movements. We focus on the underlying structure of these gaps and propose a novel learning-based model to predict where humans look in videos. The proposed model selects a relevant point of focus in the spatiotemporal neighborhood around a point of gaze, and jointly learns its salience and its spatiotemporal gap from the point of gaze. It tells us “this point is likely to be looked at because there is a point of focus around it with a reasonable spatiotemporal gap.” Experimental results on a public dataset demonstrate the effectiveness of the model in predicting points of gaze by learning a particular structure of gaps with respect to the types of eye movements and the types of salient motions in videos. (An illustrative sketch of this idea follows the record.)
Bibliographic information | IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 5, pp. 75-79, issued 2013-07-29
ISSN | 1882-6695
Publisher | Information Processing Society of Japan
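
The abstract above describes the core mechanism only in prose: a candidate gaze point is explained by a salient point of focus found in its spatiotemporal neighborhood, weighted by how plausible the gap between the two points is. Below is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: the function name `gaze_score`, the array shapes, the neighborhood radii, and the fixed Gaussian gap prior are all assumptions made for illustration; in the paper, the gap structure is learned jointly with the salience term, per type of eye movement.

```python
import numpy as np

def gaze_score(saliency, gaze_xyt, radius_xy=16, radius_t=8,
               gap_mean=None, gap_cov=None):
    """Score how plausible (x, y, t) is as a point of gaze.

    saliency -- 3-D array (T, H, W) of per-frame saliency values in [0, 1].
    gap_mean, gap_cov -- parameters of a Gaussian over the spatiotemporal
        gap (dx, dy, dt); fixed stand-ins here for what the paper learns.
    """
    T, H, W = saliency.shape
    x, y, t = gaze_xyt
    if gap_mean is None:
        gap_mean = np.zeros(3)
    if gap_cov is None:
        gap_cov = np.diag([radius_xy**2, radius_xy**2, radius_t**2]) / 4.0
    cov_inv = np.linalg.inv(gap_cov)

    best = -np.inf
    # Enumerate candidate points of focus in the spatiotemporal
    # neighborhood around the gaze point.
    for dt in range(-radius_t, radius_t + 1):
        for dy in range(-radius_xy, radius_xy + 1):
            for dx in range(-radius_xy, radius_xy + 1):
                px, py, pt = x + dx, y + dy, t + dt
                if not (0 <= px < W and 0 <= py < H and 0 <= pt < T):
                    continue
                gap = np.array([dx, dy, dt], dtype=float) - gap_mean
                # Log-salience of the candidate focus point plus a
                # Gaussian log-prior on its gap from the gaze point:
                # a high score means a salient focus with a reasonable gap.
                score = (np.log(saliency[pt, py, px] + 1e-8)
                         - 0.5 * gap @ cov_inv @ gap)
                best = max(best, score)
    return best

# Usage on synthetic data: score the center of a random saliency volume.
rng = np.random.default_rng(0)
sal = rng.random((32, 64, 64))          # (T, H, W)
print(gaze_score(sal, gaze_xyt=(32, 32, 16)))
```

Taking the maximum over candidates mirrors the abstract's "selects a relevant point of focus"; swapping the fixed Gaussian for gap parameters learned per eye-movement and salient-motion type is where the paper's learning actually happens.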