
Learning Spatiotemporal Gaps between Where We Look and What We Focus on

https://ipsj.ixsq.nii.ac.jp/records/94702
Name / File: IPSJ-TCVA0500014.pdf (980.7 kB)
License: Copyright (c) 2013 by the Information Processing Society of Japan
Open Access
Item type: Trans(1)
Publication date: 2013-07-29
Title: Learning Spatiotemporal Gaps between Where We Look and What We Focus on (Language: en)
Language: eng
Keywords (Subject scheme: Other): [Regular Paper - Express Paper] saliency map, eye movement, spatiotemporal structure
Resource type: journal article (identifier: http://purl.org/coar/resource_type/c_6501)
Affiliation (all authors): Graduate School of Informatics, Kyoto University
Authors: Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama
Abstract: When we are watching videos, there are spatiotemporal gaps between where we look (points of gaze) and what we focus on (points of attentional focus), which result from temporally delayed responses or anticipation in eye movements. We focus on the underlying structure of those gaps and propose a novel learning-based model to predict where humans look in videos. The proposed model selects a relevant point of focus in the spatiotemporal neighborhood around a point of gaze, and jointly learns its salience and spatiotemporal gap with the point of gaze. It tells us “this point is likely to be looked at because there is a point of focus around the point with a reasonable spatiotemporal gap.” Experimental results with a public dataset demonstrate the effectiveness of the model to predict the points of gaze by learning a particular structure of gaps with respect to the types of eye movements and those of salient motions in videos.
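
The abstract describes the model only at a high level. As a rough illustration of the neighborhood-search idea it outlines, and not the paper's actual formulation, the following Python sketch scores a candidate point of gaze by searching a spatiotemporal neighborhood for a point of focus and weighting that point's salience by a gap preference. All names and parameters here (gaze_score, toy_gap_weight, the Gaussian weighting) are hypothetical stand-ins.

```python
# Minimal sketch of the idea in the abstract, NOT the authors' actual model:
# for a candidate point of gaze, search a spatiotemporal neighborhood for a
# point of focus, and score the pair by combining the focus point's salience
# with a preference over the gap between them. All names are hypothetical.
import numpy as np

def gaze_score(saliency, x, y, t, gap_weight, r_space=8, r_time=5):
    """Score a candidate point of gaze (x, y, t).

    saliency   : 3-D array (T, H, W) of per-pixel salience in a video.
    gap_weight : function (dx, dy, dt) -> weight, standing in for the
                 learned spatiotemporal-gap term.
    Returns the best weighted salience over the neighborhood, i.e. the
    most plausible point of focus explaining this gaze point.
    """
    T, H, W = saliency.shape
    best = 0.0
    for dt in range(-r_time, r_time + 1):        # temporal gap (delay or anticipation)
        for dy in range(-r_space, r_space + 1):  # vertical spatial gap
            for dx in range(-r_space, r_space + 1):  # horizontal spatial gap
                tt, yy, xx = t + dt, y + dy, x + dx
                if 0 <= tt < T and 0 <= yy < H and 0 <= xx < W:
                    s = saliency[tt, yy, xx] * gap_weight(dx, dy, dt)
                    best = max(best, s)
    return best

# Toy gap preference: a Gaussian favoring small spatial gaps and a slight
# temporal delay (parameters are illustrative only).
def toy_gap_weight(dx, dy, dt, sigma_s=4.0, delay=2.0, sigma_t=2.0):
    return np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2)
                  - (dt - delay)**2 / (2 * sigma_t**2))

rng = np.random.default_rng(0)
sal = rng.random((20, 64, 64))  # stand-in saliency volume for a short clip
print(gaze_score(sal, x=32, y=32, t=10, gap_weight=toy_gap_weight))
```

Per the abstract, the paper learns the gap term jointly with salience from eye-movement data; the fixed Gaussian above merely stands in for that learned component.
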
Bibliographic information: IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 5, pp. 75-79, issued 2013-07-29
ISSN: 1882-6695
Publisher: Information Processing Society of Japan


Powered by WEKO3