Item type |
SIG Technical Reports(1) |
Publication date |
2015-05-11 |
Title |
Viewpoint-independent Action Recognition Method using Depth Image |
Language |
eng |
Resource type identifier |
http://purl.org/coar/resource_type/c_18gh |
Resource type |
technical report |
Author affiliation |
Hitachi, Ltd., Research & Development Group, Center for Technology Innovation - Controls |
Author affiliation |
Chubu University |
Author |
Ryo, Yumiba
Hironobu, Fujiyoshi |
Abstract |
In this paper, we propose action recognition methods that use depth images to recognize actions independently of the viewpoint. We first propose a method that reduces the influence of changes in the orientation of the observed people while limiting the number of training samples needed to cope with viewpoint changes between training and recognition. During training, we expand the training samples into three-view drawings by virtually changing the viewpoint within a predetermined range, and we learn weak classifier candidates for discriminating the action categories at each viewpoint. We then learn a strong classifier suited to the viewpoint at recognition time from these weak classifier candidates and a limited number of training samples. Furthermore, to enlarge the region covered by the action recognition, we propose a method that handles cases in which the camera is so close to the person that parts of the body, e.g., an arm or a leg, are missing because they protrude outside the viewing angle. In this method, motion features that fall outside the viewing angle are compensated for before the action categories are discriminated, using a regression estimate based on the correlation between the motion features of the body parts outside the viewing angle and those of full-body images. Experimental results showed that, compared with conventional action recognition methods, the proposed methods successfully recognized the action categories even when the observed people changed orientation or were partially outside the view. |
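Note |
The abstract above describes the viewpoint-expansion step only at a high level. The Python sketch below illustrates one possible reading of that step, in which a depth image is back-projected to a 3-D point cloud, the cloud is rotated to a set of virtual viewpoints within a preset yaw range, and each rotated cloud is re-projected to a synthetic depth image used as an additional training sample. All function names, parameters, and the yaw range are hypothetical assumptions for illustration and are not taken from the paper.

# Hypothetical sketch of the viewpoint-expansion idea (not the authors' implementation).
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters, zeros = invalid) to an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def points_to_depth(points, fx, fy, cx, cy, shape):
    """Re-project a point cloud to a depth image, keeping the nearest point per pixel."""
    h, w = shape
    depth = np.zeros((h, w), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_front = z > 0
    u = np.round(x[in_front] * fx / z[in_front] + cx).astype(int)
    v = np.round(y[in_front] * fy / z[in_front] + cy).astype(int)
    z = z[in_front]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]
    order = np.argsort(-z)                # write far points first so near points overwrite
    depth[v[order], u[order]] = z[order]
    return depth

def expand_viewpoints(depth, fx, fy, cx, cy, yaw_degrees=(-30, -15, 0, 15, 30)):
    """Generate synthetic depth images for virtual viewpoints within a preset yaw range."""
    points = depth_to_points(depth, fx, fy, cx, cy)
    center = points.mean(axis=0)
    samples = []
    for deg in yaw_degrees:
        a = np.deg2rad(deg)
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0,       1.0, 0.0      ],
                        [-np.sin(a), 0.0, np.cos(a)]])
        rotated = (points - center) @ rot.T + center   # rotate about the cloud centroid
        samples.append(points_to_depth(rotated, fx, fy, cx, cy, depth.shape))
    return samples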
Bibliographic record ID |
NCID |
AA11131797 |
Bibliographic information |
IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM)
Vol. 2015-CVIM-197,
No. 30,
pp. 1-16,
Issued 2015-05-11 |
ISSN |
2188-8701 |
Notice |
SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
Publisher |
Information Processing Society of Japan (情報処理学会) |