WEKO3
Item
MirrorNet: A Deep Reflective Approach to 2D Pose Estimation for Single-Person Images
https://ipsj.ixsq.nii.ac.jp/records/211203
Name / File | License | Action
---|---|---
 | Copyright (c) 2021 by the Information Processing Society of Japan |

Open Access
Item type | Journal(1)
---|---
Publication date | 2021-05-15
Title (en) | MirrorNet: A Deep Reflective Approach to 2D Pose Estimation for Single-Person Images
Language | eng
Keywords (scheme: Other) | [Regular Paper] 2D pose estimation, amortized variational inference, variational autoencoder, mirror system
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Author affiliations | Waseda University; Kyoto University; National Institute of Advanced Industrial Science and Technology (AIST); National Institute of Advanced Industrial Science and Technology (AIST); National Institute of Advanced Industrial Science and Technology (AIST); Waseda Research Institute for Science and Engineering
Authors | Takayuki Nakatsuka, Kazuyoshi Yoshii, Yuki Koyama, Satoru Fukayama, Masataka Goto, Shigeo Morishima
Abstract (description type: Other) | This paper proposes a statistical approach to 2D pose estimation from human images. The main problems with the standard supervised approach, which is based on a deep recognition (image-to-pose) model, are that it often yields anatomically implausible poses and that its performance is limited by the amount of paired data. To solve these problems, we propose a semi-supervised method that can make effective use of images both with and without pose annotations. Specifically, we formulate a hierarchical generative model of poses and images by integrating a deep generative model of poses from pose features with one of images from poses and image features. We then introduce a deep recognition model that infers poses from images. Given images as observed data, these models can be trained jointly in a hierarchical variational autoencoding (image-to-pose-to-feature-to-pose-to-image) manner. Experimental results show that the proposed reflective architecture makes estimated poses anatomically plausible, and that pose estimation performance is improved both by integrating the recognition and generative models and by feeding in non-annotated images. ------------------------------ This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. The article should be cited as: Journal of Information Processing, Vol. 29 (2021) (online). DOI: http://dx.doi.org/10.2197/ipsjjip.29.406 ------------------------------
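The training cycle named in the abstract can be pictured concretely. Below is a minimal sketch, in PyTorch, of the hierarchical image-to-pose-to-feature-to-pose-to-image autoencoding loop: a recognition model maps an image to a pose, a pose encoder maps the pose to a latent feature, and the generative models map feature back to pose and pose back to image. This is not the authors' implementation; every dimension, module, and loss term here (IMG_DIM, POSE_DIM, FEAT_DIM, the plain MLP encoders/decoders, the unweighted MSE and KL terms) is an illustrative assumption, not a detail from the paper.

```python
# A minimal sketch (NOT the paper's code) of the hierarchical
# image-to-pose-to-feature-to-pose-to-image autoencoding cycle.
# All sizes and module choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, POSE_DIM, FEAT_DIM = 784, 34, 16  # assumed: 17 joints x 2 coords


def mlp(n_in, n_out):
    """Tiny two-layer MLP standing in for the paper's deep models."""
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_out))


class MirrorSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.img2pose = mlp(IMG_DIM, POSE_DIM * 2)    # recognition model q(pose | image)
        self.pose2feat = mlp(POSE_DIM, FEAT_DIM * 2)  # pose encoder q(feature | pose)
        self.feat2pose = mlp(FEAT_DIM, POSE_DIM)      # generative model p(pose | feature)
        self.pose2img = mlp(POSE_DIM, IMG_DIM)        # generative model p(image | pose)

    @staticmethod
    def sample(stats):
        """Reparameterized Gaussian sample from concatenated (mu, logvar)."""
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, image, pose_gt=None):
        pose, _, _ = self.sample(self.img2pose(image))        # image  -> pose
        feat, mu, logvar = self.sample(self.pose2feat(pose))  # pose   -> feature
        pose_rec = self.feat2pose(feat)                       # feature -> pose
        image_rec = self.pose2img(pose_rec)                   # pose   -> image
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        loss = F.mse_loss(image_rec, image) + kl              # unsupervised, ELBO-style
        if pose_gt is not None:                               # supervised term when a
            loss = loss + F.mse_loss(pose, pose_gt)           # pose annotation exists
        return loss


model = MirrorSketch()
loss_unsup = model(torch.randn(8, IMG_DIM))                          # non-annotated batch
loss_sup = model(torch.randn(8, IMG_DIM), torch.randn(8, POSE_DIM))  # annotated batch
```

The point of the sketch is the semi-supervised data flow: annotated and non-annotated batches pass through the same reflective loop, and only the extra pose loss distinguishes them, which is how the method can exploit images without pose annotations.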
Bibliographic record ID (NCID) | AN00116647
Bibliographic information | 情報処理学会論文誌 (IPSJ Journal), Vol. 62, No. 5, issued 2021-05-15
ISSN | 1882-7764