Item type |
SIG Technical Reports(1) |
Release Date |
2022-01-20 |
Title | Acoustic Human Action Classification and 3D Human Pose Estimation |
Title Language | en |
Language | eng |
Keywords |
Subject Scheme | Other |
Subject | Action recognition |
Resource Type |
Resource Type Identifier | http://purl.org/coar/resource_type/c_18gh |
Resource Type | technical report |
Author Affiliation | Keio University |
Author Affiliation | Keio University |
Author Affiliation | Nippon Telegraph and Telephone Corporation |
Author Affiliation | Nippon Telegraph and Telephone Corporation |
Author Affiliation | Nippon Telegraph and Telephone Corporation |
Author Affiliation | Keio University |
Author Names | Yutaka Kawashima, Yuto Shibata, Mariko Isogawa, Go Irie, Akisato Kimura, Yoshimitsu Aoki |
Abstract |
Description Type | Other |
Description | Most existing methods for inferring human behavior, such as actions or poses, rely on visible light or wireless signals as cues. However, visible light is easily restricted by poor lighting conditions (e.g., dark rooms, night roads), and wireless signals often have limited use (e.g., highly instrumented patient care areas where electronic devices must remain off). Unlike these methods, we explore how low-level acoustic signals can provide enough clues to estimate human behavior through active acoustic sensing with a single pair of microphones and loudspeakers (see Fig. 1). This is quite a challenging task, since sound is much more diffractive than the visible light or RF/WiFi signals that most existing methods use and therefore obscures the shape of objects in a scene. To this end, we introduce a framework that encodes multichannel audio features into human activity classes or 3D human poses. Our framework requires only a minimal active sensing system with a single pair of ambisonics microphones and loudspeakers. Aiming to capture subtle sound changes that reveal detailed pose information, we explicitly extract phase features from the recorded audio signals together with typical spectrum features, and feed them into our 1D convolutional neural network to learn non-linear mappings from the features to the target. Our experiments suggest that, using only low-dimensional acoustic information, our method outperforms baseline methods. |
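Note: To make the pipeline described in the abstract concrete, below is a minimal sketch, not the authors' implementation. It extracts log-magnitude (spectrum) and phase features per channel with an STFT and feeds them to a small 1D convolutional network over the time axis. All layer sizes, the channel count, the STFT settings, and the number of output classes are illustrative assumptions.

    # Minimal sketch of the abstract's pipeline (assumed hyperparameters throughout).
    import torch
    import torch.nn as nn

    def extract_features(audio, n_fft=512, hop=256):
        """audio: (channels, samples) multichannel recording.
        Returns (channels * 2 * freq_bins, frames): log-magnitude and phase stacked."""
        feats = []
        for ch in audio:  # per-channel STFT
            spec = torch.stft(ch, n_fft=n_fft, hop_length=hop,
                              window=torch.hann_window(n_fft), return_complex=True)
            feats.append(torch.log1p(spec.abs()))  # spectrum feature
            feats.append(torch.angle(spec))        # phase feature
        return torch.cat(feats, dim=0)             # (feature_dims, frames)

    class AcousticActionNet(nn.Module):
        """Illustrative 1D CNN; num_outputs would be the number of action classes
        (or 3 * num_joints for 3D pose regression)."""
        def __init__(self, in_dims, num_outputs):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_dims, 256, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(256, 256, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(256, num_outputs))
        def forward(self, x):  # x: (batch, feature_dims, frames)
            return self.net(x)

    # Example: 4-channel (first-order ambisonics) audio, 10 hypothetical action classes.
    audio = torch.randn(4, 16000)
    x = extract_features(audio).unsqueeze(0)
    model = AcousticActionNet(in_dims=x.shape[1], num_outputs=10)
    print(model(x).shape)  # torch.Size([1, 10])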
Bibliographic Record ID |
Source Identifier Type | NCID |
Source Identifier | AA11131797 |
Bibliographic Information | IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), Vol. 2022-CVIM-228, No. 25, pp. 1-8, Published 2022-01-20 |
ISSN |
Source Identifier Type | ISSN |
Source Identifier | 2188-8701 |
Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
Publisher |
Language | ja |
Publisher | 情報処理学会 (Information Processing Society of Japan) |