

A Method for Embedding Context to Sound-based Life Log

https://ipsj.ixsq.nii.ac.jp/records/102603
Name / File | License | Action
IPSJ-JNL5508024.pdf IPSJ-JNL5508024 (4.0 MB)
Copyright (c) 2014 by the Information Processing Society of Japan
Open Access
Item type: Journal(1)
Publication date: 2014-08-15
Title: A Method for Embedding Context to Sound-based Life Log (language: en)
Language: eng
Keywords
Subject scheme: Other
Subject: [General Paper] wearable computing, gesture recognition, environment recognition, ultrasonic sound, life log, location recognition, person recognition
Resource type
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Resource type: journal article
Author affiliation: Graduate School of Engineering, Kobe University
Author affiliation: Graduate School of Engineering, Kobe University / PRESTO, Japan Science and Technology Agency
Author affiliation: Graduate School of Engineering, Kobe University
Author(s): Hiroki, Watanabe; Tsutomu, Terada; Masahiko, Tsukamoto
Abstract
Description type: Other
Description: Wearable computing technologies are attracting a great deal of attention for context-aware systems, which recognize user context with wearable sensors. Conventional context-aware systems use accelerometers or microphones; the former requires wearing many sensors plus storage such as a PC for recording the data, while the latter cannot recognize complex user motions. In this paper, we propose an activity and context recognition method in which the user carries a neck-worn receiver comprising a microphone, and small speakers on his/her wrists that generate ultrasound. The system recognizes gestures on the basis of the volume of the received sound and the Doppler effect: the former indicates the distance between the neck and the wrists, and the latter indicates the speed of the motions. We combine this ultrasound-based gesture recognition with conventional MFCC-based environmental-context recognition to recognize complex contexts from the recorded sound. Our approach thus substitutes ultrasound for the wired or wireless communication typically required in body-area motion-sensing networks. The system also recognizes the place the user is in and the people who are near the user from ID signals generated by speakers placed in rooms and worn by people. The strength of the approach is that, for offline recognition, a simple audio recorder can serve as the receiver: all contexts are embedded together in the recorded sound, and this recorded sound forms a sound-based life log with context information. We evaluated the approach on nine gestures/activities with 10 users. When there was no environmental sound generated by other people, the recognition rate was 86.6% on average. When there was environmental sound generated by other people, we compared an approach that selects the feature values to use depending on the situation against a standard approach that always uses the feature values of both the ultrasound and the environmental sound; the recognition rate was 64.3% for the proposed approach and 57.3% for the standard approach.

------------------------------
This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.22 (2014) No.4 (online), DOI: http://dx.doi.org/10.2197/ipsjjip.22.651
------------------------------
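The Doppler-based speed estimation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 20 kHz carrier frequency, 96 kHz sample rate, window length, and the naive DFT peak scan are all assumptions made for the example; a real system would use an FFT and a proper window.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at ~20 °C (assumption)
CARRIER_HZ = 20_000.0    # hypothetical ultrasound carrier on the wrist speaker
SAMPLE_RATE = 96_000     # hypothetical recorder sample rate

def dominant_freq(samples, sample_rate, lo, hi, step=1.0):
    """Scan candidate frequencies with a naive DFT and return the one
    whose projection onto the signal has the largest magnitude."""
    best_f, best_mag = lo, -1.0
    f = lo
    while f <= hi:
        re = sum(s * math.cos(2 * math.pi * f * n / sample_rate)
                 for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * n / sample_rate)
                 for n, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_mag, best_f = mag, f
        f += step
    return best_f

def radial_speed(observed_hz, carrier_hz=CARRIER_HZ, c=SPEED_OF_SOUND):
    """First-order Doppler approximation: v ≈ c * Δf / f0
    (positive = wrist moving toward the neck-worn microphone)."""
    return c * (observed_hz - carrier_hz) / carrier_hz

# Simulate a wrist moving toward the microphone at ~1 m/s:
true_v = 1.0
shifted = CARRIER_HZ * (1 + true_v / SPEED_OF_SOUND)   # Doppler-shifted tone
tone = [math.sin(2 * math.pi * shifted * n / SAMPLE_RATE) for n in range(4096)]
f_est = dominant_freq(tone, SAMPLE_RATE, 19_950, 20_170, step=1.0)
v_est = radial_speed(f_est)
```

With a 20 kHz carrier, a 1 m/s motion shifts the received tone by roughly 58 Hz, which is easily resolvable in a ~43 ms analysis window; this is the quantity the paper's gesture recognizer combines with received volume (distance) to classify gestures.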
Bibliographic record ID
Source identifier type: NCID
Source identifier: AN00116647
Bibliographic information: 情報処理学会論文誌

Vol. 55, No. 8, issue date 2014-08-15
ISSN
Source identifier type: ISSN
Source identifier: 1882-7764
Powered by WEKO3

