Stetho Touch: Touch Action Recognition System by Deep Learning with Stethoscope Acoustic Sensing
https://ipsj.ixsq.nii.ac.jp/records/220346
License: Copyright (c) 2022 by the Information Processing Society of Japan
Open Access
Item type: Journal (1)
Release date: 2022-10-15
Title: Stetho Touch: Touch Action Recognition System by Deep Learning with Stethoscope Acoustic Sensing
Language: eng
Keywords: [Regular Paper] machine learning, deep learning, CHI, acoustic sensing
Resource type: journal article (http://purl.org/coar/resource_type/c_6501)
Author affiliation: Graduate of Faculty of Engineering, Sophia University (all authors)
Authors: Nagisa Masuda, Koichi Furukawa, Ikuko Eguchi Yairi
Abstract: Developing new IoT device input methods that reduce the burden on users has become an important issue. This paper proposes Stetho Touch, a system that identifies touch actions from the acoustic information produced when a user's finger makes contact with a solid object. To investigate the method, we implemented a prototype acoustic sensing device consisting of a low-pressure melamine veneer table, a stethoscope, and an audio interface. A CNN-LSTM classification model, combining a CNN and an LSTM, classified the five touch actions with 88.26% accuracy and an 87.26% f-score under leave-one-subject-out (LOSO) evaluation, and with 99.39% accuracy and a 99.39% f-score under 18-fold cross-validation. The contributions of this paper are as follows: (1) we propose a touch action recognition method using acoustic information that is more natural and accurate than existing methods; (2) we evaluate a deep-learning touch action recognition method that can run in real time on raw acoustic time-series data; and (3) we show that the user dependence of touch actions can be compensated for by providing a learning phase or by performing sequential learning during use.

This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.30 (2022) (online), DOI: http://dx.doi.org/10.2197/ipsjjip.30.718
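The abstract describes a CNN-LSTM pipeline that classifies five touch actions directly from raw acoustic time-series data: convolutional layers extract local acoustic features, and an LSTM summarizes them over time before a softmax output. The record does not give layer sizes or hyperparameters, so the NumPy sketch below is illustrative only; the single conv layer, all shapes, the strided subsampling standing in for pooling, and the random weights are assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    # x: (T, C_in), w: (K, C_in, C_out), b: (C_out,) -> (T-K+1, C_out)
    K, _, C_out = w.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU

def lstm_last_hidden(x, Wx, Wh, b, H):
    # Single-layer LSTM over x: (T, D); returns the final hidden state (H,).
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b          # gate pre-activations, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sig(i), sig(f), sig(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: a 0.5 s mono segment at 16 kHz, 5 touch-action classes.
T, C_in, C_out, H, K, n_classes = 8000, 1, 8, 16, 64, 5
x = rng.standard_normal((T, C_in))           # stands in for a raw waveform

w  = rng.standard_normal((K, C_in, C_out)) * 0.1   # conv kernel
b  = np.zeros(C_out)
Wx = rng.standard_normal((C_out, 4 * H)) * 0.1     # LSTM input weights
Wh = rng.standard_normal((H, 4 * H)) * 0.1         # LSTM recurrent weights
bl = np.zeros(4 * H)
Wo = rng.standard_normal((H, n_classes)) * 0.1     # output layer
bo = np.zeros(n_classes)

feats = conv1d_relu(x, w, b)      # CNN: local acoustic features
feats = feats[::8]                # strided subsampling in place of pooling
h = lstm_last_hidden(feats, Wx, Wh, bl, H)   # LSTM: temporal summary
probs = softmax(h @ Wo + bo)      # distribution over the 5 touch actions
print(probs.shape)                # (5,)
```

With random weights the output is of course uninformative; the point is the data flow, raw waveform → CNN features → LSTM state → 5-way softmax, which is what allows the model to consume raw acoustic time series without hand-crafted features.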
Bibliographic record ID (NCID): AN00116647
Bibliographic information: Transactions of Information Processing Society of Japan (情報処理学会論文誌), Vol. 63, No. 10, issue date 2022-10-15
ISSN: 1882-7764
Publisher: Information Processing Society of Japan (情報処理学会)