| Item type | SIG Technical Reports(1) |
| Publication date | 2024-06-07 |
| Title | Enhancing Feature Integration to Improve Classification Accuracy of Similar Categories in Acoustic Scene Classification |
| Language | eng |
| Keyword (subject scheme: Other) | Poster Session 2 |
| Resource type identifier | http://purl.org/coar/resource_type/c_18gh |
| Resource type | technical report |
| Author affiliation (all authors) | The University of Tokyo |
| Author names | Shuting Hao; Daisuke Saito; Nobuaki Minematsu |
| Abstract | This study focuses on Acoustic Scene Classification (ASC), which categorizes environmental audio streams into predefined semantic labels. We introduce a novel architecture that integrates multi-layer classifiers and direct fine-tuning, presenting a new perspective in ASC research. The study employs the TAU Urban Acoustic Scenes 2022 Mobile dataset for fine-tuning and validation. We utilized the SSAST model, pre-trained on the AudioSet and LibriSpeech datasets, and fine-tuned it on the TAU dataset with a unique approach to enhance ASC-specific feature learning. Our layered SSAST system achieved an accuracy of 52.17% and an AUC of 88.66% in ASC, marking a notable improvement over the baseline with absolute increases of 0.99% in accuracy and 0.85% in AUC. |
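The abstract describes attaching classifiers to multiple layers of a pre-trained SSAST backbone and integrating their outputs. The sketch below is a minimal PyTorch illustration of that layered-classifier idea only; the generic transformer blocks, the per-layer linear heads, the mean pooling, and the learned softmax fusion weights are illustrative assumptions, not the authors' actual SSAST implementation or hyperparameters.

```python
# Minimal sketch (PyTorch) of a "multi-layer classifier" head: one classifier per
# transformer layer, with the per-layer logits fused by learned weights. The
# encoder blocks here are generic stand-ins for the SSAST backbone.
import torch
import torch.nn as nn


class LayeredClassifier(nn.Module):
    def __init__(self, dim=768, num_layers=12, num_classes=10):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
        # Stand-in encoder blocks; in practice these would be loaded from the
        # SSAST checkpoint pre-trained on AudioSet and LibriSpeech.
        self.encoder_layers = nn.ModuleList(
            [nn.TransformerEncoder(block, num_layers=1) for _ in range(num_layers)]
        )
        # One classification head per encoder layer.
        self.heads = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_layers)]
        )
        # Learned fusion weights over the per-layer predictions (assumption).
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, patch_embeddings):
        # patch_embeddings: (batch, num_patches, dim) spectrogram patch tokens
        x = patch_embeddings
        per_layer_logits = []
        for block, head in zip(self.encoder_layers, self.heads):
            x = block(x)
            per_layer_logits.append(head(x.mean(dim=1)))  # pool patches, classify
        logits = torch.stack(per_layer_logits, dim=0)       # (layers, batch, classes)
        weights = torch.softmax(self.layer_weights, dim=0)  # integrate layer-wise features
        return (weights[:, None, None] * logits).sum(dim=0)


if __name__ == "__main__":
    model = LayeredClassifier()
    dummy = torch.randn(2, 512, 768)   # e.g., patch tokens from a log-mel spectrogram
    print(model(dummy).shape)          # torch.Size([2, 10]) -- 10 TAU scene classes
```

In the reported system, the backbone would then be fine-tuned directly on the TAU Urban Acoustic Scenes 2022 Mobile data rather than trained from scratch as in this stand-in.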
| Bibliographic record ID (NCID) | AN10438388 |
| Bibliographic information | IPSJ SIG Technical Report: Music and Computer (MUS), Vol. 2024-MUS-140, No. 53, pp. 1-5, published 2024-06-07 |
| ISSN | 2188-8752 |
| Notice | SIG Technical Reports are nonrefereed and hence may later appear in any journals, conferences, symposia, etc. |
| Publisher | Information Processing Society of Japan (情報処理学会) |