| Field | Value |
|---|---|
| Item type | SIG Technical Reports (1) |
| Publication date | 2023-05-11 |
| Title (en) | Towards Better Representation and Interpretability for Deep Neural Networks on Visual Tasks |
| Language | eng |
| Keywords | Doctoral thesis session (D論セッション); subject scheme: Other |
| Resource type | technical report (http://purl.org/coar/resource_type/c_18gh) |
| Author affiliation | Osaka University (all authors) |
| Authors | Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara |
| Abstract | Deep Neural Networks (DNNs) have shown their power in many research fields, and related applications are entering people's daily lives with unstoppable momentum. However, the large number of training parameters in DNNs makes it difficult to learn representations from real-world data efficiently, and their black-box nature harms their explainability. In this thesis, we show how to design a DNN for better representation and how to interpret its behavior for reliable artificial intelligence (AI). By embedding a slot-attention-based XAI module, we make a DNN model interpretable, and representation learning benefits from this interpretability. We further extend XAI methods to find representations in a simple classification task; the found representations are then transferred as training data for a more complex object detection task, realizing weak supervision. In two different real-world scenarios, we show that our proposals encourage DNNs to learn better representations while remaining interpretable. |
| Bibliographic record ID (NCID) | AA11131797 |
| Bibliographic information | 研究報告コンピュータビジョンとイメージメディア (IPSJ SIG Technical Report on Computer Vision and Image Media, CVIM), Vol. 2023-CVIM-234, No. 2, pp. 1-16, published 2023-05-11 |
| ISSN | 2188-8701 |
| Notice | SIG Technical Reports are nonrefereed and hence may later appear in journals, conferences, symposia, etc. |
| Publisher (ja) | 情報処理学会 (Information Processing Society of Japan) |
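
The abstract refers to a slot-attention-based XAI module whose attention maps both drive classification and serve as explanations. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: the class names, tensor shapes, one-slot-per-class design, and the sigmoid-attention aggregation are assumptions chosen for illustration, and a CNN backbone producing a (B, C, H, W) feature map is assumed.

```python
# Hypothetical sketch only: one learnable slot per class attends over the
# backbone's spatial features; the attention a slot collects is summed into
# that class's logit, and the same attention map serves as the explanation.
import torch
import torch.nn as nn


class SlotExplanationHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, slot_dim: int = 64):
        super().__init__()
        # One slot vector per class (illustrative assumption).
        self.slots = nn.Parameter(torch.randn(num_classes, slot_dim) * 0.02)
        self.to_key = nn.Linear(feat_dim, slot_dim)  # project features to keys
        self.scale = slot_dim ** -0.5

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W) feature map from a CNN backbone.
        b, c, h, w = feat.shape
        keys = self.to_key(feat.flatten(2).transpose(1, 2))   # (B, HW, D)
        attn = torch.einsum("nd,bpd->bnp", self.slots, keys)  # (B, classes, HW)
        attn = torch.sigmoid(attn * self.scale)               # soft spatial support per class
        logits = attn.sum(dim=-1)                              # attention mass as class evidence
        maps = attn.view(b, -1, h, w)                          # per-class explanation maps
        return logits, maps


if __name__ == "__main__":
    head = SlotExplanationHead(feat_dim=512, num_classes=10)
    feats = torch.randn(2, 512, 7, 7)                          # e.g. a ResNet-style conv feature map
    logits, maps = head(feats)
    print(logits.shape, maps.shape)                            # (2, 10), (2, 10, 7, 7)
```

In such a setup, the logits would be trained with an ordinary classification loss, while the per-class attention maps can be upsampled and overlaid on the input image as the visual explanation.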