
Item

  1. SIG Technical Reports
  2. Computer Vision and Image Media (CVIM)
  3. 2023
  4. 2023-CVIM-234

Towards Better Representation and Interpretability for Deep Neural Networks on Visual Tasks

https://ipsj.ixsq.nii.ac.jp/records/225946
e03fc12f-70d8-4070-9238-7f67fd963881
Name / File: IPSJ-CVIM23234002.pdf (3.4 MB)
License: Copyright (c) 2023 by the Information Processing Society of Japan
Open access
Item type SIG Technical Reports(1)
Publication date 2023-05-11
Title
Title Towards Better Representation and Interpretability for Deep Neural Networks on Visual Tasks
Language en
Language
Language eng
Keywords
Subject scheme Other
Subject D論セッション (doctoral dissertation session)
Resource type
Resource type identifier http://purl.org/coar/resource_type/c_18gh
Resource type technical report
Author affiliation Osaka University (all four authors)
Author affiliation (English) Osaka University (all four authors)
Author name Bowen, Wang; Liangzhi, Li; Yuta, Nakashima; Hajime, Nagahara
Author name (English) Bowen, Wang; Liangzhi, Li; Yuta, Nakashima; Hajime, Nagahara
Abstract
Description type Other
Description Deep Neural Networks (DNNs) have shown their power in many research fields, and related applications are entering people's daily lives with unstoppable momentum. However, the large number of training parameters in DNNs makes it difficult to learn representations from real-world data efficiently, and their black-box nature harms explainability. In this thesis, we show how to design a DNN for better representation, as well as how to interpret its behavior for reliable artificial intelligence (AI). By embedding a slot-attention-based XAI module, we find that a DNN model becomes interpretable and that representation learning benefits from this interpretability. XAI methods are further extended to find representations in a simple classification task; the found representations are then transferred as training data for a complex object detection task, realizing weak supervision. In two different real-world scenarios, we show that our proposal encourages DNNs to learn better representations and makes them interpretable.
Abstract (English)
Description type Other
Description Deep Neural Networks (DNNs) have shown their power in many research fields, and related applications are entering people's daily lives with unstoppable momentum. However, the large number of training parameters in DNNs makes it difficult to learn representations from real-world data efficiently, and their black-box nature harms explainability. In this thesis, we show how to design a DNN for better representation, as well as how to interpret its behavior for reliable artificial intelligence (AI). By embedding a slot-attention-based XAI module, we find that a DNN model becomes interpretable and that representation learning benefits from this interpretability. XAI methods are further extended to find representations in a simple classification task; the found representations are then transferred as training data for a complex object detection task, realizing weak supervision. In two different real-world scenarios, we show that our proposal encourages DNNs to learn better representations and makes them interpretable.
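
The abstract describes embedding a slot-attention-based XAI module so that per-class attention maps serve both as visual explanations and as a guide for representation learning. As a rough illustration only, the sketch below shows a slot-attention-style classification head in that spirit; the class and parameter names, layer sizes, iteration scheme, and the way logits are read off the attention maps are assumptions of this sketch, not the module actually used in the thesis.

```python
# Illustrative sketch only: a slot-attention-style classification head whose
# per-slot attention maps double as visual explanations. Shapes, layer sizes,
# and the logit read-out are assumptions, not the thesis' actual module.
import torch
import torch.nn as nn


class SlotAttentionHead(nn.Module):
    """One learned slot per class; each slot attends over the spatial
    positions of a backbone feature map, and its attention map is the
    explanation for that class."""

    def __init__(self, num_classes: int, feat_dim: int, slot_dim: int = 64, iters: int = 3):
        super().__init__()
        self.iters = iters
        self.scale = slot_dim ** -0.5
        self.slots = nn.Parameter(torch.randn(num_classes, slot_dim))
        self.proj_in = nn.Linear(feat_dim, slot_dim)  # project backbone features to slot space
        self.to_q = nn.Linear(slot_dim, slot_dim)
        self.to_k = nn.Linear(slot_dim, slot_dim)
        self.to_v = nn.Linear(slot_dim, slot_dim)
        self.gru = nn.GRUCell(slot_dim, slot_dim)     # iterative slot update

    def forward(self, feat_map: torch.Tensor):
        b, c, h, w = feat_map.shape
        x = self.proj_in(feat_map.flatten(2).transpose(1, 2))      # (B, HW, D)
        k, v = self.to_k(x), self.to_v(x)
        slots = self.slots.unsqueeze(0).expand(b, -1, -1)          # (B, K, D)
        attn = None
        for _ in range(self.iters):
            q = self.to_q(slots)
            logits = torch.einsum('bkd,bnd->bkn', q, k) * self.scale
            attn = logits.softmax(dim=1)                           # slots compete for each position
            weights = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
            updates = torch.einsum('bkn,bnd->bkd', weights, v)     # per-slot feature aggregate
            slots = self.gru(updates.reshape(-1, updates.shape[-1]),
                             slots.reshape(-1, slots.shape[-1])).view(b, -1, updates.shape[-1])
        class_logits = attn.sum(dim=-1)                            # evidence each class slot claims
        expl_maps = attn.reshape(b, -1, h, w)                      # (B, K, H, W) explanation maps
        return class_logits, expl_maps


# Hypothetical usage with a ResNet-like feature map of shape (B, 512, 7, 7).
feats = torch.randn(2, 512, 7, 7)
head = SlotAttentionHead(num_classes=10, feat_dim=512)
logits, maps = head(feats)
print(logits.shape, maps.shape)   # torch.Size([2, 10]) torch.Size([2, 10, 7, 7])
```

The explanation maps could then, as the abstract suggests, be thresholded into pseudo boxes or masks to weakly supervise a downstream detector; that transfer step is omitted from this sketch.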
Bibliographic record ID
Source identifier type NCID
Source identifier AA11131797
Bibliographic information 研究報告コンピュータビジョンとイメージメディア(CVIM) (IPSJ SIG Technical Reports on Computer Vision and Image Media)

Volume 2023-CVIM-234, No. 2, pp. 1-16, Issued 2023-05-11
ISSN
Source identifier type ISSN
Source identifier 2188-8701
Notice
SIG Technical Reports are non-refereed and may therefore later be published in journals, conference proceedings, symposia, etc.
Publisher
Language ja
Publisher 情報処理学会 (Information Processing Society of Japan)