Providing Interpretability of Document Classification by Deep Neural Network with Self-attention

https://ipsj.ixsq.nii.ac.jp/records/217665
Name / File: IPSJ-TOD1502003.pdf (5.9 MB)
License: Copyright (c) 2022 by the Information Processing Society of Japan
Access: Open Access
Item type: Trans(1)
Publication date: 2022-04-07
Title: Providing Interpretability of Document Classification by Deep Neural Network with Self-attention
Language: eng
Keywords (subject scheme: Other): [Research Paper] deep learning, news documents classification, self-attention, smooth-grad, LSTM
Resource type: journal article (http://purl.org/coar/resource_type/c_6501)
Author affiliation: Kogakuin University (all four authors)
Authors: Atsuki Tamekuri, Kosuke Nakamura, Yoshihaya Takahashi, Saneyasu Yamaguchi
Abstract (description type: Other)
Deep learning has been widely used in natural language processing (NLP) tasks such as document classification; self-attention, for example, has brought significant improvements in NLP. However, it has been pointed out that although deep learning classifies documents accurately, it is difficult for users to interpret the basis of its decisions. In this paper, we focus on the task of classifying open-data news documents by theme with a deep neural network with self-attention, and we propose methods for providing interpretability for these classifications. First, we classify news documents with an LSTM with a self-attention mechanism and show that the network can classify documents highly accurately. Second, we propose five methods for providing the basis of the decision by focusing on various values, e.g., attention, the gradient between the input and output values of the neural network, and the classification results of documents containing a single word. Finally, we evaluate the performance of these methods in four ways and show that they can provide suitable interpretability; in particular, the methods based on one-word documents provide interpretability by extracting the words that strongly influence the classification results.
------------------------------
This is a preprint of an article intended for publication in the Journal of
Information Processing (JIP). This preprint should not be cited. This
article should be cited as: Journal of Information Processing Vol.30 (2022) (online).
------------------------------
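
The classifier the abstract describes (an LSTM with a self-attention mechanism) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the attention formulation, layer sizes, and all names below are assumptions made for illustration only.

import torch
import torch.nn as nn

class SelfAttentiveLSTM(nn.Module):
    """Bidirectional LSTM whose hidden states are pooled by additive
    self-attention before classification (hypothetical architecture)."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Additive attention: score_t = v^T tanh(W h_t), alpha = softmax(score).
        self.attn_w = nn.Linear(2 * hidden_dim, hidden_dim)
        self.attn_v = nn.Linear(hidden_dim, 1, bias=False)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                          # (B, T, E)
        h, _ = self.lstm(x)                                # (B, T, 2H)
        scores = self.attn_v(torch.tanh(self.attn_w(h)))   # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)               # per-token weights
        context = (alpha * h).sum(dim=1)                   # (B, 2H)
        logits = self.fc(context)                          # (B, C)
        # Returning alpha exposes the attention weights, the first of the
        # interpretability signals mentioned in the abstract.
        return logits, alpha.squeeze(-1)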
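
Two further signals the abstract mentions, input-output gradients (in the spirit of SmoothGrad) and the classification results of one-word documents, could be extracted roughly as follows. The paper's exact procedures may differ; every function name here is hypothetical and builds on the sketch above.

import torch

def smoothgrad_saliency(model, token_ids, target_class, n_samples=25, sigma=0.15):
    """Average gradient of the target logit w.r.t. noise-perturbed input
    embeddings; tokens with large saliency influenced the decision most."""
    emb = model.embed(token_ids).detach()                  # (B, T, E)
    total = torch.zeros_like(emb)
    for _ in range(n_samples):
        noisy = (emb + sigma * torch.randn_like(emb)).requires_grad_(True)
        # Rerun the model from the (noisy) embeddings onward.
        h, _ = model.lstm(noisy)
        scores = model.attn_v(torch.tanh(model.attn_w(h)))
        alpha = torch.softmax(scores, dim=1)
        logits = model.fc((alpha * h).sum(dim=1))
        logits[:, target_class].sum().backward()
        total += noisy.grad
    # Collapse the embedding dimension into one saliency score per token.
    return (total / n_samples).norm(dim=-1)                # (B, T)

def one_word_scores(model, token_ids, target_class, pad_id=0):
    """Classify a document containing a single word at a time; the words
    whose one-word documents score highest for the target class are taken
    as the basis of the decision. Assumes token_ids has shape (1, T)."""
    scores = []
    for tok in token_ids.view(-1).tolist():
        doc = torch.full_like(token_ids, pad_id)
        doc.view(-1)[0] = tok                              # a one-word document
        logits, _ = model(doc)
        scores.append(logits[0, target_class].item())
    return scores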
Bibliographic record ID (NCID): AA11464847
Bibliographic information: IPSJ Transactions on Databases (TOD) (情報処理学会論文誌データベース(TOD))
Vol. 15, No. 2, issue date 2022-04-07
ISSN: 1882-7799
Publisher: Information Processing Society of Japan (情報処理学会)