Providing Interpretability of Document Classification by Deep Neural Network with Self-attention
https://ipsj.ixsq.nii.ac.jp/records/217665
Name / File | License | Action
---|---|---
(file) | Copyright (c) 2022 by the Information Processing Society of Japan | Open access
Item type | Trans(1)
---|---
Publication date | 2022-04-07
Title | Providing Interpretability of Document Classification by Deep Neural Network with Self-attention
Title language | en
Language | eng
Subject scheme | Other
Keywords | [Research Paper] deep learning, news documents classification, self-attention, smooth-grad, LSTM
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Author affiliation | Kogakuin University (all four authors)
Authors | Atsuki Tamekuri, Kosuke Nakamura, Yoshihaya Takahashi, Saneyasu Yamaguchi
Abstract | Deep learning has been widely used in natural language processing (NLP) tasks such as document classification. For example, self-attention has achieved significant improvements in NLP. However, it has been pointed out that although deep learning classifies documents accurately, it is difficult for users to interpret the basis of its decisions. In this paper, we focus on the task of classifying open-data news documents by theme with a deep neural network with self-attention, and we propose methods for providing interpretability for these classifications. First, we classify news documents with an LSTM with a self-attention mechanism and show that the network can classify documents highly accurately. Second, we propose five methods for providing the basis of a decision by focusing on various values, e.g., attention, the gradient between the input and output values of the neural network, and the classification results of one-word documents. Finally, we evaluate the performance of these methods in four ways and show that they can present interpretability suitably. In particular, the methods based on one-word documents provide interpretability by extracting the words that have a strong influence on the classification results. (An illustrative code sketch of the attention- and gradient-based ideas follows this record.) Note: This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. The article should be cited as: Journal of Information Processing, Vol. 30 (2022) (online).
Bibliographic record ID | NCID AA11464847
Bibliographic information | IPSJ Transactions on Databases (TOD), Vol. 15, No. 2, published 2022-04-07
ISSN | 1882-7799
Publisher | Information Processing Society of Japan (情報処理学会)
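
The abstract above names two of the proposed word-importance signals: the self-attention weight assigned to each word, and input gradients in the style of SmoothGrad. The following is a minimal, hypothetical PyTorch sketch of those two ideas, not the authors' implementation; the model architecture details, hyperparameters (noise scale, sample count), and the toy input are all assumptions for illustration.

```python
# A minimal, hypothetical sketch (not the authors' code) of an LSTM classifier
# with self-attention, showing two word-importance signals from the abstract:
# (1) the attention weight per word, (2) a SmoothGrad-style average of
# input-gradient magnitudes. Sizes and the toy input are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)    # scores each time step
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward_from_embed(self, emb):
        h, _ = self.lstm(emb)                                   # (batch, seq, hidden)
        alpha = F.softmax(self.attn(h).squeeze(-1), dim=-1)     # attention per word
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)   # weighted sum
        return self.fc(context), alpha

    def forward(self, token_ids):
        return self.forward_from_embed(self.embed(token_ids))

model = AttnLSTMClassifier(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 12))        # a toy 12-word "document"
logits, alpha = model(tokens)
target = logits.argmax(-1).item()
print("predicted class:", target)
print("attention-based word importance:", alpha[0].tolist())

# SmoothGrad-style importance: average the gradient norm of the predicted-class
# logit w.r.t. the input embeddings over several noisy copies of the input.
emb = model.embed(tokens).detach()
n_samples, sigma = 8, 0.1                       # assumed hyperparameters
saliency = torch.zeros(emb.shape[:2])
for _ in range(n_samples):
    noisy = (emb + sigma * torch.randn_like(emb)).requires_grad_(True)
    out, _ = model.forward_from_embed(noisy)
    out[0, target].backward()
    saliency += noisy.grad.norm(dim=-1) / n_samples  # per-word gradient norm
print("gradient-based word importance:", saliency[0].tolist())
```

On a trained model, each signal yields one importance score per word of the input document. A third signal the abstract describes, the classification result of a one-word document, would amount to running the same forward pass on a single-token input and reading off the class scores.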