Item type | Trans(1) |
Date of publication | 2017-12-13 |
Title | Randomized Kernel Mean Networks for Bag-of-Words Data |
Title language | en |
Language | eng |
Keywords |
Subject scheme | Other |
Subject | [Original Paper] random Fourier features, bag-of-words, neural networks |
Resource type |
Resource type identifier | http://purl.org/coar/resource_type/c_6501 |
Resource type | journal article |
Author affiliation | Software Technology and Artificial Intelligence Research Laboratory, Chiba Institute of Technology |
Author affiliation | NTT Communication Science Laboratories |
Author names | Yuya Yoshikawa, Tomoharu Iwata |
Abstract |
Description type | Other |
Description | In various machine learning problems, the bag-of-words (BoW) representation, i.e., a multiset of features, is widely used because it is simple and general. Deep learning has been successful in many areas; however, on BoW data, deep learning models are often outperformed by kernel methods such as support vector machines (SVMs), where each sample is simply transformed into a fixed-length count vector for the input. In this paper, we propose a deep learning model for BoW data. Building on an idea previously introduced in an SVM framework, which achieved better performance than using BoW count vectors directly as inputs, the proposed model assigns each feature to a latent vector, and each sample is represented by the distribution of the latent vectors of the features contained in the sample. To transform this distribution efficiently and nonparametrically into inputs for deep learning, we integrate kernel mean embeddings with a random Fourier feature algorithm. Our experiments verify the effectiveness of the proposed model on BoW document datasets. Because the proposed model is a general framework for BoW data, it can be applied directly to various supervised and unsupervised learning tasks. Moreover, because it can be combined with existing deep learning models, it further extends the potential applications of deep learning. |
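As a rough illustration of the pipeline described in the abstract (per-word latent vectors, a kernel mean embedding of their empirical distribution approximated with random Fourier features, and a downstream predictor), the NumPy sketch below is an assumed, simplified reconstruction and not the authors' implementation; the latent vectors are fixed at random here rather than learned, and all names and dimensions are illustrative.

```python
# Minimal sketch (not the paper's code) of a randomized kernel mean embedding for BoW data.
import numpy as np

rng = np.random.default_rng(0)

V, K, D = 1000, 16, 128   # vocabulary size, latent dimension, number of random features (assumed)
sigma = 1.0               # assumed RBF kernel bandwidth

# One latent vector per vocabulary word. In the paper these are learned; here they are random.
latent = 0.1 * rng.standard_normal((V, K))

# Random Fourier feature parameters approximating an RBF kernel with bandwidth sigma.
W = rng.standard_normal((K, D)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(x):
    """Random Fourier feature map phi(x), so phi(x) . phi(y) approximates the RBF kernel k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

def kernel_mean_embedding(counts):
    """Count-weighted mean of phi over the latent vectors of the words in one document.

    counts: length-V BoW count vector; returns a length-D fixed-size document representation.
    """
    idx = np.flatnonzero(counts)
    phi = rff(latent[idx])                      # (#distinct words in doc, D)
    weights = counts[idx] / counts[idx].sum()   # empirical distribution over the doc's words
    return weights @ phi

# Toy usage: embed one synthetic document and score it with a random linear output layer
# standing in for the downstream (deep) model.
doc_counts = np.zeros(V)
doc_counts[[3, 7, 42]] = [1.0, 2.0, 1.0]        # hypothetical word ids and counts
z = kernel_mean_embedding(doc_counts)
w_out = rng.standard_normal(D)
print("embedding shape:", z.shape, "score:", float(z @ w_out))
```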
Bibliographic record ID |
Source identifier type | NCID |
Source identifier | AA11464803 |
Journal information | 情報処理学会論文誌数理モデル化と応用(TOM) (IPSJ Transactions on Mathematical Modeling and Its Applications), Vol. 10, No. 3, pp. 32-38, issued 2017-12-13 |
ISSN |
Source identifier type | ISSN |
Source identifier | 1882-7780 |
Publisher |
Language | ja |
Publisher | 情報処理学会 (Information Processing Society of Japan) |