Item type: Trans(1)
Publication date: 2015-11-26
Title: Auxiliary Training Information Assisted Visual Recognition
Title language: en
Language: eng
Keywords
Subject scheme: Other
Subject: [Regular Paper - Research Paper] collaborative auxiliary learning, canonical correlation, discriminative learning
Resource type
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Resource type: journal article
Author affiliation: Stevens Institute of Technology
Author affiliation: Stevens Institute of Technology / Microsoft Research Asia
Author affiliation: IBM Thomas J. Watson Research Center
Author affiliation: Microsoft Research
Author affiliation: Microsoft Research
Author affiliation (English): Stevens Institute of Technology
Author affiliation (English): Stevens Institute of Technology / Microsoft Research Asia
Author affiliation (English): IBM Thomas J. Watson Research Center
Author affiliation (English): Microsoft Research
Author affiliation (English): Microsoft Research
Author names: Qilin, Zhang; Gang, Hua; Wei, Liu; Zicheng, Liu; Zhengyou, Zhang
Author names (English): Qilin, Zhang; Gang, Hua; Wei, Liu; Zicheng, Liu; Zhengyou, Zhang
Abstract
Description type: Other
Description: In the realm of multi-modal visual recognition, the reliability of the data acquisition system is often a concern due to the increased complexity of the sensors. One of the major issues is the accidental loss of one or more sensing channels, which poses a major challenge to current learning systems. In this paper, we examine one of these specific missing data problems, where we have a main modality/view along with an auxiliary modality/view present in the training data, but only the main modality/view in the test data. To effectively leverage the auxiliary information to train a stronger classifier, we propose a collaborative auxiliary learning framework based on a new discriminative canonical correlation analysis. This framework reveals a common semantic space shared across both modalities/views by enforcing a series of nonlinear projections. Such projections automatically embed the discriminative cues hidden in both modalities/views into the common space, and better visual recognition is thus achieved on the test data. The efficacy of our proposed auxiliary learning approach is demonstrated through four challenging visual recognition tasks with different kinds of auxiliary information.
Abstract (English)
Description type: Other
Description: In the realm of multi-modal visual recognition, the reliability of the data acquisition system is often a concern due to the increased complexity of the sensors. One of the major issues is the accidental loss of one or more sensing channels, which poses a major challenge to current learning systems. In this paper, we examine one of these specific missing data problems, where we have a main modality/view along with an auxiliary modality/view present in the training data, but only the main modality/view in the test data. To effectively leverage the auxiliary information to train a stronger classifier, we propose a collaborative auxiliary learning framework based on a new discriminative canonical correlation analysis. This framework reveals a common semantic space shared across both modalities/views by enforcing a series of nonlinear projections. Such projections automatically embed the discriminative cues hidden in both modalities/views into the common space, and better visual recognition is thus achieved on the test data. The efficacy of our proposed auxiliary learning approach is demonstrated through four challenging visual recognition tasks with different kinds of auxiliary information.
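Note (illustration only): the abstract's general setting, an auxiliary view available during training but not at test time, can be sketched with ordinary canonical correlation analysis rather than the paper's discriminative variant. The sketch below projects paired training views into a shared space and trains a classifier that needs only the main view at test time; all data, dimensions, hyperparameters, and the choice of scikit-learn are assumptions for illustration, not details from the paper.

# Minimal sketch, NOT the paper's method: plain CCA + a linear classifier.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test = 200, 50
d_main, d_aux, d_shared = 40, 20, 10  # made-up feature dimensions

# Synthetic stand-ins: main-view features, auxiliary-view features, labels.
X_main_train = rng.normal(size=(n_train, d_main))
X_aux_train = rng.normal(size=(n_train, d_aux))
y_train = rng.integers(0, 2, size=n_train)
X_main_test = rng.normal(size=(n_test, d_main))  # no auxiliary view at test time

# Fit CCA on the paired training views; the auxiliary view shapes the projection.
cca = CCA(n_components=d_shared)
cca.fit(X_main_train, X_aux_train)

# Only the main-view projection is needed once training is done.
Z_train = cca.transform(X_main_train)
Z_test = cca.transform(X_main_test)

clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print(clf.predict(Z_test)[:10])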
Bibliographic record ID
Source identifier type: NCID
Source identifier: AA12628065
Bibliographic information: IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 7, pp. 138-150, published 2015-11-26
ISSN
Source identifier type: ISSN
Source identifier: 1882-6695
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)