Item type
SIG Technical Reports(1)
Date of publication
2017-06-16
Title
Positive-Unlabeled Learning with Non-Negative Risk Estimator
Language
eng
Resource type
technical report
Resource type identifier
http://purl.org/coar/resource_type/c_18gh
Author affiliation
Department of Computer Science, The University of Tokyo / Center for Advanced Intelligence Project, RIKEN
Author affiliation
Department of Complexity Science and Engineering, The University of Tokyo
Author affiliation
Center for Advanced Intelligence Project, RIKEN / Department of Complexity Science and Engineering, The University of Tokyo
Authors
Ryuichi Kiryo
Gang Niu
Masashi Sugiyama
Abstract
Description type
Other
Description
From only positive (P) and unlabeled (U) data, a binary classifier can be trained with PU learning, in which the state of the art is unbiased PU learning. However, if the model is very flexible, its empirical risk on the training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning. When it is minimized, it is more robust against overfitting, and we are thus able to train very flexible models given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, as well as the estimation error of the corresponding risk minimizer. Experiments show that the proposed risk estimator successfully fixes the overfitting problem of its unbiased counterparts.
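As context for the abstract (a standard sketch of such estimators, not quoted from the report itself): let $\pi_p$ denote the class prior, $\hat{R}_p^{+}(g)$ and $\hat{R}_p^{-}(g)$ the empirical risks of the positive sample treated as positive and as negative, respectively, and $\hat{R}_u^{-}(g)$ the empirical risk of the unlabeled sample treated as negative. The unbiased PU risk estimator and a non-negative correction of it can then be written as
\[
\hat{R}_{\mathrm{pu}}(g) = \pi_p \hat{R}_p^{+}(g) - \pi_p \hat{R}_p^{-}(g) + \hat{R}_u^{-}(g),
\qquad
\tilde{R}_{\mathrm{pu}}(g) = \pi_p \hat{R}_p^{+}(g) + \max\bigl\{0,\; \hat{R}_u^{-}(g) - \pi_p \hat{R}_p^{-}(g)\bigr\}.
\]
The $\max\{0,\cdot\}$ clamps the term that a very flexible model can otherwise drive below zero, which is the negative empirical risk and overfitting behavior the abstract refers to.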
Bibliographic record ID
Identifier type
NCID
Identifier
AN10505667
Bibliographic information
IPSJ SIG Technical Report: Mathematical Modeling and Problem Solving (MPS)
Vol. 2017-MPS-113, No. 24, pp. 1-8
Date of issue: 2017-06-16
ISSN
Identifier type
ISSN
Identifier
2188-8833
Notice
SIG Technical Reports are non-refereed and hence may later appear in journals, conferences, symposia, etc.
Publisher
Information Processing Society of Japan (情報処理学会)
Language
ja