Mutual Exclusivity Loss for Semi-Supervised Deep Learning
In this paper we consider the problem of semi-supervised learning with deep
Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated
by the observation that unlabeled data is cheap and can be used to improve the
accuracy of classifiers. We propose an unsupervised
regularization term that explicitly forces the classifier's predictions for
the different classes to be mutually exclusive, effectively guiding the decision
boundary to lie in the low-density region between the manifolds corresponding to
different classes of data. Our proposed approach is general and can be used
with any backpropagation-based learning method. We show through several
experiments that our method can improve the object recognition performance of
ConvNets using unlabeled data.
Comment: 5 pages, 1 figure, ICIP 201
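The mutual-exclusivity idea can be sketched numerically. A minimal sketch, assuming one common formulation of such a penalty, -Σ_i p_i Π_{j≠i}(1 - p_j), which is minimized when exactly one class probability is 1; the function name and encoding are illustrative choices, not the paper's code:

```python
import numpy as np

def mutual_exclusivity_loss(p):
    """Unsupervised penalty pushing predictions toward one-hot vectors.

    p: (batch, k) array of predicted class probabilities in [0, 1].
    Implements L = -mean_batch sum_i p_i * prod_{j != i} (1 - p_j),
    an assumed formulation of a mutual-exclusivity penalty: the sum
    is largest (so the loss smallest) when one class has probability
    1 and all others 0, i.e. when the prediction is confidently
    mutually exclusive.
    """
    k = p.shape[1]
    total = np.zeros(p.shape[0])
    for i in range(k):
        # product of (1 - p_j) over all classes j except i
        others = np.prod(np.delete(1.0 - p, i, axis=1), axis=1)
        total += p[:, i] * others
    return -np.mean(total)
```

Because the term needs no labels, it can be added to the usual supervised cross-entropy on unlabeled batches; a confident one-hot prediction scores -1, while a uniform prediction scores strictly higher, which is what drives the decision boundary into low-density regions.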
Projected Estimators for Robust Semi-supervised Classification
For semi-supervised techniques to be applied safely in practice, we at least
want such methods to outperform their supervised counterparts. We study this
question for classification using the well-known quadratic surrogate loss
function. Using a projection of the supervised estimate onto a set of
constraints imposed by the unlabeled data, we find we can safely improve over
the supervised solution in terms of this quadratic loss. Unlike other
approaches to semi-supervised learning, the procedure does not rely on
assumptions that are not intrinsic to the classifier at hand. It is
theoretically demonstrated that, measured on the labeled and unlabeled training
data, this semi-supervised procedure never gives a lower quadratic loss than
the supervised alternative. To our knowledge this is the first approach that
offers such strong, albeit conservative, guarantees for improvement over the
supervised solution. The characteristics of our approach are explicated using
benchmark datasets to further understand the similarities and differences
between the quadratic loss criterion used in the theoretical results and the
classification accuracy often considered in practice.
Comment: 13 pages, 2 figures, 1 table
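The projection idea can be sketched for a least-squares classifier. A minimal sketch, assuming the supervised estimate is projected (in the metric induced by the full design matrix) onto the set of solutions reachable by soft-labelling the unlabeled points with q ∈ [0, 1]; the solver (projected gradient over q) and all names are our own choices, not the authors' implementation:

```python
import numpy as np

def projected_semi_supervised_ls(X_l, y_l, X_u, n_iter=2000):
    """Project the supervised least-squares solution onto the
    constraint set induced by the unlabeled data.

    Every soft labelling q in [0,1]^u of the unlabeled rows X_u
    yields a least-squares fit w(q) on the combined data; we find
    the w(q) closest to the supervised estimate w_sup in the norm
    induced by S = X_e' X_e, via projected gradient descent on q.
    """
    X_e = np.vstack([X_l, X_u])
    S = X_e.T @ X_e
    A = np.linalg.solve(S, X_e.T)            # w(q) = A @ [y_l; q]
    n_l = X_l.shape[0]
    A_l, A_u = A[:, :n_l], A[:, n_l:]
    w_sup = np.linalg.lstsq(X_l, y_l, rcond=None)[0]
    c = A_l @ y_l - w_sup                    # w(q) - w_sup = A_u q + c
    # quadratic objective (A_u q + c)' S (A_u q + c), minimized over q in [0,1]
    H = A_u.T @ S @ A_u
    g0 = A_u.T @ S @ c
    step = 1.0 / (2.0 * np.linalg.norm(H, 2) + 1e-12)
    q = np.full(X_u.shape[0], 0.5)
    for _ in range(n_iter):
        q = np.clip(q - step * 2.0 * (H @ q + g0), 0.0, 1.0)
    return A_l @ y_l + A_u @ q, q
```

The conservative guarantee follows from the projection property: the true 0/1 labelling of the unlabeled points is itself a feasible q, so the projected solution is at least as close (in the S-norm, which here equals the excess quadratic loss on labeled plus unlabeled data) to the all-data least-squares fit as the supervised solution is.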