Mutual Exclusivity Loss for Semi-Supervised Deep Learning
In this paper we consider the problem of semi-supervised learning with deep
Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated
by the observation that unlabeled data is cheap and can be used to improve the
accuracy of classifiers. We propose an unsupervised regularization term that
explicitly forces the classifier's predictions for different classes to be
mutually exclusive, effectively guiding the decision boundary to lie in the
low-density region between the manifolds corresponding to the different classes
of data. Our proposed approach is general and can be used
with any backpropagation-based learning method. We show through different
experiments that our method can improve the object recognition performance of
ConvNets using unlabeled data.
Comment: 5 pages, 1 figure, ICIP 2016
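To make the idea concrete, below is a minimal PyTorch sketch of one such mutual-exclusivity penalty: for per-class probabilities f, it rewards predictions in which exactly one class fires, via -sum_k f_k * prod_{l != k} (1 - f_l), which is minimized by one-hot predictions and so pushes unlabeled samples away from the decision boundary. The exact form of the paper's term, the use of sigmoid outputs, and the names model, unlabeled_batch, and lam are illustrative assumptions, not details taken from this abstract.

    import torch

    def mutual_exclusivity_loss(probs: torch.Tensor) -> torch.Tensor:
        """Penalty that is lowest when each row of `probs` is one-hot.

        probs: (batch, num_classes) per-class probabilities in [0, 1]
        (e.g. sigmoid outputs). Minimizing the term pushes each unlabeled
        sample toward a confident single-class prediction, i.e. away
        from the decision boundary.
        """
        num_classes = probs.shape[1]
        total = torch.zeros(probs.shape[0], device=probs.device)
        for k in range(num_classes):
            others = torch.cat([probs[:, :k], probs[:, k + 1:]], dim=1)
            # f_k * prod_{l != k} (1 - f_l): large only when class k
            # is confidently on and all other classes are off.
            total = total + probs[:, k] * torch.prod(1.0 - others, dim=1)
        return -total.mean()

    # Hypothetical usage on an unlabeled batch, weighted by `lam`:
    # logits = model(unlabeled_batch)
    # loss = supervised_loss + lam * mutual_exclusivity_loss(torch.sigmoid(logits))

Because the term uses no labels, it can be added to the supervised objective of any backpropagation-trained classifier, as the abstract notes.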
Zero-bias autoencoders and the benefits of co-adapting features
Regularized training of an autoencoder typically results in hidden unit
biases that take on large negative values. We show that negative biases are a
natural result of using a hidden layer whose responsibility is to both
represent the input data and act as a selection mechanism that ensures sparsity
of the representation. We then show that negative biases impede the learning of
data distributions whose intrinsic dimensionality is high. We also propose a
new activation function that decouples the two roles of the hidden layer and
that allows us to learn representations on data with very high intrinsic
dimensionality, where standard autoencoders typically fail. Since the decoupled
activation function acts like an implicit regularizer, the model can be trained
by minimizing the reconstruction error of training data, without requiring any
additional regularization.
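As a concrete illustration, below is a minimal NumPy sketch of a tied-weight autoencoder whose hidden units use a thresholded rectified activation h(a) = a * 1[a > theta]: the fixed threshold handles selection (sparsity) while the linear response handles representation, so no learned bias is needed. The threshold value of 1, the tied weights, and the plain gradient step are assumptions made for this sketch, not details given in the abstract.

    import numpy as np

    def trec(a, theta=1.0):
        """Thresholded rectified activation: linear response gated by a
        fixed threshold, so selection needs no learned negative bias."""
        return a * (a > theta)

    class ZeroBiasAutoencoder:
        """Tied-weight autoencoder with zero hidden biases; sparsity
        comes from the activation threshold rather than from biases."""

        def __init__(self, n_vis, n_hid, theta=1.0, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
            self.theta = theta

        def reconstruct(self, x):
            h = trec(x @ self.W, self.theta)  # encode: select + represent
            return h @ self.W.T               # decode with tied weights

        def step(self, x, lr=0.01):
            """One gradient step on mean squared reconstruction error,
            treating the threshold mask as constant (subgradient)."""
            a = x @ self.W
            mask = (a > self.theta).astype(x.dtype)
            h = a * mask
            err = h @ self.W.T - x            # (batch, n_vis)
            # Tied weights: decoder and encoder contributions to dL/dW.
            grad = err.T @ h + x.T @ ((err @ self.W) * mask)
            self.W -= lr * grad / x.shape[0]
            return float(np.mean(err ** 2))

Consistent with the abstract, the model is trained by minimizing reconstruction error alone: the thresholded activation itself supplies the sparsity that regularized autoencoders otherwise obtain through large negative biases.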