Unsupervised feature learning with discriminative encoder
In recent years, deep discriminative models have achieved extraordinary
performance on supervised learning tasks, significantly outperforming their
generative counterparts. However, their success relies on the presence of a
large amount of labeled data. How can one use the same discriminative models
for learning useful features in the absence of labels? We address this question
in this paper, by jointly modeling the distribution of data and latent features
in a manner that explicitly assigns zero probability to unobserved data. Rather
than maximizing the marginal probability of observed data, we maximize the
joint probability of the data and the latent features using a two-step EM-like
procedure. To prevent the model from overfitting to our initial selection of
latent features, we use adversarial regularization. Depending on the task, we
allow the latent features to be one-hot or real-valued vectors and define a
suitable prior on the features. For instance, one-hot features correspond to
class labels and are directly used for the unsupervised and semi-supervised
classification task, whereas real-valued feature vectors are fed as input to
simple classifiers for auxiliary supervised discrimination tasks. The proposed
model, which we dub discriminative encoder (or DisCoder), is flexible in the
type of latent features that it can capture. The proposed model achieves
state-of-the-art performance on several challenging tasks.
Comment: 10 pages, 4 figures, International Conference on Data Mining, 201
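The two-step EM-like alternation described in the abstract can be sketched on toy data. The linear encoder, softmax parameterization, and cross-entropy surrogate below are illustrative assumptions for the sketch, not the paper's actual architecture or objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2-D (a stand-in for real inputs).
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])

def encode(X, W):
    """Linear encoder followed by a softmax over latent one-hot classes."""
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(0, 0.1, (2, 2))  # encoder weights, 2 latent one-hot classes
lr = 0.1
for _ in range(100):
    # Step 1 (E-like): select the most likely one-hot latent feature per point.
    probs = encode(X, W)
    z = np.eye(2)[probs.argmax(axis=1)]
    # Step 2 (M-like): gradient step on a cross-entropy surrogate between the
    # encoder output and the currently selected one-hot codes.
    grad = X.T @ (probs - z) / len(X)
    W -= lr * grad

labels = encode(X, W).argmax(axis=1)  # unsupervised cluster assignments
```

In the full model, adversarial regularization would additionally keep the latent features close to the chosen prior; that component is omitted here for brevity.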
Statistical Significance of the Netflix Challenge
Inspired by the legacy of the Netflix contest, we provide an overview of what
has been learned---from our own efforts, and those of others---concerning the
problems of collaborative filtering and recommender systems. The data set
consists of about 100 million movie ratings (from 1 to 5 stars) involving some
480 thousand users and some 18 thousand movies; the associated ratings matrix
is about 99% sparse. The goal is to predict ratings that users will give to
movies; systems which can do this accurately have significant commercial
applications, particularly on the world wide web. We discuss, in some detail,
approaches to "baseline" modeling, singular value decomposition (SVD), as well
as kNN (nearest neighbor) and neural network models; temporal effects,
cross-validation issues, ensemble methods and other considerations are
discussed as well. We compare existing models in a search for new models, and
also discuss the mission-critical issues of penalization and parameter
shrinkage that arise when the dimension of the parameter space reaches into the
millions. Although much work on such problems has been carried out by the
computer science and machine learning communities, our goal here is to address
a statistical audience, and to provide a primarily statistical treatment of the
lessons that have been learned from this remarkable set of data.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/11-STS368 by the Institute of Mathematical Statistics (http://www.imstat.org
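The baseline-plus-SVD approach with parameter shrinkage can be illustrated on a toy ratings matrix. The shrinkage constant, the rank-1 factorization, and the zero-imputed residuals below are illustrative simplifications, not the models evaluated in the paper:

```python
import numpy as np

# Toy ratings matrix (users x movies); 0 marks a missing rating.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)
mask = R > 0

# Baseline model: global mean plus shrunken user and item offsets.
# lam penalizes offsets estimated from few ratings (hypothetical value).
lam = 5.0
mu = R[mask].mean()
user_bias = ((R - mu) * mask).sum(axis=1) / (mask.sum(axis=1) + lam)
item_bias = ((R - mu - user_bias[:, None]) * mask).sum(axis=0) / (mask.sum(axis=0) + lam)
baseline = mu + user_bias[:, None] + item_bias[None, :]

# Rank-1 SVD on the baseline residuals (missing entries imputed as 0 residual).
resid = np.where(mask, R - baseline, 0.0)
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
pred = baseline + s[0] * np.outer(U[:, 0], Vt[0])  # predicted ratings
```

The shrinkage denominators are where penalization enters: as `lam` grows, the per-user and per-item offsets are pulled toward zero, which is what keeps millions of such parameters from overfitting sparse data.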