Online Matrix Completion with Side Information
We give an online algorithm and prove novel mistake and regret bounds for
online binary matrix completion with side information. The mistake bounds we
prove are of the form $\tilde{O}(D/\gamma^2)$. The term $1/\gamma^2$ is
analogous to the usual margin term in SVM (perceptron) bounds. More
specifically, if we assume that there is some factorization of the underlying
$m \times n$ matrix into $PQ^\top$, where the rows of $P$ are interpreted
as "classifiers" in $\mathbb{R}^d$ and the rows of $Q$ as "instances" in
$\mathbb{R}^d$, then $\gamma$ is the maximum (normalized) margin over all
factorizations $PQ^\top$ consistent with the observed matrix. The
quasi-dimension term $D$ measures the quality of side information. In the
presence of vacuous side information, $D = m + n$. However, if the side
information is predictive of the underlying factorization of the matrix, then
in an ideal case, $D \in O(k + \ell)$, where $k$ is the number of distinct row
factors and $\ell$ is the number of distinct column factors. We additionally
provide a generalization of our algorithm to the inductive setting. In this
setting, we provide an example where the side information is not directly
specified in advance. For this example, the quasi-dimension $D$ is now bounded
by $O(k^2 + \ell^2)$.
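The margin quantity described above can be made concrete with a short sketch. Everything here (function names, variable names, the toy data) is an illustrative assumption, not code from the paper: with rows of both factors normalized to unit length, the margin is the smallest signed agreement between an observed ±1 entry and the corresponding inner product.

```python
import numpy as np

def normalized_margin(P, Q, observed):
    """Margin of the factorization P @ Q.T over observed entries.

    P : (m, d) array, rows act as "classifiers"
    Q : (n, d) array, rows act as "instances"
    observed : iterable of (i, j, y) with y in {-1, +1}
    """
    # Normalize rows so the margin is scale-invariant.
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    return min(y * (P[i] @ Q[j]) for i, j, y in observed)

# Toy example: a rank-1 sign matrix realized with the maximum margin of 1.
P = np.array([[1.0], [-1.0]])
Q = np.array([[1.0], [1.0]])
obs = [(0, 0, +1), (0, 1, +1), (1, 0, -1), (1, 1, -1)]
print(normalized_margin(P, Q, obs))  # 1.0
```

In the bound itself, the relevant quantity is the maximum of this margin over all consistent factorizations; the sketch only evaluates one candidate.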
Generalized Low Rank Models
Principal components analysis (PCA) is a well-known technique for
approximating a tabular data set by a low rank matrix. Here, we extend the idea
of PCA to handle arbitrary data sets consisting of numerical, Boolean,
categorical, ordinal, and other data types. This framework encompasses many
well known techniques in data analysis, such as nonnegative matrix
factorization, matrix completion, sparse and robust PCA, $k$-means, $k$-SVD,
and maximum margin matrix factorization. The method handles heterogeneous data
sets, and leads to coherent schemes for compressing, denoising, and imputing
missing entries across all data types simultaneously. It also admits a number
of interesting interpretations of the low rank factors, which allow clustering
of examples or of features. We propose several parallel algorithms for fitting
generalized low rank models, and describe implementations and numerical
results. Comment: 84 pages, 19 figures
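The fitting strategy the abstract alludes to, alternating minimization over the two factors, can be sketched for the simplest member of the family: quadratic loss with quadratic regularization (i.e. regularized PCA). All names and parameter choices below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def glrm_quadratic(A, rank, reg=0.1, iters=50, seed=0):
    """Fit A ~= X @ Y by alternating minimization.

    With quadratic loss and quadratic regularization, each subproblem
    is a ridge regression and has a closed-form solution.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, rank))
    Y = rng.standard_normal((rank, n))
    I = reg * np.eye(rank)
    for _ in range(iters):
        # Update each factor while holding the other fixed.
        X = A @ Y.T @ np.linalg.inv(Y @ Y.T + I)
        Y = np.linalg.inv(X.T @ X + I) @ X.T @ A
    return X, Y

A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])  # an exactly rank-1 matrix
X, Y = glrm_quadratic(A, rank=1, reg=1e-3)
print(np.linalg.norm(A - X @ Y))  # small residual
```

The generalized framework swaps in other losses (hinge, ordinal, Boolean) and regularizers per column, at which point the subproblems are solved numerically rather than in closed form.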
Data augmentation for recommender system: A semi-supervised approach using maximum margin matrix factorization
Collaborative filtering (CF) has become a popular method for developing
recommender systems (RS), where a user's ratings for new items are predicted
based on her past preferences and the available preference information of other
users. Despite the popularity of CF-based methods, their performance is often
greatly limited by the sparsity of observed entries. In this study, we explore
the data augmentation and refinement aspects of Maximum Margin Matrix
Factorization (MMMF), a widely accepted CF technique for rating prediction;
these aspects have not been investigated before. We exploit the inherent
characteristics of CF algorithms to assess the confidence level of individual
ratings and propose a semi-supervised approach for rating augmentation based on
self-training. We hypothesize that any CF algorithm's predictions with low
confidence are due to some deficiency in the training data and hence, the
performance of the algorithm can be improved by adopting a systematic data
augmentation strategy. We iteratively use some of the ratings predicted with
high confidence to augment the training data and remove low-confidence entries
through a refinement process. By repeating this process, the system learns to
improve prediction accuracy. Our method is experimentally evaluated on several
state-of-the-art CF algorithms and leads to informative rating augmentation,
improving the performance of the baseline approaches. Comment: 20 pages
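The self-training loop described above can be sketched as follows. This is a hedged illustration, not the authors' code: the base model is a tiny alternating-least-squares MF standing in for MMMF, and confidence is proxied by agreement across differently seeded fits; both choices are assumptions for the sketch.

```python
import numpy as np

def mf_fit(R, mask, rank=2, reg=0.1, iters=100, seed=0):
    """Tiny alternating-least-squares MF (illustrative base CF model)."""
    m, n = R.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):  # update user factors (assumes no empty rows)
            cols = np.where(mask[i])[0]
            U[i] = np.linalg.solve(V[cols].T @ V[cols] + reg * np.eye(rank),
                                   V[cols].T @ R[i, cols])
        for j in range(n):  # update item factors (assumes no empty columns)
            rows = np.where(mask[:, j])[0]
            V[j] = np.linalg.solve(U[rows].T @ U[rows] + reg * np.eye(rank),
                                   U[rows].T @ R[rows, j])
    return U @ V.T

def self_train(R, mask, rounds=2, n_models=3, conf_quantile=0.2):
    """Iteratively move high-confidence predictions into the training set."""
    R, mask = R.copy(), mask.copy()
    for _ in range(rounds):
        unseen = ~mask
        if not unseen.any():
            break
        preds = np.stack([mf_fit(R, mask, seed=s) for s in range(n_models)])
        mean, spread = preds.mean(axis=0), preds.std(axis=0)
        # Low disagreement across seeds serves as a confidence proxy.
        confident = unseen & (spread <= np.quantile(spread[unseen], conf_quantile))
        R[confident] = mean[confident]
        mask |= confident
    return R, mask

R_true = np.outer([1., 2., 3., 4.], [1., 2., 3.])
mask = np.ones_like(R_true, dtype=bool)
mask[0, 0] = mask[3, 2] = False          # hide two ratings
R_aug, mask_aug = self_train(R_true * mask, mask)
print(int(mask.sum()), int(mask_aug.sum()))  # the training set grows
```

The refinement step in the paper additionally removes previously augmented entries whose confidence drops; the sketch keeps only the augmentation half of the loop.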
Collaborative Filtering via Ensembles of Matrix Factorizations
We present a Matrix Factorization (MF) based approach for the Netflix Prize competition. Currently, MF-based algorithms are popular and have proved successful for collaborative filtering tasks. For the Netflix Prize competition, we adopt three different types of MF algorithms: regularized MF, maximum margin MF, and non-negative MF. Furthermore, for each MF algorithm, instead of selecting the optimal parameters, we combine the results obtained with several parameters. With this method, we achieve a performance that is more than 6% better than Netflix's own system.
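The combination step, using all parameter settings rather than only the single best one, can be illustrated with a small blending sketch. The synthetic "predictions" below are stand-ins for the outputs of the three MF variants, and fitting blend weights by least squares on held-out ratings is one common choice, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(1, 5, size=200)  # held-out ratings (synthetic)
# Three noisy predictors standing in for regularized, max-margin
# and non-negative MF outputs at different parameter settings.
preds = np.stack([truth + rng.normal(0, s, 200) for s in (0.3, 0.5, 0.8)])

# Fit linear blending weights by least squares on the held-out set.
w, *_ = np.linalg.lstsq(preds.T, truth, rcond=None)
blend = w @ preds

rmse = lambda p: np.sqrt(np.mean((p - truth) ** 2))
print([round(rmse(p), 3) for p in preds], round(rmse(blend), 3))
```

Because each individual predictor lies in the span of the blend (a one-hot weight vector), the least-squares blend can never fit the held-out set worse than the best single run, which is the intuition behind combining rather than selecting.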
Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization
Bayesian matrix completion has been studied based on a low-rank matrix
factorization formulation with promising results. However, little work has been
done on Bayesian matrix completion based on the more direct spectral
regularization formulation. We fill this gap by presenting a novel Bayesian
matrix completion method based on spectral regularization. In order to
circumvent the difficulties of dealing with the orthonormality constraints of
singular vectors, we derive a new equivalent form with relaxed constraints,
which then leads us to design an adaptive version of spectral regularization
feasible for Bayesian inference. Our Bayesian method requires no parameter
tuning and can infer the number of latent factors automatically. Experiments on
synthetic and real datasets demonstrate encouraging results on rank recovery
and collaborative filtering, with notably good results for very sparse
matrices. Comment: Accepted to AAAI 201
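As context for the spectral-regularization formulation, its classical (non-Bayesian) treatment is proximal gradient with singular-value soft-thresholding. The sketch below shows that analogue, not the paper's Bayesian inference procedure; all names and parameter values are illustrative:

```python
import numpy as np

def svt_complete(R, mask, lam=0.1, iters=200):
    """Matrix completion via min 0.5*||mask*(X - R)||_F^2 + lam*||X||_*.

    Each iteration takes a gradient step on the data-fit term, then
    applies the nuclear-norm proximal operator (soft-threshold the
    singular values), which also selects the rank adaptively.
    """
    X = np.zeros_like(R)
    for _ in range(iters):
        G = mask * (X - R)  # gradient of the data-fit term
        U, s, Vt = np.linalg.svd(X - G, full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # soft-threshold singular values
        X = (U * s) @ Vt
    return X

R_true = np.outer([1., 2., 3., 4.], [1., 1., 2.])  # rank-1 ratings
mask = np.ones_like(R_true, dtype=bool)
mask[0, 2] = mask[3, 0] = False                     # hide two entries
X = svt_complete(R_true * mask, mask)
print(np.round(X[0, 2], 2), np.round(X[3, 0], 2))  # recovered entries
```

The paper's contribution is to relax the orthonormality constraints of this formulation so that the threshold-like regularization can be made adaptive and inferred within a Bayesian model, removing the need to tune `lam` by hand.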