Collaborative Filtering via Group-Structured Dictionary Learning
Structured sparse coding and the related structured dictionary learning
problems are novel research areas in machine learning. In this paper we present
a new application of structured dictionary learning for collaborative filtering
based recommender systems. Our extensive numerical experiments demonstrate that
the presented technique outperforms its state-of-the-art competitors and has
several advantages over approaches that do not put structured constraints on
the dictionary elements.

Comment: A compressed version of the paper has been accepted for publication
at the 10th International Conference on Latent Variable Analysis and Source
Separation (LVA/ICA 2012).
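The abstract does not spell out the model, but a minimal sketch of group-structured sparse coding applied to a partially observed rating matrix could look as follows. The alternating proximal-gradient scheme, the helper group_soft_threshold, and all parameter names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def group_soft_threshold(a, groups, t):
    """Shrink each group of coefficients toward zero by its l2 norm
    (proximal step for the group-Lasso penalty)."""
    out = a.copy()
    for g in groups:
        norm = np.linalg.norm(a[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * a[g]
    return out

def group_structured_dictionary_learning(R, mask, n_atoms, groups,
                                         lam=0.1, lr=0.01, n_iters=200, seed=0):
    """Toy alternating scheme: approximate the observed entries of the
    user-item matrix R (mask == 1 where a rating is known) by D @ A,
    with each column of A constrained to be group-sparse."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    D = rng.standard_normal((n_users, n_atoms))
    A = np.zeros((n_atoms, n_items))
    for _ in range(n_iters):
        # proximal-gradient step on the codes A
        resid = mask * (D @ A - R)
        A -= lr * (D.T @ resid)
        for j in range(n_items):
            A[:, j] = group_soft_threshold(A[:, j], groups, lr * lam)
        # gradient step on the dictionary D, then column normalization
        resid = mask * (D @ A - R)
        D -= lr * (resid @ A.T)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
    return D, A
```

Missing ratings would then be read off the reconstruction D @ A at the unobserved positions.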
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
A general framework for solving image inverse problems is introduced in this
paper. The approach is based on Gaussian mixture models, estimated via a
computationally efficient MAP-EM algorithm. A dual mathematical interpretation
of the proposed framework with structured sparse estimation is described, which
shows that the resulting piecewise linear estimate stabilizes the estimation
when compared to traditional sparse inverse problem techniques. This
interpretation also suggests an effective dictionary-motivated initialization
for the MAP-EM algorithm. We demonstrate that in a number of image inverse
problems, including inpainting, zooming, and deblurring, the same algorithm
produces results that match, and often significantly improve upon, the best
published ones, or fall short of them by only a very small margin, at a lower
computational cost.

Comment: 30 pages
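As an illustration of the piecewise linear estimation described above, the sketch below handles one degraded patch y = U x + noise under a Gaussian mixture prior on x: it forms the Wiener (linear MMSE) estimate under each Gaussian component and keeps the component that best explains the observation. This is an assumed simplification of one estimation step, not the authors' MAP-EM implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_piecewise_linear_estimate(y, U, sigma2, weights, means, covs):
    """For a degraded patch y = U x + noise with noise variance sigma2 and a
    Gaussian mixture prior on x, compute the Wiener (linear MMSE) estimate of
    x under each component and keep the component that best explains y.
    The overall estimator is therefore piecewise linear in y."""
    n_obs = y.shape[0]
    best_k, best_score, best_x = -1, -np.inf, None
    for k, (w, mu, Sigma) in enumerate(zip(weights, means, covs)):
        S = U @ Sigma @ U.T + sigma2 * np.eye(n_obs)  # covariance of y under component k
        gain = Sigma @ U.T @ np.linalg.inv(S)         # Wiener gain for component k
        x_k = mu + gain @ (y - U @ mu)                # linear estimate given component k
        score = np.log(w) + multivariate_normal.logpdf(y, mean=U @ mu, cov=S)
        if score > best_score:
            best_k, best_score, best_x = k, score, x_k
    return best_k, best_x
```

For inpainting, U would be the row-selection matrix keeping the observed pixels of the patch; the per-patch choice of Gaussian model is what makes the estimator piecewise linear.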
Collaborative Hierarchical Sparse Modeling
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is done by solving an l_1-regularized
linear regression problem, usually called Lasso. In this work we first combine
the sparsity-inducing property of the Lasso model, at the individual feature
level, with the block-sparsity property of the group Lasso model, where sparse
groups of features are jointly encoded, obtaining a hierarchically structured
sparsity pattern. This results in the hierarchical Lasso, which shows
important practical modeling advantages. We then extend this approach to the
collaborative case, where a set of simultaneously coded signals share the same
sparsity pattern at the higher (group) level but not necessarily at the lower
one. Signals then share the same active groups, or classes, but not necessarily
the same active set. This is very well suited for applications such as source
separation. An efficient optimization procedure, which guarantees convergence
to the global optimum, is developed for these new models. The underlying
presentation of the new framework and optimization approach is complemented
with experimental examples and preliminary theoretical results.

Comment: To appear in CISS 201
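A minimal sketch of the hierarchical (sparse-group) encoding step described above, using a standard ISTA-style proximal-gradient loop: the penalty combines an elementwise l1 term with a group-l2 term, and its proximal operator is elementwise shrinkage followed by group shrinkage. Function names, regularization weights, and the solver choice are assumptions; the collaborative variant would additionally couple the group supports across a set of signals, which is not shown here:

```python
import numpy as np

def soft_threshold(a, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def hilasso_prox(a, groups, lam1, lam2):
    """Proximal operator of lam1*||a||_1 + lam2*sum_g ||a_g||_2 for
    non-overlapping groups: elementwise shrinkage, then group shrinkage."""
    out = soft_threshold(a, lam1)
    for g in groups:
        norm = np.linalg.norm(out[g])
        out[g] = 0.0 if norm <= lam2 else (1.0 - lam2 / norm) * out[g]
    return out

def hilasso_encode(x, D, groups, lam1=0.1, lam2=0.1, n_iters=200):
    """Proximal-gradient (ISTA-style) sketch of hierarchical Lasso encoding:
    min_a 0.5*||x - D a||^2 + lam1*||a||_1 + lam2*sum_g ||a_g||_2."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic data term
        a = hilasso_prox(a - grad / L, groups, lam1 / L, lam2 / L)
    return a
```

The two regularization weights control the trade-off: lam2 selects which groups (classes) are active, while lam1 sparsifies the coefficients within the selected groups.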