Collaborative Hierarchical Sparse Modeling
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is done by solving an l_1-regularized
linear regression problem, usually called Lasso. In this work we first combine
the sparsity-inducing property of the Lasso model, at the individual feature
level, with the block-sparsity property of the group Lasso model, where sparse
groups of features are jointly encoded, obtaining a hierarchically structured
sparsity pattern. This results in the hierarchical Lasso, which shows
important practical modeling advantages. We then extend this approach to the
collaborative case, where a set of simultaneously coded signals share the same
sparsity pattern at the higher (group) level but not necessarily at the lower
one. Signals then share the same active groups, or classes, but not necessarily
the same active set. This is very well suited for applications such as source
separation. An efficient optimization procedure, which guarantees convergence
to the global optimum, is developed for these new models. The underlying
presentation of the new framework and optimization approach is complemented
with experimental examples and preliminary theoretical results.
Comment: To appear in CISS 201
C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is performed by solving an
L1-regularized linear regression problem, commonly referred to as Lasso or
Basis Pursuit. In this work we combine the sparsity-inducing property of the
Lasso model at the individual feature level, with the block-sparsity property
of the Group Lasso model, where sparse groups of features are jointly encoded,
obtaining a hierarchically structured sparsity pattern. This results in the
Hierarchical Lasso (HiLasso), which shows important practical modeling
advantages. We then extend this approach to the collaborative case, where a set
of simultaneously coded signals share the same sparsity pattern at the higher
(group) level, but not necessarily at the lower (inside the group) level,
obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share
the same active groups, or classes, but not necessarily the same active set.
This model is very well suited for applications such as source identification
and separation. An efficient optimization procedure, which guarantees
convergence to the global optimum, is developed for these new models. The
underlying presentation of the new framework and optimization approach is
complemented with experimental examples and theoretical results regarding
recovery guarantees for the proposed models.
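The HiLasso encoding step combines the Lasso (per-feature) and Group Lasso (per-group) penalties in one objective. A minimal sketch of such an encoder is below, using plain ISTA with the sparse-group-lasso proximal operator rather than the specific optimization procedure developed in the paper; all names, parameter values, and the solver choice are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding: prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def group_shrink(v, t):
    # Block soft-thresholding: prox of t * ||.||_2
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1.0 - t / n) * v

def hilasso(A, b, groups, lam1, lam2, n_iter=500):
    """Proximal gradient (ISTA) for the HiLasso-style objective
       min_x 0.5 ||A x - b||^2 + lam1 ||x||_1 + lam2 * sum_g ||x_g||_2.
    The prox of the combined penalty is the element-wise soft-threshold
    followed by group-wise shrinkage (the sparse-group-lasso prox)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam1)
        for g in groups:                      # shrink each group as a block
            z[g] = group_shrink(z[g], step * lam2)
        x = z
    return x
```

Because the objective is convex and the step size is the inverse Lipschitz constant, this iteration converges to the global optimum, mirroring the guarantee stated in the abstract.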
Collaborative sources identification in mixed signals via hierarchical sparse modeling
A collaborative framework for detecting the different sources in mixed signals is presented in this paper. The approach is based on C-HiLasso, a convex collaborative hierarchical sparse model, and proceeds as follows. First, we build a structured dictionary for mixed signals by concatenating a set of sub-dictionaries, each of them learned to sparsely model one of a set of possible classes. Then, the coding of the mixed signal is performed by efficiently solving a convex optimization problem that combines standard sparsity with group and collaborative sparsity. The sources present in the mixture are identified by looking at the sub-dictionaries automatically selected in the coding. The collaborative filtering in C-HiLasso takes advantage of the temporal/spatial redundancy in the mixed signals, letting collections of samples collaborate in identifying the classes while allowing individual samples to have different internal sparse representations. This collaboration is critical to further stabilize the sparse representation of the signals, in particular the class/sub-dictionary selection. The internal sparsity inside the sub-dictionaries, as naturally incorporated by the hierarchical aspects of C-HiLasso, is critical to keep the model consistent with the essence of the sub-dictionaries, which have been trained for sparse representation of each individual class. We present applications to speaker and instrument identification and to texture separation. In the case of audio signals, we use sparse modeling to describe the short-term power spectrum envelopes of harmonic sounds. The proposed pitch-independent method automatically detects the number of sources in a recording.
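The collaborative class-selection step described above — declaring a source present when its sub-dictionary is active across the collection of samples — can be sketched as follows. This is an illustrative helper, not code from the paper: `X` holds sparse codes for many samples, `group_slices` marks each class's block in the concatenated dictionary, and `tau` is an arbitrary relative-energy threshold:

```python
import numpy as np

def identify_sources(X, group_slices, tau=0.05):
    """Collaborative class selection: X has one row per atom of the
    concatenated dictionary and one column per sample. A class is declared
    present when its sub-dictionary's coefficient block, pooled over ALL
    samples, carries a non-negligible share of the total energy."""
    energy = np.array([np.linalg.norm(X[s, :]) for s in group_slices])
    share = energy / (energy.sum() + 1e-12)   # relative energy per class
    return [g for g, e in enumerate(share) if e > tau]
```

Pooling the energy over samples is what stabilizes the class selection: a class weakly active in a few samples still accumulates evidence across the whole collection.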
Hierarchical Compound Poisson Factorization
Non-negative matrix factorization models based on a hierarchical
Gamma-Poisson structure capture user and item behavior effectively in extremely
sparse data sets, making them the ideal choice for collaborative filtering
applications. Hierarchical Poisson factorization (HPF) in particular has proved
successful for scalable recommendation systems with extreme sparsity. HPF,
however, suffers from a tight coupling of sparsity model (absence of a rating)
and response model (the value of the rating), which limits the expressiveness
of the latter. Here, we introduce hierarchical compound Poisson factorization
(HCPF) that has the favorable Gamma-Poisson structure and scalability of HPF to
high-dimensional extremely sparse matrices. More importantly, HCPF decouples
the sparsity model from the response model, allowing us to choose the most
suitable distribution for the response. HCPF can capture binary, non-negative
discrete, non-negative continuous, and zero-inflated continuous responses. We
compare HCPF with HPF on nine discrete and three continuous data sets and
conclude that HCPF captures the relationship between sparsity and response
better than HPF.
Comment: Will appear in Proceedings of the 33rd International Conference on
Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 4
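The decoupling described above comes from the compound Poisson construction: an observation is a sum of N i.i.d. "element" draws with N Poisson-distributed, so N = 0 yields an exact zero (the sparsity model) while the element distribution shapes the nonzero values (the response model). A small generative sketch, with an illustrative gamma element distribution:

```python
import numpy as np

def sample_compound_poisson(rate, element_sampler, n, rng):
    """Draw n values y = x_1 + ... + x_N with N ~ Poisson(rate).
    N = 0 gives an exact zero, so the Poisson rate alone controls sparsity
    while element_sampler controls the response distribution -- the
    decoupling that distinguishes HCPF from plain HPF."""
    counts = rng.poisson(rate, n)
    return np.array([element_sampler(k).sum() if k else 0.0 for k in counts])

# A gamma element distribution gives zero-inflated continuous responses;
# swapping the element distribution covers the other response types listed.
rng = np.random.default_rng(0)
y = sample_compound_poisson(0.1, lambda k: rng.gamma(2.0, 1.0, k), 10000, rng)
```

Note that P(y = 0) = exp(-rate) regardless of the element distribution, which is exactly why the response model can be chosen freely without disturbing the sparsity model.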
Multi-task Image Classification via Collaborative, Hierarchical Spike-and-Slab Priors
Promising results have been achieved in image classification problems by
exploiting the discriminative power of sparse representations for
classification (SRC). Recently, it has been shown that the use of
\emph{class-specific} spike-and-slab priors in conjunction with the
class-specific dictionaries from SRC is particularly effective in low training
scenarios. As a logical extension, we build on this framework for multitask
scenarios, wherein multiple representations of the same physical phenomena are
available. We experimentally demonstrate the benefits of mining joint
information from different camera views for multi-view face recognition.
Comment: Accepted to the International Conference on Image Processing (ICIP) 201
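A spike-and-slab prior places an exact point mass at zero (the spike) alongside a continuous component (the slab), which is what makes it sparsity-inducing for the coefficients above. A minimal sampling sketch, with illustrative parameter names rather than the class-specific parameterization used in the paper:

```python
import numpy as np

def sample_spike_and_slab(pi, slab_scale, size, rng):
    """Draw coefficients from a spike-and-slab prior: with probability pi a
    coefficient comes from the Gaussian 'slab' (std slab_scale); otherwise
    it sits exactly on the 'spike' at zero."""
    in_slab = rng.random(size) < pi                     # slab indicators
    return np.where(in_slab, rng.normal(0.0, slab_scale, size), 0.0)
```

Making `pi` class-specific, as in the cited framework, lets each class impose its own level of sparsity on its dictionary coefficients.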
Collaborative Deep Learning for Recommender Systems
Collaborative filtering (CF) is a successful approach commonly used by many
recommender systems. Conventional CF-based methods use the ratings given to
items by users as the sole source of information for learning to make
recommendations. However, the ratings are often very sparse in many
applications, causing CF-based methods to degrade significantly in their
recommendation performance. To address this sparsity problem, auxiliary
information such as item content information may be utilized. Collaborative
topic regression (CTR) is an appealing recent method that takes this approach,
tightly coupling the two components that learn from two different sources of
information. Nevertheless, the latent representation learned by CTR may not be
very effective when the auxiliary information is very sparse. To address this
problem, we generalize recent advances in deep learning from i.i.d. input to
non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian
model called collaborative deep learning (CDL), which jointly performs deep
representation learning for the content information and collaborative filtering
for the ratings (feedback) matrix. Extensive experiments on three real-world
datasets from different domains show that CDL can significantly advance the
state of the art.
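The core idea of the joint learning above — item factors pulled toward a learned encoding of item content while user factors fit the ratings — can be illustrated in miniature. This toy sketch substitutes a linear ridge encoder for the stacked denoising autoencoder actually used in CDL, and all names and hyper-parameters are illustrative:

```python
import numpy as np

def cdl_linear_sketch(R, C, k=3, lam_u=0.1, lam_v=1.0, lam_w=0.1, iters=20):
    """Toy stand-in for CDL's joint objective with a LINEAR content encoder:
    users U fit the rating matrix R, item factors V trade off fitting R
    against staying close to the content encoding C @ W, and W is learned
    from content C.  Alternating ridge updates, each a closed-form solve."""
    rng = np.random.default_rng(0)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    W = np.zeros((C.shape[1], k))
    for _ in range(iters):
        # users: ridge regression of ratings onto current item factors
        U = R @ V @ np.linalg.inv(V.T @ V + lam_u * np.eye(k))
        # items: fit ratings while staying near the content encoding C @ W
        V = (R.T @ U + lam_v * C @ W) @ np.linalg.inv(U.T @ U + lam_v * np.eye(k))
        # encoder: ridge regression from content features to item factors
        W = np.linalg.solve(C.T @ C + lam_w * np.eye(C.shape[1]), C.T @ V)
    return U, V, W
```

Replacing the linear map `C @ W` with a deep encoder recovers the shape of the hierarchical Bayesian model the abstract describes: content representation learning and collaborative filtering regularize each other through the shared item factors.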