A linear approach for sparse coding by a two-layer neural network
Many approaches that transform classification problems from non-linear to
linear through feature transformation have recently been presented in the
literature. These notably include sparse coding methods and deep neural
networks. However, many of these approaches require re-running a learning
process whenever unseen input vectors are presented, or else involve large
numbers of parameters and hyper-parameters that must be chosen through
cross-validation, which increases running time dramatically.
In this paper, we propose and experimentally investigate a new approach
intended to overcome both kinds of limitation. The proposed approach makes
use of a linear auto-associative network (called SCNN) with just one hidden
layer. Combining this architecture with a specific error function to be
minimized enables one to learn a linear encoder that computes a sparse code
as similar as possible to the one obtained by re-training the neural
network. Importantly, the linearity of SCNN and the choice of the error
function allow reduced running time in the learning phase. The proposed
architecture is evaluated on two standard machine learning tasks, and its
performance is compared with that of recently proposed non-linear
auto-associative neural networks. The overall results suggest that linear
encoders can be profitably used to obtain sparse data representations in
machine learning problems, provided that an appropriate error function is
used during the learning phase.
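
A minimal sketch of the kind of model the abstract describes: a fully linear auto-associative network with one hidden layer, trained so that the hidden code becomes sparse. The abstract does not give the SCNN error function, so the reconstruction-plus-L1 objective below, together with the learning rate, penalty weight, and other hyper-parameters, are assumptions made only for illustration.

```python
# Hedged sketch: linear one-hidden-layer auto-associative network with a
# sparsity-inducing penalty on the hidden code. The objective is an assumed
# stand-in (0.5*||X W_e W_d - X||^2 + lam*||X W_e||_1), not the paper's
# actual SCNN error function.
import numpy as np

rng = np.random.default_rng(0)

def train_linear_sparse_autoencoder(X, n_hidden=64, lam=0.05, lr=0.1, epochs=500):
    """X: (n_samples, n_features). Returns linear encoder W_e and decoder W_d."""
    n_samples, n_features = X.shape
    W_e = rng.normal(scale=0.1, size=(n_features, n_hidden))   # linear encoder
    W_d = rng.normal(scale=0.1, size=(n_hidden, n_features))   # linear decoder
    for _ in range(epochs):
        H = X @ W_e                      # hidden code, no non-linearity
        R = H @ W_d - X                  # reconstruction residual
        # Gradients of (0.5*||R||^2 + lam*||H||_1) / n_samples
        grad_d = H.T @ R / n_samples
        grad_e = X.T @ (R @ W_d.T + lam * np.sign(H)) / n_samples
        W_d -= lr * grad_d
        W_e -= lr * grad_e
    return W_e, W_d

# Usage: encoding unseen data is a single matrix product, with no re-training
X = rng.normal(size=(500, 32))
W_e, W_d = train_linear_sparse_autoencoder(X)
codes = X @ W_e
```

Because the learned encoder is a single matrix, coding unseen inputs reduces to one matrix product, which is the running-time advantage the abstract attributes to the linear architecture.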
Analyzing sparse dictionaries for online learning with kernels
Many signal processing and machine learning methods share essentially the
same linear-in-the-parameters model, with as many parameters as available
samples, as is the case in kernel-based machines. Sparse approximation is
essential in many disciplines, with new challenges emerging in online
learning with kernels. To this end, several sparsity measures have been
proposed in the literature to quantify the sparsity of dictionaries and to
construct relevant ones, the most prolific being the distance, approximation,
coherence and Babel measures. In this paper, we analyze sparse dictionaries
based on these measures. By conducting an eigenvalue analysis, we show that
these sparsity measures share many properties, including the linear
independence condition, and that each induces a well-posed optimization
problem. Furthermore, we prove that there exists a quasi-isometry between the
parameter (i.e., dual) space and the dictionary's induced feature space.
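
For reference, the coherence and Babel measures named above have standard definitions that can be computed directly from the dictionary's Gram matrix. The sketch below uses those standard definitions and assumes unit-norm atoms; the paper's exact formulations may differ.

```python
# Hedged sketch: coherence and Babel measures of a dictionary with unit-norm
# atoms, using the standard definitions (not necessarily the paper's exact ones).
import numpy as np

def coherence(D):
    """D: (dim, n_atoms) with unit-norm columns. Largest off-diagonal |<d_i, d_j>|."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def babel(D, m):
    """Babel measure mu_1(m): worst-case sum of the m largest cross-correlations
    of any atom with the remaining atoms."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    sorted_rows = np.sort(G, axis=1)[:, ::-1]   # per-atom correlations, descending
    return sorted_rows[:, :m].sum(axis=1).max()

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                  # normalize atoms
print(coherence(D), babel(D, m=3))
```

For m = 1 the Babel measure reduces to the coherence, which gives a quick sanity check on the two functions.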
Collaborative Filtering via Group-Structured Dictionary Learning
Structured sparse coding and the related structured dictionary learning
problems are novel research areas in machine learning. In this paper we present
a new application of structured dictionary learning to collaborative
filtering-based recommender systems. Our extensive numerical experiments demonstrate that
the presented technique outperforms its state-of-the-art competitors and has
several advantages over approaches that do not put structured constraints on
the dictionary elements.
Comment: A compressed version of the paper has been accepted for publication
at the 10th International Conference on Latent Variable Analysis and Source
Separation (LVA/ICA 2012).
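
As a purely illustrative sketch of what a structured constraint on a sparse code can look like (this is not the algorithm of the paper), the following solves a group-lasso sparse coding problem by proximal gradient descent; the dictionary, group partition, and regularization weight are arbitrary assumptions.

```python
# Hedged sketch: group-sparse coding via ISTA with group soft-thresholding,
# illustrating one common form of structured sparsity. Not the paper's method.
import numpy as np

def group_sparse_code(x, D, groups, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam * sum_g ||a_g||_2.
    groups: list of index arrays partitioning the atoms (columns) of D."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - step * grad
        for g in groups:                           # group soft-thresholding (proximal step)
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0 else max(0.0, 1 - step * lam / norm_g) * z[g]
        a = z
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(30, 60))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
groups = [np.arange(i, i + 6) for i in range(0, 60, 6)]
a = group_sparse_code(rng.normal(size=30), D, groups)
```

The group penalty drives entire groups of coefficients to zero together, which is the sense in which "structured constraints" on the code (and, in dictionary learning, on the dictionary elements) differ from plain entrywise sparsity.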