Sparse molecular image representation
Sparsity-based models have proven to be very effective in many image processing applications. The notion of sparsity has recently been extended to structured sparsity models, where not only the number of components but also their support is important. This paper goes one step further and proposes a new model in which signals are composed of a small number of molecules, each of which is a linear combination of a few elementary functions in a dictionary. Our model takes into account the energy of the signal components in addition to their support. We study our prior in detail and propose a novel sparse coding algorithm that permits signal-dependent versions of the molecules. Our experiments demonstrate the benefits of the new image model in various restoration tasks and confirm the effectiveness of priors that extend sparsity in flexible ways, especially for inverse problems with low-quality data.
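A minimal synthetic sketch of the "molecule" signal model described in this abstract is given below: molecules are sparse, weighted combinations of dictionary atoms, and a signal is a combination of a few molecules. All names and sizes are illustrative, not from the paper.

```python
# Illustrative sketch of a molecule-structured sparse model (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_atoms, n_molecules = 64, 128, 10

# Dictionary of elementary functions (atoms), one per column, unit norm.
D = rng.standard_normal((n_dims, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Each molecule is a sparse coefficient vector over the dictionary:
# a few atoms with specific weights, so support *and* energy matter.
molecules = np.zeros((n_atoms, n_molecules))
for j in range(n_molecules):
    support = rng.choice(n_atoms, size=3, replace=False)
    molecules[support, j] = rng.standard_normal(3)

# A signal is a linear combination of a small number of molecules.
active = rng.choice(n_molecules, size=2, replace=False)
weights = rng.standard_normal(2)
atom_coeffs = molecules[:, active] @ weights   # structured, molecule-shaped support
signal = D @ atom_coeffs

print("non-zero atoms in the signal's representation:", np.count_nonzero(atom_coeffs))
```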
Semi-Supervised Sparse Coding
Sparse coding approximates a data sample as a sparse linear combination of
basic codewords and uses the sparse codes as a new representation. In this
paper, we investigate learning discriminative sparse codes by sparse coding in
a semi-supervised manner, where only a few training samples are labeled. Using
the manifold structure spanned by both the labeled and unlabeled samples,
together with the constraints provided by the labels of the labeled samples,
we estimate class labels for all the samples. Furthermore, to improve the
discriminative ability of the learned sparse codes, we assume that the class
labels can be predicted directly from the sparse codes with a linear
classifier. By solving for the codebook, sparse codes, class labels, and
classifier parameters simultaneously in a unified objective function, we
develop a semi-supervised sparse coding algorithm. Experiments on two
real-world pattern recognition problems demonstrate the advantage of the
proposed method over supervised sparse coding methods on partially labeled
data sets.
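The sketch below illustrates how a unified objective of this kind can combine reconstruction, sparsity, manifold (graph Laplacian) regularization, and a linear classification loss on the labeled subset. The exact terms, weights, and update rules in the paper may differ; the function names and parameters here are assumptions for illustration only.

```python
# Hedged sketch of a semi-supervised sparse coding objective (assumed form).
import numpy as np

def semi_supervised_sc_objective(X, D, S, W, Y, labeled, L,
                                 alpha=0.1, beta=1.0, gamma=0.1):
    """X: (d, n) data, D: (d, k) codebook, S: (k, n) sparse codes,
    W: (c, k) linear classifier, Y: (c, n) label indicators,
    labeled: boolean mask of labeled samples, L: (n, n) graph Laplacian."""
    reconstruction = np.linalg.norm(X - D @ S, 'fro') ** 2       # data fidelity
    sparsity = alpha * np.abs(S).sum()                            # l1 sparsity on codes
    manifold = gamma * np.trace(S @ L @ S.T)                      # smooth codes over the data graph
    classification = beta * np.linalg.norm(
        Y[:, labeled] - W @ S[:, labeled], 'fro') ** 2            # labels predicted from codes
    return reconstruction + sparsity + manifold + classification
```

In an alternating scheme, each variable (codebook, codes, labels, classifier) would be updated in turn while the others are held fixed; only the objective itself is sketched here.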
Analysis, Visualization, and Transformation of Audio Signals Using Dictionary-based Methods
Bayesian orthogonal component analysis for sparse representation
This paper addresses the problem of identifying a lower dimensional space
where observed data can be sparsely represented. This under-complete dictionary
learning task can be formulated as a blind separation problem of sparse sources
linearly mixed with an unknown orthogonal mixing matrix. This issue is
formulated in a Bayesian framework. First, the unknown sparse sources are
modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted
mixture of an atom at zero and a Gaussian distribution is proposed as the
prior distribution for the unobserved sources. A non-informative prior
distribution defined on an appropriate Stiefel manifold is selected for the
mixing matrix.
The Bayesian inference on the unknown parameters is conducted using a Markov
chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is
designed to generate samples asymptotically distributed according to the joint
posterior distribution of the unknown model parameters and hyperparameters.
These samples are then used to approximate the joint maximum a posteriori
estimator of the sources and mixing matrix. Simulations conducted on synthetic
data are reported to illustrate the performance of the method for recovering
sparse representations. An application to sparse coding with an under-complete
dictionary is finally investigated.
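The following brief sketch illustrates only the generative model described above, with Bernoulli-Gaussian sources mixed by an orthonormal matrix; it does not implement the partially collapsed Gibbs sampler, and the parameter values are illustrative assumptions.

```python
# Sketch of the assumed generative model: Bernoulli-Gaussian sparse sources
# mixed by an orthonormal matrix (a point on the Stiefel manifold).
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_obs_dim, n_samples = 8, 16, 200
p_active, sigma2 = 0.2, 1.0   # Bernoulli weight and Gaussian variance of the prior

# Bernoulli-Gaussian sources: exactly zero with probability 1 - p_active,
# Gaussian otherwise (the "atom at zero + Gaussian" mixture).
mask = rng.random((n_sources, n_samples)) < p_active
sources = mask * rng.normal(0.0, np.sqrt(sigma2), (n_sources, n_samples))

# Orthonormal mixing matrix drawn via QR decomposition of a Gaussian matrix.
H, _ = np.linalg.qr(rng.standard_normal((n_obs_dim, n_sources)))

observations = H @ sources   # noiseless mixtures; the full model adds noise and hyperpriors
```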
Non-negative matrix factorization with sparseness constraints
Non-negative matrix factorization (NMF) is a recently developed technique for
finding parts-based, linear representations of non-negative data. Although it
has been applied successfully in several applications, it does not always
result in parts-based representations. In this paper, we show how explicitly
incorporating the notion of `sparseness' improves the resulting decompositions.
Additionally, we provide complete MATLAB code both for standard NMF and for our
extension. Our hope is that this will further the application of these methods
to solving novel data-analysis problems.
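Below is a short Python sketch of the sparseness measure commonly used in this line of work, sigma(x) = (sqrt(n) - ||x||_1/||x||_2)/(sqrt(n) - 1), shown alongside plain multiplicative-update NMF for comparison. The full sparseness-constrained algorithm additionally projects the factors to a target sparseness after each update, which is omitted here; all variable names are illustrative.

```python
# Sketch: sparseness measure and unconstrained multiplicative-update NMF.
import numpy as np

def sparseness(x):
    """Sparseness in [0, 1]: 0 for a flat vector, 1 for a 1-sparse vector."""
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Standard NMF via Lee-Seung multiplicative updates (Euclidean loss)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(2).standard_normal((30, 50)))
W, H = nmf(V, rank=5)
print("column sparseness of W:", [round(sparseness(W[:, j]), 2) for j in range(5)])
```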