SparseLab Architecture
Changes and Enhancements for Release 2.0: four papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solution May be Sparse"; "Why Simple Shrinkage is Still Relevant for Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; and "On the Stability of Basis Pursuit in the Presence of Noise". This document describes the architecture of SparseLab version 2.0. It is designed for users who already have day-to-day interaction with the package and now need specific details about its architecture, for example to modify components for their own research.
On the stable recovery of the sparsest overcomplete representations in presence of noise
Let x be a signal to be sparsely decomposed over a redundant dictionary A, i.e., a sparse coefficient vector s has to be found such that x = As. It is known that this problem is inherently unstable against noise, and to overcome this instability, the authors of [Stable Recovery; Donoho et al., 2006] proposed to use an "approximate" decomposition, that is, a decomposition satisfying ||x - As|| < \delta, rather than the exact equality x = As. They then showed that if there is a decomposition with ||s||_0 < (1 + M^{-1})/2, where M denotes the coherence of the dictionary, this decomposition is stable against noise. On the other hand, it is known that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its stability against noise had been proved only for the much more restrictive decompositions satisfying ||s||_0 < (1 + M^{-1})/2, because usually (1 + M^{-1})/2 << spark(A)/2.
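A small numerical sketch can make the gap between these two bounds concrete. The following Python snippet (an illustrative assumption, not code from the paper; dictionary sizes are arbitrary) computes the mutual coherence M of a random unit-norm dictionary, the coherence-based stability bound (1 + M^{-1})/2, and the uniqueness bound spark(A)/2 by brute force:

# Compare the coherence-based stability bound with the uniqueness bound
# for a small random dictionary. Sizes are illustrative assumptions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, m = 8, 16                      # signal dimension, number of atoms
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)    # unit-norm atoms

# Mutual coherence M: largest absolute inner product between distinct atoms.
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
M = G.max()
coherence_bound = (1 + 1 / M) / 2

def spark(A, tol=1e-10):
    """Size of the smallest linearly dependent column subset
    (brute force; only feasible for tiny dictionaries)."""
    n, m = A.shape
    for k in range(2, n + 2):
        for idx in combinations(range(m), k):
            if np.linalg.matrix_rank(A[:, list(idx)], tol=tol) < k:
                return k
    return n + 1

print(f"coherence M = {M:.3f}")
print(f"stability bound (1 + 1/M)/2 = {coherence_bound:.2f}")
print(f"uniqueness bound spark(A)/2 = {spark(A) / 2:.1f}")

For a generic random dictionary, spark(A) = n + 1, so the uniqueness bound is far larger than the coherence bound, which is exactly the gap the paper addresses.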
This limitation may not have been very important before, because ||s||_0 < (1 + M^{-1})/2 is also the bound that guarantees the sparse decomposition can be found by minimizing the l1 norm, a classic approach to sparse decomposition. However, with the availability of new algorithms for sparse decomposition, namely SL0 and Robust-SL0, it is important to know whether or not unique sparse decompositions with (1 + M^{-1})/2 < ||s||_0 < spark(A)/2 are stable. In this paper, we show that such decompositions are indeed stable. In other words, we extend the stability bound from ||s||_0 < (1 + M^{-1})/2 to the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all unique sparse decompositions are stably recoverable". Moreover, we see that sparser decompositions are "more stable".
Comment: Accepted in IEEE Transactions on Signal Processing on 4 May 2010. (c) 2010 IEEE.
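For context, SL0 works by replacing ||s||_0 with a smooth Gaussian surrogate and following a gradient-projection scheme while shrinking the smoothing parameter sigma. The sketch below illustrates that idea; the sigma schedule, step size, and test problem are assumptions for illustration, not the authors' exact settings:

# Minimal sketch of the Smoothed-L0 (SL0) idea: maximize
# sum(exp(-s^2 / (2 sigma^2))) subject to As = x, decreasing sigma.
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-norm feasible start
    sigma = 2 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient step on the smoothed-L0 surrogate
            s = s - mu * s * np.exp(-s**2 / (2 * sigma**2))
            # Project back onto the feasible set {s : As = x}
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decrease
    return s

# Tiny demo: recover a 3-sparse vector from an 8x16 system.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 16))
s_true = np.zeros(16)
s_true[[2, 7, 11]] = rng.standard_normal(3)
print(np.round(sl0(A, A @ s_true), 2))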
Sparse and spurious: dictionary learning with noise and outliers
A popular approach within the signal processing and machine learning communities consists of modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields, ranging from image to audio processing, only a few theoretical arguments support this empirical evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. In this paper, we consider a probabilistic model of sparse signals and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study covers the case of over-complete dictionaries, noisy signals, and possible outliers, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the noise level, can scale with respect to the dimension of the signals, the number of atoms, the sparsity, and the number of observations.
Comment: This is a substantially revised version of a first draft that appeared as a preprint titled "Local stability and robustness of sparse dictionary learning in the presence of noise", http://hal.inria.fr/hal-00737152. Published in IEEE Transactions on Information Theory, 2015.
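The non-convex procedure analyzed in the paper is the usual alternating scheme for sparse dictionary learning. A minimal sketch of that scheme is given below, alternating an l1-penalized coding step with a least-squares dictionary update; the penalty, sizes, and iteration counts are illustrative assumptions, not the paper's exact formulation:

# Minimal sketch of alternating sparse coding / dictionary update.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning(X, n_atoms, lam=0.1, outer=20, ista_iters=50):
    n, N = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    S = np.zeros((n_atoms, N))
    for _ in range(outer):
        # Coding step: ISTA on 0.5*||X - DS||_F^2 + lam*||S||_1
        L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of gradient
        for _ in range(ista_iters):
            S = soft_threshold(S - (D.T @ (D @ S - X)) / L, lam / L)
        # Dictionary step: least squares, then renormalize atoms
        D = X @ np.linalg.pinv(S)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, S

# Tiny demo: learn 12 atoms from 200 synthetic 8-dimensional signals.
X = np.random.default_rng(1).standard_normal((8, 200))
D, S = dictionary_learning(X, n_atoms=12)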
Knowledge-aided covariance matrix estimation and adaptive detection in compound-Gaussian noise
We address the problem of adaptive detection of a signal of interest embedded in colored noise modeled as a compound-Gaussian process. The covariance matrices of the primary and the secondary data share a common structure while having different power levels. A Bayesian approach is proposed here, where both the power levels and the structure are assumed to be random with appropriate distributions. Within this framework we propose minimum mean-square error (MMSE) and maximum a posteriori (MAP) estimators of the covariance structure and apply them to adaptive detection using the normalized matched filter (NMF) test statistic and an optimized generalized likelihood ratio test (GLRT) derived herein. Results, including comparisons with existing algorithms, are presented to illustrate the performance of the proposed algorithms. The key result is that the solutions presented herein improve performance over conventional ones, especially when only a small number of training data is available.
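The NMF test statistic referenced above is the standard normalized matched filter, Lambda = |p^H R^{-1} x|^2 / ((p^H R^{-1} p)(x^H R^{-1} x)). The sketch below uses a plain sample-covariance plug-in for the structure estimate; the paper's Bayesian MMSE/MAP estimators are not reproduced here, and the steering vector and data are illustrative assumptions:

# Minimal sketch of the normalized matched filter (NMF) detector.
import numpy as np

def nmf_statistic(x, p, R):
    """NMF statistic: |p^H R^{-1} x|^2 / ((p^H R^{-1} p)(x^H R^{-1} x))."""
    Ri_x = np.linalg.solve(R, x)
    Ri_p = np.linalg.solve(R, p)
    num = np.abs(np.vdot(p, Ri_x)) ** 2
    den = np.real(np.vdot(p, Ri_p)) * np.real(np.vdot(x, Ri_x))
    return num / den

# Tiny demo: K secondary (training) snapshots form a sample covariance.
rng = np.random.default_rng(0)
n, K = 8, 16
Z = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
R_hat = (Z @ Z.conj().T) / K              # sample covariance of secondary data
p = np.exp(1j * np.pi * np.arange(n))     # assumed steering vector
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
x = noise + 0.5 * p                       # primary snapshot with target
print(f"NMF statistic: {nmf_statistic(x, p, R_hat):.3f}")

By the Cauchy-Schwarz inequality the statistic lies in [0, 1], and it is compared against a threshold set for the desired false-alarm rate.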