On the stable recovery of the sparsest overcomplete representations in presence of noise
Let x be a signal to be sparsely decomposed over a redundant dictionary A,
i.e., a sparse coefficient vector s has to be found such that x=As. It is known
that this problem is inherently unstable against noise, and to overcome this
instability, the authors of [Stable Recovery; Donoho et al., 2006] have
proposed to use an "approximate" decomposition, that is, a decomposition
satisfying ||x - A s|| < \delta, rather than satisfying the exact equality x =
As. Then, they have shown that if there is a decomposition with ||s||_0 <
(1+M^{-1})/2, where M denotes the coherence of the dictionary, this
decomposition would be stable against noise. On the other hand, it is known
that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other
words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its
stability against noise has been proved only for the much more restrictive
decompositions satisfying ||s||_0 < (1+M^{-1})/2, because usually (1+M^{-1})/2
<< spark(A)/2.
This limitation may not have been very important before, because ||s||_0 <
(1+M^{-1})/2 is also the bound which guarantees that the sparse decomposition
can be found via minimizing the L1 norm, a classic approach for sparse
decomposition. However, with the availability of new algorithms for sparse
decomposition, namely SL0 and Robust-SL0, it would be important to know whether
or not unique sparse decompositions with (1+M^{-1})/2 < ||s||_0 < spark(A)/2
are stable. In this paper, we show that such decompositions are indeed stable.
In other words, we extend the stability bound from ||s||_0 < (1+M^{-1})/2 to
the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all
unique sparse decompositions are stably recoverable". Moreover, we see that
sparser decompositions are "more stable".
Comment: Accepted in IEEE Transactions on Signal Processing on 4 May 2010.
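For intuition about the gap between the two bounds, the following small numerical check (illustrative only, not from the paper; the dictionary size and rank tolerance are arbitrary choices) compares the coherence-based bound (1+M^{-1})/2 with the uniqueness bound spark(A)/2 for a tiny random dictionary:

```python
import itertools
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns (brute force; tiny matrices only)."""
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1  # every subset of columns is linearly independent

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))  # small random overcomplete dictionary
M = mutual_coherence(A)
print("coherence-based bound (1 + 1/M)/2:", (1 + 1 / M) / 2)
print("uniqueness bound       spark(A)/2:", spark(A) / 2)
```

For typical random overcomplete dictionaries the first printed bound is far smaller than the second, which is exactly the gap the paper closes.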
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.
Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes,
see http://ee.sharif.ir/~SLzero. File replaced because Fig. 5 was erroneously missing.
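As a rough illustration of the smoothed-L0 idea described above, the sketch below follows the published outline (minimum-L2-norm initialization, gradient steps on a Gaussian surrogate of the L0 norm, projection back onto {s : As = x}, and a decreasing sigma schedule); the step size, schedule, and iteration counts are assumed defaults, and the authors' MATLAB code at the URL above is the reference implementation:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Smoothed-L0 sketch: approximate ||s||_0 by a sum of Gaussians of width sigma,
    take gradient steps on that smooth surrogate, and project back onto the feasible
    set {s : A s = x}, while gradually shrinking sigma."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s**2 / (2 * sigma**2))    # gradient of the smooth surrogate
            s = s - mu * delta                            # move toward a sparser point
            s = s - A_pinv @ (A @ s - x)                  # project back onto {s : A s = x}
        sigma *= sigma_decrease
    return s

# Tiny usage example with a synthetic 3-sparse vector
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
s_true = np.zeros(50)
s_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x = A @ s_true
s_hat = sl0(A, x)
print("largest recovered entries at indices:", np.argsort(-np.abs(s_hat))[:3])
```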
Successive Concave Sparsity Approximation for Compressed Sensing
In this paper, based on a successively accuracy-increasing approximation of
the L0 norm, we propose a new algorithm for recovery of sparse vectors
from underdetermined measurements. The approximations are realized with a
certain class of concave functions that aggressively induce sparsity and their
closeness to the L0 norm can be controlled. We prove that the series of
the approximations asymptotically coincides with the L1 and L0
norms when the approximation accuracy changes from the worst fitting to the
best fitting. When measurements are noise-free, an optimization scheme is
proposed which leads to a number of weighted L1 minimization programs,
whereas, in the presence of noise, we propose two iterative thresholding
methods that are computationally appealing. A convergence guarantee for the
iterative thresholding method is provided, and, for a particular function in
the class of the approximating functions, we derive the closed-form
thresholding operator. We further present some theoretical analyses via the
restricted isometry, null space, and spherical section properties. Our
extensive numerical simulations indicate that the proposed algorithm closely
follows the performance of the oracle estimator for a range of sparsity levels
wider than those of the state-of-the-art algorithms.
Comment: Submitted to IEEE Trans. on Signal Processing.
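The paper's specific concave penalty and its closed-form thresholding operator are not reproduced here; as a generic illustration of the iterative thresholding scheme referred to above, the following sketch alternates a gradient step on the data-fit term with a thresholding step, using the ordinary L1 soft-threshold as a stand-in for the concave-penalty operator:

```python
import numpy as np

def iterative_thresholding(A, y, lam=0.1, step=None, n_iters=200):
    """Generic iterative (soft-)thresholding for noisy sparse recovery:
    gradient step on ||y - A s||_2^2 followed by a thresholding operator."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = s + step * (A.T @ (y - A @ s))                # gradient step on the data-fit term
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-thresholding
    return s
```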
On Recovery of Sparse Signals via L1 Minimization
This article considers constrained L1 minimization methods for the
recovery of high dimensional sparse signals in three settings: noiseless,
bounded error and Gaussian noise. A unified and elementary treatment is given
in these noise settings for two L1 minimization methods: the Dantzig
selector and L1 minimization with an L2 constraint. The results of
this paper improve the existing results in the literature by weakening the
conditions and tightening the error bounds. The improvement on the conditions
shows that signals with larger support can be recovered accurately. This paper
also establishes connections between restricted isometry property and the
mutual incoherence property. Some results of Candes, Romberg and Tao (2006) and
Donoho, Elad, and Temlyakov (2006) are extended.
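For concreteness, the two constrained L1 programs treated in the paper can be stated directly with a generic convex solver; the sketch below uses cvxpy (an assumed tool, not one mentioned by the authors), with delta and lam as hypothetical noise-level parameters:

```python
import cvxpy as cp
import numpy as np

def l1_with_l2_constraint(A, y, delta):
    """min ||s||_1  subject to  ||y - A s||_2 <= delta  (bounded-error setting)."""
    s = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(s)),
                      [cp.norm(y - A @ s, 2) <= delta])
    prob.solve()
    return s.value

def dantzig_selector(A, y, lam):
    """min ||s||_1  subject to  ||A^T (y - A s)||_inf <= lam  (Dantzig selector)."""
    s = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(s)),
                      [cp.norm_inf(A.T @ (y - A @ s)) <= lam])
    prob.solve()
    return s.value
```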
Blind Source Separation: the Sparsity Revolution
Over the last few years, the development of multi-channel sensors motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. We give here some essential insights into the use of sparsity in source separation and we outline the essential role of morphological diversity as a source of diversity or contrast between the sources. This paper overviews a sparsity-based BSS method coined Generalized Morphological Component Analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient blind source separation method. In remote sensing applications, the specificity of hyperspectral data should be accounted for; we extend the proposed GMCA framework to deal with hyperspectral data. In a general framework, GMCA provides a basis for multivariate data analysis in the scope of a wide range of classical multivariate data restoration problems. Numerical results are given in color image denoising and inpainting. Finally, GMCA is applied to the simulated ESA/Planck data and is shown to give effective astrophysical component separation.
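As a heavily simplified, hypothetical sketch of the alternating idea behind sparsity-based BSS (not the authors' full GMCA implementation, which works in redundant transform domains with a decreasing threshold schedule), one can alternate sparse source estimation and mixing-matrix estimation:

```python
import numpy as np

def sparse_bss_sketch(X, n_sources, n_iters=50, thresh=0.1):
    """Alternating scheme for X ~= A S with sparse sources, assumed sparse directly
    in the sample domain for simplicity (real GMCA uses redundant transforms)."""
    rng = np.random.default_rng(0)
    m, _ = X.shape
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    for _ in range(n_iters):
        # Update sources: least squares followed by soft-thresholding (sparsity prior)
        S = np.linalg.pinv(A) @ X
        S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)
        # Update mixing matrix: least squares, then renormalize its columns
        A = X @ np.linalg.pinv(S)
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, S
```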