On the stable recovery of the sparsest overcomplete representations in presence of noise
Let x be a signal to be sparsely decomposed over a redundant dictionary A,
i.e., a sparse coefficient vector s has to be found such that x=As. It is known
that this problem is inherently unstable against noise, and to overcome this
instability, the authors of [Stable Recovery; Donoho et al., 2006] have
proposed to use an "approximate" decomposition, that is, a decomposition
satisfying ||x - A s|| < \delta, rather than satisfying the exact equality x =
As. Then, they have shown that if there is a decomposition with ||s||_0 <
(1+M^{-1})/2, where M denotes the coherence of the dictionary, this
decomposition would be stable against noise. On the other hand, it is known
that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other
words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its
stability against noise has been proved only for the much more restrictive
decompositions satisfying ||s||_0 < (1+M^{-1})/2, because usually (1+M^{-1})/2
<< spark(A)/2.
This limitation may not have been very important before, because ||s||_0 <
(1+M^{-1})/2 is also the bound which guarantees that the sparse decomposition
can be found via minimizing the L1 norm, a classic approach for sparse
decomposition. However, with the availability of new algorithms for sparse
decomposition, namely SL0 and Robust-SL0, it would be important to know whether
or not unique sparse decompositions with (1+M^{-1})/2 < ||s||_0 < spark(A)/2
are stable. In this paper, we show that such decompositions are indeed stable.
In other words, we extend the stability bound from ||s||_0 < (1+M^{-1})/2 to
the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all
unique sparse decompositions are stably recoverable". Moreover, we see that
sparser decompositions are "more stable".

Comment: Accepted in IEEE Trans. on SP on 4 May 2010.
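To make the gap between the two bounds concrete, here is a small Python/NumPy sketch (ours, not the paper's) that computes the coherence bound (1+M^{-1})/2 and the uniqueness bound spark(A)/2 for a toy dictionary. Note that computing spark(A) is combinatorial, so the brute-force routine below is feasible only for very small matrices.

    import numpy as np
    from itertools import combinations

    def mutual_coherence(A):
        # Largest absolute off-diagonal entry of the Gram matrix
        # of the column-normalized dictionary.
        An = A / np.linalg.norm(A, axis=0)
        G = np.abs(An.T @ An)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def spark(A, tol=1e-10):
        # Size of the smallest linearly dependent set of columns
        # (exponential cost; toy dictionaries only).
        n, m = A.shape
        for k in range(1, m + 1):
            for cols in combinations(range(m), k):
                if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                    return k
        return m + 1  # all column subsets independent

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 8))          # small overcomplete dictionary
    M = mutual_coherence(A)
    print("coherence bound (1 + 1/M)/2 =", (1 + 1 / M) / 2)
    print("uniqueness bound spark(A)/2 =", spark(A) / 2)

For a random 4 by 8 dictionary, spark(A)/2 equals 2.5 almost surely, while the coherence bound is typically well below that, which illustrates why extending stability to the whole uniqueness range matters.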
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.

Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB
codes, see (http://ee.sharif.ir/~SLzero). File replaced because Fig. 5 was
erroneously missing.
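As a rough illustration of the idea, the following Python/NumPy sketch implements the smoothed-L0 recursion described above: a Gaussian approximation of the L0 norm is optimized by gradient steps followed by projection onto the constraint As = x, while the smoothing parameter sigma is gradually decreased. Parameter values here are illustrative, not the released MATLAB defaults.

    import numpy as np

    def sl0(A, x, sigma_decrease=0.5, mu=2.0, inner_iters=3, sigma_min=1e-4):
        # Approximate ||s||_0 by m - sum_i exp(-s_i^2 / (2 sigma^2))
        # and minimize it subject to As = x, shrinking sigma gradually
        # (graduated non-convexity).
        A_pinv = np.linalg.pinv(A)
        s = A_pinv @ x                        # minimum-L2-norm initial point
        sigma = 2.0 * np.max(np.abs(s))
        while sigma > sigma_min:
            for _ in range(inner_iters):
                delta = s * np.exp(-s**2 / (2 * sigma**2))
                s = s - mu * delta            # (scaled) gradient step
                s = s - A_pinv @ (A @ s - x)  # project back onto As = x
            sigma *= sigma_decrease
        return s

    # Toy usage: recover an 8-sparse vector from 50 random measurements.
    rng = np.random.default_rng(1)
    n, m, k = 50, 100, 8
    A = rng.standard_normal((n, m))
    s0 = np.zeros(m)
    s0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
    x = A @ s0
    print(np.linalg.norm(sl0(A, x) - s0))     # should be near zero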
A First Step to Convolutive Sparse Representation
In this paper, an extension of the sparse decomposition problem is considered
and an algorithm for solving it is presented. In this extension, it is known
that one of the shifted versions of a signal s (not necessarily the original
signal itself) has a sparse representation on an overcomplete dictionary, and
we are looking for the sparsest representation among the representations of all
the shifted versions of s. The proposed algorithm then finds the required
shift and the sparse representation simultaneously. Experimental results
demonstrate the performance of our algorithm.

Comment: 4 pages. In Proceedings of ICASSP 200
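Since the paper's simultaneous algorithm is not reproduced here, the following Python sketch only illustrates the problem statement: it brute-forces all circular shifts of s, sparsely decomposes each one, and keeps the shift with the sparsest representation. The solver argument can be any sparse decomposition routine, e.g. the sl0 sketch above.

    import numpy as np

    def sparsest_over_shifts(A, s, solver, eps=1e-3):
        # Try every circular shift of s, decompose it over A, and
        # keep the shift whose coefficient vector has the fewest
        # significant entries. (Brute force; the paper estimates
        # the shift and the representation simultaneously.)
        best = None
        for shift in range(len(s)):
            coef = solver(A, np.roll(s, shift))
            nnz = int(np.sum(np.abs(coef) > eps))
            if best is None or nnz < best[0]:
                best = (nnz, shift, coef)
        return best[1], best[2]   # estimated shift, representation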
Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed Rank Function
In this paper, the problem of matrix rank minimization under affine
constraints is addressed. State-of-the-art algorithms can only recover
matrices whose rank is much smaller than the largest rank for which the
solution of this optimization problem is unique. We propose an algorithm based on a
smooth approximation of the rank function, which practically improves recovery
limits on the rank of the solution. This approximation leads to a non-convex
program; thus, to avoid getting trapped in local solutions, we use the
following scheme. Initially, a rough approximation of the rank function subject
to the affine constraints is optimized. As the algorithm proceeds, finer
approximations of the rank are optimized and the solver is initialized with the
solution of the previous approximation until reaching the desired accuracy.
On the theoretical side, benefiting from the spherical section property, we
will show that the sequence of the solutions of the approximating function
converges to the minimum rank solution. On the experimental side, it will be
shown that the proposed algorithm, termed SRF (Smoothed Rank Function), can
recover matrices which are unique solutions of the rank
minimization problem and yet not recoverable by nuclear norm minimization.
Furthermore, it will be demonstrated that, in completing partially observed
matrices, the accuracy of SRF is considerably and consistently better than
that of several well-known algorithms when the number of revealed entries is
close to the minimum number of parameters that uniquely represent a low-rank
matrix.

Comment: Accepted in IEEE TSP on December 4th, 201
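The analogy with SL0 suggests the following hedged Python/NumPy sketch of the smoothed-rank idea, specialized to matrix completion (the paper treats general affine constraints): the Gaussian approximation is applied to the singular values, optimized by gradient steps on the spectrum, and the observed entries are re-imposed after each step while sigma shrinks. All parameter names and defaults are illustrative.

    import numpy as np

    def srf_complete(M_obs, mask, sigma_decrease=0.7, mu=1.0,
                     inner_iters=5, sigma_min=1e-3):
        # rank(X) is approximated by n - sum_i exp(-s_i^2 / (2 sigma^2)),
        # where s_i are the singular values of X; each step shrinks the
        # small singular values while leaving the large ones almost
        # unchanged, then projects back onto the observed entries.
        X = np.where(mask, M_obs, 0.0)
        sigma = 2.0 * np.linalg.norm(X, 2)    # start from the spectral norm
        while sigma > sigma_min:
            for _ in range(inner_iters):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                step = s * np.exp(-s**2 / (2 * sigma**2))
                X = X - mu * (U * step) @ Vt  # push small singular values to 0
                X = np.where(mask, M_obs, X)  # re-impose observations
            sigma *= sigma_decrease
        return X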
On the error of estimating the sparsest solution of underdetermined linear systems
Let A be an n by m matrix with m>n, and suppose that the underdetermined
linear system As=x admits a sparse solution s0 for which ||s0||_0 <
spark(A)/2. Such a sparse solution is unique due to a well-known uniqueness
theorem. Suppose now that we have somehow obtained a solution s_hat as an
estimate of s0, and suppose that s_hat is only `approximately sparse', that
is, many of its components are very small and nearly zero, but not exactly equal to
zero. Is such a solution necessarily close to the true sparsest solution? More
generally, is it possible to construct an upper bound on the estimation error
||s_hat-s0||_2 without knowing s0? The answer is positive, and in this paper we
construct such a bound based on minimal singular values of submatrices of A. We
will also state a tight bound, which is more complicated, but besides being
tight, enables us to study the case of random dictionaries and obtain
probabilistic upper bounds. We will also study the noisy case, that is, where
x=As+n. Moreover, we will see that as ||s0||_0 grows, s_hat needs to
approximate a sparse vector more closely in order to achieve a predetermined
guarantee on the maximum of ||s_hat-s0||_2. This can be seen as an
explanation of the fact that the estimation quality of sparse recovery
algorithms degrades as ||s0||_0 grows.

Comment: To appear in the December 2011 issue of IEEE Transactions on
Information Theory.
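To fix ideas, here is a hedged Python sketch of the simplest bound of this flavor, under the extra assumption that s_hat is exactly k-sparse (the paper's bounds are tighter and also handle approximately sparse s_hat): since s_hat - s0 is then 2k-sparse, ||A(s_hat - s0)||_2 is at least ||s_hat - s0||_2 times the smallest singular value over all 2k-column submatrices of A, and As0 = x in the noiseless case.

    import numpy as np
    from itertools import combinations

    def sigma_min_k(A, k):
        # Smallest singular value over all k-column submatrices of A
        # (exponential cost; toy sizes only).
        return min(np.linalg.svd(A[:, cols], compute_uv=False)[-1]
                   for cols in combinations(range(A.shape[1]), k))

    def error_bound(A, x, s_hat, k):
        # If s0 and s_hat are both k-sparse and As0 = x, then
        #   ||s_hat - s0||_2 <= ||A s_hat - x||_2 / sigma_min_k(A, 2k),
        # a computable bound that does not require knowing s0.
        return np.linalg.norm(A @ s_hat - x) / sigma_min_k(A, 2 * k)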