On the error of estimating the sparsest solution of underdetermined linear systems
Let A be an n by m matrix with m>n, and suppose that the underdetermined
linear system As=x admits a sparse solution s0 for which ||s0||_0 < 1/2
spark(A). Such a sparse solution is unique due to a well-known uniqueness
theorem. Suppose now that we have somehow obtained a solution s_hat as an
estimate of s0, and suppose that s_hat is only `approximately sparse', that is, many of its
components are very small and nearly zero, but not mathematically equal to
zero. Is such a solution necessarily close to the true sparsest solution? More
generally, is it possible to construct an upper bound on the estimation error
||s_hat-s0||_2 without knowing s0? The answer is positive, and in this paper we
construct such a bound based on minimal singular values of submatrices of A. We
will also state a tight bound, which is more complicated, but besides being
tight, enables us to study the case of random dictionaries and obtain
probabilistic upper bounds. We will also study the noisy case, that is, where
x=As+n. Moreover, we will see that as ||s0||_0 grows, obtaining a
predetermined guarantee on the maximum of ||s_hat-s0||_2 requires s_hat to be
a better sparse approximation. This can be seen as an explanation for the
fact that the estimation quality of sparse recovery algorithms degrades as
||s0||_0 grows.
Comment: To appear in the December 2011 issue of IEEE Transactions on Information Theory.
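The paper's actual bound is not reproduced here. As a hypothetical simplification, the sketch below (NumPy; the support set S and all problem sizes are illustrative assumptions) shows the mechanism the abstract alludes to: when the estimation error e = s_hat - s0 is supported on a small index set S, the observable residual divided by the smallest singular value of the sub-matrix A_S upper-bounds ||e||_2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 20                        # underdetermined: m > n
A = rng.standard_normal((n, m))

# Hypothetical setup: the error e = s_hat - s0 is supported on a small
# index set S (S and the sizes here are illustrative assumptions).
S = [2, 7, 11, 15]
e = np.zeros(m)
e[S] = rng.standard_normal(len(S))

# Since A @ e = A[:, S] @ e[S], the smallest singular value of the
# sub-matrix A_S gives  ||A e||_2 >= sigma_min(A_S) * ||e||_2,  i.e.
#   ||e||_2 <= ||A e||_2 / sigma_min(A_S).
sigma_min = np.linalg.svd(A[:, S], compute_uv=False)[-1]
bound = np.linalg.norm(A @ e) / sigma_min
print(bound >= np.linalg.norm(e))    # → True, by the singular-value bound
```

In the noiseless model x = A s0, the residual ||A s_hat - x||_2 equals ||A e||_2, so the right-hand side of the bound is computable without knowing s0.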
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.
Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes,
see (http://ee.sharif.ir/~SLzero). File replaced because Fig. 5 was missing
erroneously.
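As a sketch of the smoothed-L0 idea described above (the parameter values mu, L, and the sigma-decrease factor below are illustrative defaults, not the paper's tuned settings), the L0 norm is replaced by the smooth surrogate m - sum_i exp(-s_i^2 / (2 sigma^2)), which is optimized by gradient steps for a decreasing sequence of sigma while projecting each iterate back onto the feasible set {s : A s = x}:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_decrease=0.5, mu=2.0, L=3):
    """Sketch of smoothed-L0 minimization: for a decreasing sequence of
    sigma, take gradient steps on the smooth surrogate of the L0 norm and
    project each iterate back onto the solution set {s : A s = x}."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(L):
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * delta            # push small components toward zero
            s = s - A_pinv @ (A @ s - x)  # project back onto A s = x
        sigma *= sigma_decrease
    return s

# Toy demo: recover a 3-sparse solution of a 20 x 40 system.
rng = np.random.default_rng(1)
n, m = 20, 40
A = rng.standard_normal((n, m))
s0 = np.zeros(m)
s0[[3, 17, 25]] = [1.0, -2.0, 0.5]
x = A @ s0
s = sl0(A, x)
```

The projection step keeps every iterate an exact solution of A s = x, so the search is confined to the affine solution set; only the sparsity surrogate changes across the sigma schedule.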
Applications of sparse approximation in communications
Sparse approximation problems abound in many scientific, mathematical, and engineering applications. These problems are defined by two competing notions: we approximate a signal vector as a linear combination of elementary atoms, and we require that the approximation be both as accurate and as concise as possible. We introduce two natural and direct applications of these problems and algorithmic solutions in communications. We do so by constructing enhanced codebooks from base codebooks. We show that we can decode these enhanced codebooks in the presence of Gaussian noise. For MIMO wireless communication channels, we construct simultaneous sparse approximation problems and demonstrate that our algorithms can both decode the transmitted signals and estimate the channel parameters.
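The paper's own decoding algorithms are not reproduced here; as an illustration of the generic ingredient, the sketch below decodes a sparse combination of dictionary atoms observed in Gaussian noise using Orthogonal Matching Pursuit, a standard greedy sparse-approximation algorithm (the dictionary, sparsity level, and noise level are illustrative assumptions):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select the atom most
    correlated with the residual, then re-fit by least squares on the
    atoms selected so far."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

# Toy setup: a "codeword" is a 2-sparse combination of unit-norm atoms,
# observed through additive Gaussian noise (all sizes are illustrative).
rng = np.random.default_rng(2)
n, m, k = 64, 128, 2
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
s_true = np.zeros(m)
s_true[[5, 40]] = [2.0, -2.0]
y = D @ s_true + 0.01 * rng.standard_normal(n)
s_dec = omp(D, y, k)
```

The least-squares re-fit after each selection makes the residual orthogonal to all chosen atoms, which is what distinguishes OMP from plain matching pursuit.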