
    Polar Polytopes and Recovery of Sparse Representations

    Suppose we have a signal y which we wish to represent using a linear combination of a number of basis atoms a_i, y = sum_i x_i a_i = Ax. The problem of finding the minimum L0 norm representation for y is a hard problem. The Basis Pursuit (BP) approach proposes to find the minimum L1 norm representation instead, which corresponds to a linear program (LP) that can be solved using modern LP techniques, and several recent authors have given conditions for the BP (minimum L1 norm) and sparse (minimum L0 norm) representations to be identical. In this paper, we explore this sparse representation problem using the geometry of convex polytopes, as recently introduced into the field by Donoho. By considering the dual LP we find that the so-called polar polytope P* of the centrally-symmetric polytope P whose vertices are the atom pairs ±a_i is particularly helpful in providing us with geometrical insight into optimality conditions given by Fuchs and Tropp for non-unit-norm atom sets. In exploring this geometry we are able to tighten some of these earlier results, showing for example that the Fuchs condition is both necessary and sufficient for L1-unique-optimality, and that there are situations where Orthogonal Matching Pursuit (OMP) can eventually find all L1-unique-optimal solutions with m nonzeros, even if the exact recovery condition (ERC) fails for m, provided it is allowed to run for more than m steps.
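
    As a small, self-contained illustration of the LP formulation mentioned in this abstract, the Python sketch below solves the Basis Pursuit problem min ||x||_1 subject to Ax = y with scipy's linprog, using the standard split x = u - v with u, v >= 0. The dictionary A, the sparse coefficient vector and all dimensions are made-up assumptions for illustration, not data from the paper.

        # Basis Pursuit as a linear program (illustrative sketch, not the paper's code).
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, m = 10, 30                      # signal dimension, number of atoms (assumed)
        A = rng.standard_normal((n, m))    # dictionary whose columns are the atoms a_i
        x_true = np.zeros(m)
        x_true[[3, 17]] = [1.5, -2.0]      # a 2-sparse representation
        y = A @ x_true

        # min ||x||_1 s.t. Ax = y, via the split x = u - v with u, v >= 0:
        # minimize sum(u) + sum(v) subject to [A, -A][u; v] = y.
        c = np.ones(2 * m)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
        x_bp = res.x[:m] - res.x[m:]       # recombine the split variables

        print("recovered support:", np.flatnonzero(np.abs(x_bp) > 1e-6))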

    On Theorem 10 in "On Polar Polytopes and the Recovery of Sparse Representations" (vol 50, pg 2231, 2004)

    Published version: IEEE Transactions on Information Theory 59 (8): 5206-5209, Aug 2013. doi:10.1109/TIT.2013.225929

    Group polytope faces pursuit for recovery of block-sparse signals

    This is the accepted version of the article. The final publication is available at link.springer.com. http://www.springerlink.com/content/e0r61416446277w0

    Behavior of greedy sparse representation algorithms on nested supports

    Accepted for publication in: Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013), Vancouver, Canada, 26-31 May 2013, pp. 5710-5714

    Geometric approach to error correcting codes and reconstruction of signals

    We develop an approach through geometric functional analysis to error correcting codes and to reconstruction of signals from few linear measurements. An error correcting code encodes an n-letter word x into an m-letter word y in such a way that x can be decoded correctly when any r letters of y are corrupted. We prove that most linear orthogonal transformations Q from R^n into R^m form efficient and robust error correcting codes over the reals. The decoder (which corrects the corrupted components of y) is the metric projection onto the range of Q in the L_1 norm. An equivalent problem arises in signal processing: how to reconstruct a signal that belongs to a small class from few linear measurements? We prove that for most sets of Gaussian measurements, all signals of small support can be exactly reconstructed by L_1 norm minimization. This is a substantial improvement of recent results of Donoho and of Candes and Tao. An equivalent problem in combinatorial geometry is the existence of a polytope with a fixed number of facets and a maximal number of lower-dimensional faces. We prove that most sections of the cube form such polytopes.
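
    The decoder described in this abstract, metric projection onto the range of Q in the L_1 norm, can itself be written as a linear program. The sketch below is a hedged illustration under assumed dimensions, with a random Gaussian encoder Q and a made-up corruption pattern; none of the names or values are taken from the paper.

        # L1 decoding of a corrupted codeword y ~ Qx (illustrative sketch).
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        n, m = 20, 60                          # message length, codeword length (assumed)
        Q = rng.standard_normal((m, n))        # random linear encoder standing in for Q
        x_true = rng.standard_normal(n)
        y = Q @ x_true
        y[[2, 11, 40]] += 5.0 * rng.standard_normal(3)   # corrupt a few letters of y

        # min_x ||y - Qx||_1 as an LP with variables z = [x (free), t (>= 0)]:
        # minimize sum(t) subject to Qx - t <= y and -Qx - t <= -y, i.e. |y - Qx| <= t.
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[Q, -np.eye(m)], [-Q, -np.eye(m)]])
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        x_dec = res.x[:n]

        print("max decoding error:", np.max(np.abs(x_dec - x_true)))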

    On Modified l_1-Minimization Problems in Compressed Sensing

    Sparse signal modeling has received much attention recently because of its applications in medical imaging, group testing and radar technology, among others. Compressed sensing, a recently coined term, has shown us, both in theory and practice, that various signals of interest which are sparse or approximately sparse can be efficiently recovered using far fewer samples than suggested by the Shannon sampling theorem. Sparsity is the only prior information about an unknown signal assumed in traditional compressed sensing techniques. But in many applications, other kinds of prior information are also available, such as partial knowledge of the support, tree structure of the signal, and clustering of large coefficients around a small set of coefficients. In this thesis, we consider compressed sensing problems with prior information on the support of the signal, together with sparsity. We modify the regular l_1-minimization problems considered in compressed sensing using this extra information, and call these modified l_1-minimization problems. We show that partial knowledge of the support helps us to weaken sufficient conditions for the recovery of sparse signals using modified l_1-minimization problems. In the case of deterministic compressed sensing, we show that a sharp condition for sparse recovery can be improved using modified l_1-minimization problems. We also derive an algebraic necessary and sufficient condition for the modified basis pursuit problem, and use an open-source algorithm known as the l_1-homotopy algorithm to perform numerical experiments and compare the performance of modified Basis Pursuit Denoising with regular Basis Pursuit Denoising.
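
    One common form of such a modified l_1 problem (assumed here purely for illustration; the thesis may use a different formulation) excludes a known part T of the support from the l_1 penalty, minimizing the l_1 norm of x restricted to the complement of T subject to Ax = y. The sketch below implements this with scipy's linprog; the matrix A, the true support and the partially known set T are made-up assumptions.

        # Modified basis pursuit with partially known support (illustrative sketch).
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(2)
        n, m = 15, 40
        A = rng.standard_normal((n, m))
        x_true = np.zeros(m)
        support = [1, 5, 9, 22, 30]
        x_true[support] = rng.standard_normal(5)
        y = A @ x_true
        T = [1, 5, 9]                          # partially known support (an assumption)

        # Split x = u - v with u, v >= 0 and zero the l_1 weight on the known entries,
        # so only the coefficients outside T are penalized.
        w = np.ones(m)
        w[T] = 0.0
        c = np.concatenate([w, w])
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
        x_hat = res.x[:m] - res.x[m:]

        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))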

    An evaluation of the sparsity degree for sparse recovery with deterministic measurement matrices

    The paper deals with the estimation of the maximal sparsity degree for which a given measurement matrix allows sparse reconstruction through l1-minimization. This problem is a key issue in different applications featuring particular types of measurement matrices, for instance in the framework of tomography with a low number of views. In this framework, while the exact bound is NP-hard to compute, most classical criteria guarantee lower bounds that are numerically too pessimistic. In order to achieve an accurate estimation, we propose an efficient greedy algorithm that provides an upper bound for this maximal sparsity. Based on polytope theory, the algorithm consists in finding sparse vectors that cannot be recovered by l1-minimization. Moreover, in order to deal with noisy measurements, theoretical conditions leading to more restrictive but reasonable bounds are investigated. Numerical results are presented for discrete versions of tomography measurement matrices, which are stacked Radon transforms corresponding to different tomographic views.
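
    To illustrate the idea of upper-bounding the maximal sparsity degree by exhibiting sparse vectors that l1-minimization fails to recover, the sketch below runs a simple randomized search rather than the paper's polytope-based greedy algorithm; the stand-in matrix M, the trial counts and the thresholds are all assumptions made for the example.

        # Randomized search for l1-unrecoverable sparse vectors (illustrative sketch,
        # not the paper's algorithm): the smallest sparsity with a failure upper-bounds
        # the sparsity degree for which recovery is guaranteed on matrix M.
        import numpy as np
        from scipy.optimize import linprog

        def bp_recovers(M, x):
            """Solve min ||z||_1 s.t. Mz = Mx and report whether z equals x."""
            n, m = M.shape
            res = linprog(np.ones(2 * m), A_eq=np.hstack([M, -M]), b_eq=M @ x,
                          bounds=[(0, None)] * (2 * m))
            z = res.x[:m] - res.x[m:]
            return np.allclose(z, x, atol=1e-6)

        rng = np.random.default_rng(3)
        M = rng.standard_normal((12, 30))   # random stand-in for a deterministic matrix
        for k in range(1, 13):              # increasing candidate sparsity degree
            found_failure = False
            for _ in range(50):             # random sign/support trials at sparsity k
                x = np.zeros(30)
                idx = rng.choice(30, size=k, replace=False)
                x[idx] = rng.choice([-1.0, 1.0], size=k)
                if not bp_recovers(M, x):
                    found_failure = True
                    break
            if found_failure:
                print(f"found a {k}-sparse vector not recovered by l1-minimization;")
                print(f"the guaranteed sparsity degree for M is therefore at most {k - 1}")
                break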