    On sharp performance bounds for robust sparse signal recoveries

    It is well known in compressive sensing that l_1 minimization can recover the sparsest solution for a large class of underdetermined systems of linear equations, provided the signal is sufficiently sparse. In this paper, we compute sharp performance bounds for several different notions of robustness in sparse signal recovery via l_1 minimization. In particular, we determine necessary and sufficient conditions on the measurement matrix A under which l_1 minimization guarantees robust sparse signal recovery in the "weak", "sectional", and "strong" senses (e.g., robustness for "almost all" approximately sparse signals, or instead for "all" approximately sparse signals). Based on these characterizations, we compute sharp bounds on the tradeoff between signal sparsity and signal recovery robustness in each of these senses. Our results are based on a high-dimensional geometric analysis of the null space of the measurement matrix A. They generalize the known threshold results for purely sparse signals and also offer, from a null-space perspective, new insight into l_1 minimization for recovering purely sparse signals.
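
    A minimal numerical sketch of the kind of experiment these bounds speak to: recover an approximately sparse signal by l_1 minimization (basis pursuit, posed as a linear program) and compare the recovery error with the l_1 tail left by the best k-term approximation. The dimensions, the Gaussian measurement matrix, and the use of scipy.optimize.linprog are illustrative choices, not the paper's setup.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, m, k = 200, 80, 8                    # ambient dimension, measurements, sparsity

        # Approximately sparse signal: k large entries plus a small dense tail.
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = 5.0 * rng.standard_normal(k)
        x += 0.01 * rng.standard_normal(n)

        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x

        # Basis pursuit  min ||x||_1  s.t.  Ax = y,  as an LP via x = u - v, u, v >= 0.
        c = np.ones(2 * n)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
        x_hat = res.x[:n] - res.x[n:]

        tail = np.sort(np.abs(x))[:-k].sum()    # l_1 mass outside the k largest entries
        print("recovery error ||x_hat - x||_1 :", np.abs(x_hat - x).sum())
        print("best k-term l_1 tail of x      :", tail)

    A robustness bound of the "weak" or "strong" kind discussed above controls the first quantity in terms of the second.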

    Compressed sensing with combinatorial designs: theory and simulations

    In 'An asymptotic result on compressed sensing matrices', a new construction for compressed sensing matrices using combinatorial design theory was introduced. In this paper, we use deterministic and probabilistic methods to analyse the performance of matrices obtained from this construction. We provide new theoretical results and detailed simulations. These simulations indicate that the construction is competitive with Gaussian random matrices, and that recovery is tolerant to noise. A new recovery algorithm tailored to the construction is also given.
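
    As a small stand-in for the simulations described above, the sketch below runs noisy recovery with a generic sensing matrix and a generic algorithm. The combinatorial-design construction and the tailored recovery algorithm are not reproduced here; a Gaussian matrix and scikit-learn's OrthogonalMatchingPursuit are assumed as placeholders, with ad hoc dimensions and noise level.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(1)
        n, m, k, sigma = 256, 100, 10, 0.01

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # placeholder sensing matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x + sigma * rng.standard_normal(m)     # noisy measurements

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
        x_hat = omp.coef_

        print("relative error   :", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
        print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))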

    Accuracy guarantees for L1-recovery

    We discuss two new methods for recovering sparse signals from noisy observations based on ℓ_1 minimization. They are closely related to well-known techniques such as the Lasso and the Dantzig Selector; however, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters, we are able to construct estimators that possess better statistical properties than the commonly used ones. We also show how these techniques allow us to provide efficiently computable accuracy bounds for the Lasso and the Dantzig Selector. We link our performance estimates to well-known results in compressive sensing and justify the proposed approach with an oracle inequality relating the properties of the recovery algorithms to the best estimation performance achievable when the signal support is known. Finally, we demonstrate how the estimates can be computed using the Non-Euclidean Basis Pursuit algorithm.
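
    For reference, the sketch below sets up the two baseline estimators named above, the Lasso and the Dantzig Selector, on synthetic noisy data; it illustrates the objects being compared, not the paper's verifiable bounds or its Non-Euclidean Basis Pursuit algorithm. The problem sizes and the penalty level lam are ad hoc assumptions.

        import numpy as np
        from scipy.optimize import linprog
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n, m, k, sigma = 120, 60, 6, 0.05
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x + sigma * rng.standard_normal(m)
        lam = 2 * sigma * np.sqrt(np.log(n) / m)       # penalty of the usual order

        # Lasso:  min (1/2m) ||Ax - y||_2^2 + lam ||x||_1
        x_lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A, y).coef_

        # Dantzig Selector:  min ||x||_1  s.t.  ||A^T (Ax - y)||_inf <= lam,
        # written as an LP with x = u - v, u, v >= 0.
        G, b = A.T @ A, A.T @ y
        A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
        b_ub = np.concatenate([lam + b, lam - b])
        res = linprog(np.ones(2 * n), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (2 * n), method="highs")
        x_ds = res.x[:n] - res.x[n:]

        print("Lasso error  :", np.linalg.norm(x_lasso - x))
        print("Dantzig error:", np.linalg.norm(x_ds - x))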

    Computational Complexity versus Statistical Performance on Sparse Recovery Problems

    We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes for first-order methods solving a broad range of compressed sensing problems, where sharpness at the optimum controls convergence speed. We show that for sparse recovery problems, this sharpness can be written as a condition number, given by the ratio between the true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure over all signals matches the restricted singular value of the observation matrix, which controls robust recovery performance. In both cases, then, a single parameter directly controls both the computational complexity and the recovery performance of compressed sensing problems. Numerical experiments illustrate these points using several classical algorithms.
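
    To make the notion of a restart scheme concrete, the sketch below applies a generic objective-based adaptive restart to FISTA on a Lasso problem. It only illustrates the mechanism the abstract refers to, resetting the momentum of an accelerated first-order method, and is not the paper's restart scheme or its sharpness-based analysis; all problem parameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        m, n, k, lam = 100, 300, 10, 0.05
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        obj = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

        x = z = np.zeros(n)
        t, f_prev = 1.0, np.inf
        for it in range(2000):
            x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)   # proximal gradient step
            f_new = obj(x_new)
            if f_new > f_prev:                                 # objective went up:
                t, z = 1.0, x                                  # restart (drop the momentum)
                continue
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # Nesterov momentum
            x, t, f_prev = x_new, t_new, f_new

        print("objective:", obj(x), "  recovery error:", np.linalg.norm(x - x_true))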

    Compressive Sensing over the Grassmann Manifold: a Unified Geometric Framework

    ℓ_1 minimization is often used to find sparse solutions of under-determined linear systems. In this paper we focus on finding sharp performance bounds for recovering approximately sparse signals using ℓ_1 minimization, possibly under noisy measurements. While the restricted isometry property is a powerful tool for analyzing the recovery of approximately sparse signals from noisy measurements, the known bounds on the achievable sparsity level (by "sparsity" we mean the number of nonzero or significant elements in a signal vector) can be quite loose. The neighborly polytope analysis, which yields sharp bounds for ideally sparse signals, cannot be readily generalized to approximately sparse signals. Starting from a necessary and sufficient condition for achieving a given signal recovery accuracy, the "balancedness" property of linear subspaces, we give a unified null-space Grassmann angle-based geometric framework for analyzing the performance of ℓ_1 minimization. By investigating this "balancedness" property, the framework characterizes sharp quantitative tradeoffs between signal sparsity and recovery accuracy for ℓ_1 optimization. As a consequence, it generalizes the neighborly polytope result for ideally sparse signals. Besides robustness in the "strong" sense for all sparse signals, we also discuss the notions of "weak" and "sectional" robustness. Our results concern fundamental properties of linear subspaces and so may be of independent mathematical interest.
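
    A small numerical sketch of the "balancedness" idea discussed above: for vectors w in the null space of A, compare the ℓ_1 mass carried by the k largest-magnitude coordinates with the mass on the remaining coordinates. Sampling random null-space vectors only gives a heuristic picture, since the actual condition quantifies over the entire null space and all support sets; the sizes below are illustrative assumptions.

        import numpy as np
        from scipy.linalg import null_space

        rng = np.random.default_rng(4)
        n, m, k = 100, 60, 10
        A = rng.standard_normal((m, n)) / np.sqrt(m)

        N = null_space(A)                              # orthonormal basis, shape (n, n - m)
        ratios = []
        for _ in range(2000):
            w = N @ rng.standard_normal(N.shape[1])    # random vector in the null space
            mag = np.sort(np.abs(w))[::-1]
            ratios.append(mag[:k].sum() / mag[k:].sum())

        # Ratios staying well below 1 are consistent with the null space being
        # "balanced" at sparsity level k; a single ratio above 1 would rule it out.
        print("max ratio over samples:", max(ratios))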

    Dense Error Correction via L1-Minimization

    This paper studies the problem of recovering a non-negative sparse signal x ∈ ℝ^n from highly corrupted linear measurements y = Ax + e ∈ ℝ^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, the paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving the ℓ^1-minimization problem min ||x||_1 + ||e||_1 subject to y = Ax + e. More precisely, if the fraction ρ of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then as m goes to infinity, the above ℓ^1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned jointly by the standard cross-polytope and a set of i.i.d. Gaussian vectors with nonzero mean and small variance, which we call the "cross-and-bouquet" model. Simulations and experimental results corroborate the findings and suggest extensions of the result.
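
    The sketch below sets up the stated program, min ||x||_1 + ||e||_1 subject to y = Ax + e with x >= 0, as a linear program on a toy "cross-and-bouquet"-style dictionary (a common mean direction plus small Gaussian perturbations, columns normalized). The dimensions, variance, and corruption fraction are ad hoc assumptions, far from the asymptotic regime analyzed in the paper.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(5)
        m, n, k = 400, 200, 5
        frac_err = 0.4                                     # fraction of corrupted measurements

        mu = rng.standard_normal(m)
        mu /= np.linalg.norm(mu)
        A = mu[:, None] + 0.05 * rng.standard_normal((m, n))   # tightly clustered "bouquet"
        A /= np.linalg.norm(A, axis=0)

        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = 0.5 + rng.random(k)   # non-negative sparse signal
        e = np.zeros(m)
        bad = rng.choice(m, int(frac_err * m), replace=False)
        e[bad] = 10.0 * rng.standard_normal(bad.size)               # gross, unbounded errors
        y = A @ x + e

        # Variables [x; p; q] with e = p - q and x, p, q >= 0, so the objective is
        # ||x||_1 + ||e||_1 and the constraint A x + p - q = y is linear.
        c = np.ones(n + 2 * m)
        A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (n + 2 * m), method="highs")
        x_hat = res.x[:n]

        print("signal error ||x_hat - x||_1 :", np.abs(x_hat - x).sum())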