4,086 research outputs found

    Sharp thresholds for high-dimensional and noisy recovery of sparsity

    The problem of consistently estimating the sparsity pattern of a vector beta* in R^p based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. We analyze the behavior of l_1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result establishes a sharp relation between the problem dimension p, the number k of non-zero elements in beta*, and the number of observations n required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish the existence of, and compute explicit values for, thresholds theta_l and theta_u with the following properties: for any epsilon > 0, if n > 2 (theta_u + epsilon) k log(p - k) + k + 1, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2 (theta_l - epsilon) k log(p - k) + k + 1, the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that theta_l = theta_u = 1, so that the threshold is sharp and exactly determined. Comment: Appeared as Technical Report 708, Department of Statistics, UC Berkeley
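    As a concrete illustration of the threshold the abstract describes, the sketch below runs the Lasso slightly above the n = 2 k log(p - k) + k + 1 scaling for the uniform Gaussian ensemble. The dimensions, noise level, and regularization weight alpha are illustrative assumptions, not the paper's prescribed choices (the paper specifies a particular regularization scaling).

```python
# Minimal sketch (assumed parameters): Lasso support recovery just above the
# n = 2*(theta_u + eps)*k*log(p - k) + k + 1 threshold, uniform Gaussian ensemble.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k = 512, 8                                   # ambient dimension, sparsity
n = int(2 * 1.2 * k * np.log(p - k)) + k + 1    # theta_u = 1, epsilon = 0.2

beta = np.zeros(p)
beta[rng.choice(p, k, replace=False)] = 1.0     # beta*: k non-zero entries
X = rng.standard_normal((n, p)) / np.sqrt(n)    # columns roughly unit-norm
y = X @ beta + 0.1 * rng.standard_normal(n)     # noisy observations

# alpha chosen ad hoc for this sketch; success is a high-probability event,
# so an individual trial can still fail at finite problem sizes.
fit = Lasso(alpha=0.05, fit_intercept=False).fit(X, y)
recovered = set(np.flatnonzero(np.abs(fit.coef_) > 1e-3))
print(recovered == set(np.flatnonzero(beta)))
```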

    Orthogonal Matching Pursuit: A Brownian Motion Analysis

    A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from 4 k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix, with probability approaching one as n approaches infinity. This work strengthens that result by showing that a smaller number of measurements, 2 k log(n - k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies k_min <= k <= k_max but is otherwise unknown, 2 k_max log(n - k_min) measurements are sufficient. Furthermore, this number of measurements also suffices for detecting the sparsity pattern (support) of the vector in the presence of measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling 2 k log(n - k) exactly matches the number of measurements required by the more complex lasso method for signal recovery with a similar SNR scaling. Comment: 11 pages, 2 figures
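    A compact sketch of OMP itself under this Gaussian-measurement setting appears below; the stopping rule of exactly k iterations and the specific dimensions are assumptions for illustration, and since 2 k log(n - k) is an asymptotic sufficiency result, a single finite-size trial can still fail.

```python
# Sketch of orthogonal matching pursuit with m = ceil(2*k*log(n - k))
# noise-free Gaussian measurements (assumed sizes, k known in advance).
import numpy as np

def omp(A, y, k):
    """Greedy OMP: add the column most correlated with the residual,
    then re-fit by least squares on the selected support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, k = 1000, 10
m = int(np.ceil(2 * k * np.log(n - k)))          # the paper's sufficient scaling
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-8))     # True when the support is found
```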

    Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.
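    For concreteness, one common way to score a support estimate is the number of missed detections plus false alarms relative to the true support size; the sketch below uses that measure, though the paper's exact distortion measure may differ.

```python
# Assumed error measure for a support estimate: (missed + false alarms) / k.
def support_error_fraction(s_true, s_est):
    s_true, s_est = set(s_true), set(s_est)
    missed = len(s_true - s_est)         # true positions not detected
    false_alarms = len(s_est - s_true)   # detected positions that are wrong
    return (missed + false_alarms) / len(s_true)

print(support_error_fraction([1, 4, 7, 9], [1, 4, 8, 9]))  # 0.5
```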

    Compressed Sensing over the Grassmann Manifold: A Unified Analytical Framework

    It is well known that compressed sensing problems reduce to finding the sparse solutions of large under-determined systems of equations. Although finding sparse solutions is in general computationally difficult, starting with the seminal work of [2] it has been shown that linear programming techniques, obtained from an l_1-norm relaxation of the original non-convex problem, can provably find the unknown vector in certain instances. In particular, using a certain restricted isometry property, [2] shows that for measurement matrices chosen from a random Gaussian ensemble, l_1 optimization can find the correct solution with overwhelming probability even when the support size of the unknown vector is proportional to its dimension. The paper [1] uses results on neighborly polytopes from [6] to give a "sharp" bound on what this proportionality should be in the Gaussian measurement ensemble. In this paper we focus on finding sharp bounds on the recovery of "approximately sparse" signals (also possibly under noisy measurements). While the restricted isometry property can be used to study the recovery of approximately sparse signals (and also in the presence of noisy measurements), the obtained bounds can be quite loose. On the other hand, the neighborly polytopes technique, which yields sharp bounds for ideally sparse signals, cannot be generalized to approximately sparse signals. In this paper, starting from a necessary and sufficient condition for achieving a certain signal recovery accuracy, and using high-dimensional geometry, we give a unified null-space Grassmannian angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity and the recovery accuracy of l_1 optimization for approximately sparse signals. As it turns out, the neighborly polytopes result of [1] for ideally sparse signals can be viewed as a special case of ours. Our result concerns fundamental properties of linear subspaces and so may be of independent mathematical interest.
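    The framework is built on a null-space condition; for ideally sparse signals the exact-recovery version requires ||w_K||_1 < ||w_{K^c}||_1 for every null-space vector w of the measurement matrix and every support K of size k. The sketch below probes this condition on randomly sampled null-space vectors and supports, which is only a heuristic (the actual condition quantifies over all of them), with all dimensions assumed for illustration.

```python
# Heuristic Monte Carlo probe of the null-space condition for l_1 recovery:
# for w in null(A) and |K| = k, require ||w_K||_1 < ||w_{K^c}||_1.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 60, 120, 10
A = rng.standard_normal((m, n))

# Orthonormal basis of null(A) from the trailing right-singular vectors.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[m:]                                # (n - m) x n

ok = True
for _ in range(2000):
    w = rng.standard_normal(n - m) @ null_basis    # random null-space vector
    K = rng.choice(n, k, replace=False)            # random candidate support
    mask = np.zeros(n, dtype=bool)
    mask[K] = True
    ok = ok and np.abs(w[mask]).sum() < np.abs(w[~mask]).sum()
print(ok)  # True suggests, but does not certify, that the condition holds
```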

    On sharp performance bounds for robust sparse signal recoveries

    It is well known in compressive sensing that l_1 minimization can recover the sparsest solution for a large class of underdetermined systems of linear equations, provided the signal is sufficiently sparse. In this paper, we compute sharp performance bounds for several different notions of robustness in sparse signal recovery via l_1 minimization. In particular, we determine necessary and sufficient conditions on the measurement matrix A under which l_1 minimization guarantees the robustness of sparse signal recovery in the "weak", "sectional", and "strong" senses (e.g., robustness for "almost all" approximately sparse signals, or instead for "all" approximately sparse signals). Based on these characterizations, we are able to compute sharp performance bounds on the tradeoff between signal sparsity and signal recovery robustness in these various senses. Our results are based on a high-dimensional geometrical analysis of the null-space of the measurement matrix A. They generalize the threshold results for purely sparse signals and offer new insights, from a null-space perspective, into l_1 minimization for recovering purely sparse signals.
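    Since the analysis concerns l_1 minimization, a standard way to solve the underlying problem min ||x||_1 subject to Ax = y is the linear-programming reformulation sketched below; this is a generic basis pursuit solver using scipy, not the authors' computation, and the problem sizes are assumed.

```python
# Basis pursuit as an LP: split x = u - v with u, v >= 0 and minimize
# sum(u + v) subject to A(u - v) = y; solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    m, n = A.shape
    c = np.ones(2 * n)                            # objective ||u||_1 + ||v||_1
    A_eq = np.hstack([A, -A])                     # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-6))      # exact recovery for small k
```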
