
    Compressed Sensing over the Grassmann Manifold: A Unified Analytical Framework

    It is well known that compressed sensing problems reduce to finding the sparse solutions of large under-determined systems of equations. Although finding sparse solutions is in general computationally difficult, starting with the seminal work of [2], it has been shown that linear programming techniques, obtained from an $\ell_1$-norm relaxation of the original non-convex problem, can provably find the unknown vector in certain instances. In particular, using a certain restricted isometry property, [2] shows that for measurement matrices chosen from a random Gaussian ensemble, $\ell_1$ optimization can find the correct solution with overwhelming probability even when the support size of the unknown vector is proportional to its dimension. The paper [1] uses results on neighborly polytopes from [6] to give a "sharp" bound on what this proportionality should be in the Gaussian measurement ensemble. In this paper we focus on finding sharp bounds on the recovery of "approximately sparse" signals (possibly under noisy measurements). While the restricted isometry property can be used to study the recovery of approximately sparse signals (and also in the presence of noisy measurements), the resulting bounds can be quite loose. On the other hand, the neighborly polytopes technique, which yields sharp bounds for ideally sparse signals, does not generalize to approximately sparse signals. In this paper, starting from a necessary and sufficient condition for achieving a given signal recovery accuracy, and using high-dimensional geometry, we give a unified null-space Grassmannian angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity and the recovery accuracy of $\ell_1$ optimization for approximately sparse signals. As it turns out, the neighborly polytopes result of [1] for ideally sparse signals can be viewed as a special case of ours. Our result concerns fundamental properties of linear subspaces and so may be of independent mathematical interest.
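
    The decoder analyzed throughout this abstract is plain $\ell_1$ minimization (basis pursuit). As a concrete illustration, here is a minimal sketch, not the paper's code, that recovers a sparse vector from Gaussian measurements by recasting basis pursuit as a linear program; the dimensions and the SciPy solver choice are assumptions.

```python
# Minimal sketch of basis pursuit via linear programming (an illustration,
# not the paper's code): min ||x||_1  s.t.  Ax = y, with A Gaussian.
# All dimensions below are assumptions chosen for a quick demo.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 200, 80, 10                         # ambient dim, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement ensemble
y = A @ x_true

# Standard LP reformulation: x = u - v with u, v >= 0,
# minimize 1'(u + v) subject to [A, -A][u; v] = y.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```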

    Stable image reconstruction using total variation minimization

    This article presents near-optimal guarantees for accurate and robust image recovery from under-sampled noisy measurements using total variation minimization. In particular, we show that from $O(s \log N)$ nonadaptive linear measurements, an image can be reconstructed to within the best $s$-term approximation of its gradient, up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.
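
    For a sense of what the guarantee buys in practice, the following is a hedged sketch (not the authors' implementation) of TV-minimization recovery using CVXPY's tv atom; the image size, measurement count, noise level, and noise budget are all illustrative assumptions.

```python
# Hedged sketch of TV-minimization recovery with CVXPY (assumed available);
# image size, measurement count, and noise budget are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 16, 120                        # n-by-n image, m linear measurements
X_true = np.zeros((n, n))
X_true[4:10, 5:12] = 1.0              # piecewise-constant test image

A = rng.standard_normal((m, n * n)) / np.sqrt(m)
noise = 0.01 * rng.standard_normal(m)
# Column-major flattening to match CVXPY's vec convention.
y = A @ X_true.flatten(order="F") + noise

X = cp.Variable((n, n))
objective = cp.Minimize(cp.tv(X))     # total variation of the image
budget = 1.1 * np.linalg.norm(noise)  # noise level assumed known here
constraints = [cp.norm(A @ cp.vec(X) - y, 2) <= budget]
cp.Problem(objective, constraints).solve()

rel_err = np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true)
print("relative reconstruction error:", rel_err)
```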

    Nonuniform Sparse Recovery with Subgaussian Matrices

    Compressive sensing predicts that sufficiently sparse vectors can be recovered from highly incomplete information. Efficient recovery methods such as $\ell_1$-minimization find the sparsest solution to certain systems of equations. Random matrices have become a popular choice for the measurement matrix, and indeed near-optimal uniform recovery results have been shown for such matrices. In this note we focus on nonuniform recovery using Gaussian random matrices and $\ell_1$-minimization. We provide a condition on the number of samples, in terms of the sparsity and the signal length, which guarantees that a fixed sparse signal can be recovered with a random draw of the matrix using $\ell_1$-minimization. The constant 2 in the condition is optimal, and the proof is considerably shorter than that of a similar result due to Donoho and Tanner.
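
    The nonuniform setting is easy to probe numerically: fix one sparse signal and redraw the Gaussian matrix. The sketch below (an illustration, not the paper's experiment; dimensions, trial counts, and the LP-based solver are assumptions) records how often $\ell_1$-minimization succeeds as the number of samples grows.

```python
# Illustrative experiment, not the paper's: fix one s-sparse signal, redraw
# the Gaussian matrix, and record the empirical recovery rate of
# l1-minimization as the number of samples m grows. Sizes are assumptions.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the LP split x = u - v."""
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
n, s, trials = 100, 5, 20
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = 1.0      # one fixed sparse signal

for m in (15, 25, 35, 45):
    hits = sum(
        np.linalg.norm(basis_pursuit(A, A @ x) - x) < 1e-6
        for A in (rng.standard_normal((m, n)) for _ in range(trials))
    )
    print(f"m={m}: recovered on {hits}/{trials} matrix draws")
```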

    Sharp Time-Data Tradeoffs for Linear Inverse Problems

    In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of measurements and the structural complexity of the signal with respect to the chosen penalty function. The results apply to both convex and nonconvex constraints, demonstrating that a linear convergence rate is attainable even though the least-squares objective is not strongly convex in these settings. When specialized to Gaussian measurements, our results show that such linear convergence occurs when the number of measurements is merely 4 times the minimal number required to recover the desired signal at all (a.k.a. the phase transition). We also achieve a slower but geometric rate of convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
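
    The iteration under study is simple to state concretely. Below is a minimal NumPy sketch of gradient projection for the least-squares objective, specialized to the convex case where the constraint set is an $\ell_1$-ball; the ball radius is assumed known, and the step size and dimensions are illustrative, not the paper's settings.

```python
# Minimal NumPy sketch of the gradient projection iteration
#   x_{k+1} = P_C(x_k - mu * A^T (A x_k - y)),
# specialized to the convex constraint C = {x : ||x||_1 <= r}. The radius r
# is assumed known; step size and dimensions are illustrative assumptions.
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto the l1-ball of radius r."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - r)[0][-1]      # largest active index
    theta = (css[rho] - r) / (rho + 1.0)          # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def gradient_projection(A, y, r, iters=300):
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step 1/L for the LS objective
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1_ball(x - mu * A.T @ (A @ x - y), r)
    return x

rng = np.random.default_rng(3)
n, m, s = 200, 100, 8
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_hat = gradient_projection(A, A @ x_true, r=np.abs(x_true).sum())
print("error:", np.linalg.norm(x_hat - x_true))
```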

    Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-rank Matrices

    This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that for any given constant $t \ge 4/3$, in compressed sensing $\delta_{tk}^A < \sqrt{(t-1)/t}$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through the constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_{tr}^\mathcal{M} < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via the constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_{tr}^\mathcal{M} < \sqrt{(t-1)/t}$ are shown to be sufficient for stable recovery of approximately sparse signals and low-rank matrices, respectively, in the noisy case.
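
    Although computing restricted isometry constants is intractable in general, at toy sizes one can evaluate $\delta_{tk}^A$ by brute force over all supports and check it against the abstract's threshold $\sqrt{(t-1)/t}$. The sketch below does exactly that; the matrix sizes and parameters are assumptions chosen only to keep the enumeration small.

```python
# Toy-scale sketch: delta_k computed by exhaustive search over supports
# (intractable beyond tiny sizes), used to check the condition
# delta_{tk} < sqrt((t-1)/t). All sizes below are assumptions.
import itertools
import numpy as np

def rip_constant(A, k):
    """delta_k = max over |S| = k of the spectral deviation ||A_S' A_S - I||."""
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), k):
        eigs = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
        delta = max(delta, eigs[-1] - 1.0, 1.0 - eigs[0])
    return delta

rng = np.random.default_rng(4)
m, n, k, t = 30, 12, 2, 2                    # deliberately tiny instance
A = rng.standard_normal((m, n)) / np.sqrt(m)

delta = rip_constant(A, t * k)
print(f"delta_{t * k} = {delta:.3f},",
      f"threshold sqrt((t-1)/t) = {np.sqrt((t - 1) / t):.3f}")
```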