
    Sharp RIP Bound for Sparse Signal and Low-Rank Matrix Recovery

    This paper establishes a sharp condition on the restricted isometry property (RIP) for both sparse signal recovery and low-rank matrix recovery. It is shown that if the measurement matrix $A$ satisfies the RIP condition $\delta_k^A < 1/3$, then all $k$-sparse signals $\beta$ can be recovered exactly via the constrained $\ell_1$ minimization based on $y = A\beta$. Similarly, if the linear map $\mathcal{M}$ satisfies the RIP condition $\delta_r^{\mathcal{M}} < 1/3$, then all matrices $X$ of rank at most $r$ can be recovered exactly via the constrained nuclear norm minimization based on $b = \mathcal{M}(X)$. Furthermore, in both cases it is not possible to do so in general when the condition does not hold. In addition, noisy cases are considered and oracle inequalities are given under the sharp RIP condition.
    Comment: to appear in Applied and Computational Harmonic Analysis (2012).
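
    For concreteness, the two recovery programs referred to above can be written, in the noiseless case, as the convex optimization problems

        \hat{\beta} = \arg\min_{\beta} \|\beta\|_1 \ \ \text{subject to}\ \ A\beta = y,
        \qquad
        \hat{X} = \arg\min_{X} \|X\|_* \ \ \text{subject to}\ \ \mathcal{M}(X) = b,

    where $\|X\|_*$ denotes the nuclear norm (the sum of the singular values) of $X$.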

    Noisy Signal Recovery via Iterative Reweighted L1-Minimization

    Compressed sensing has shown that it is possible to reconstruct sparse high-dimensional signals from few linear measurements. In many cases, the solution can be obtained by solving an L1-minimization problem, and this method is accurate even in the presence of noise. Recently, a modified version of this method, reweighted L1-minimization, has been proposed. Although no provable results have yet been attained, empirical studies suggest that the reweighted version outperforms the standard method. Here we analyze the reweighted L1-minimization method in the noisy case and provide provable results showing an improvement in the error bound over the standard bounds.
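
    As a rough illustration of the iterative scheme (a minimal sketch only: the weight update w_i = 1/(|x_i| + delta) is the common reweighting rule of Candès, Wakin and Boyd rather than necessarily the exact scheme analyzed in the paper, and the cvxpy-based solver is an arbitrary choice):

        import numpy as np
        import cvxpy as cp

        def reweighted_l1(A, y, eps, n_iter=5, delta=1e-3):
            """Iteratively reweighted L1-minimization for noisy sparse recovery.

            Repeatedly solves  min ||diag(w) x||_1  s.t.  ||A x - y||_2 <= eps,
            re-deriving the weights from the previous solution each round.
            """
            n = A.shape[1]
            w = np.ones(n)                       # first pass is plain L1-minimization
            x_val = np.zeros(n)
            for _ in range(n_iter):
                x = cp.Variable(n)
                objective = cp.Minimize(cp.norm1(cp.multiply(w, x)))
                constraints = [cp.norm(A @ x - y, 2) <= eps]
                cp.Problem(objective, constraints).solve()
                x_val = x.value
                w = 1.0 / (np.abs(x_val) + delta)   # penalize small entries more
            return x_val

    Each pass is a standard constrained L1 problem, so any off-the-shelf convex solver could replace cvxpy here.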

    Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-rank Matrices

    This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that for any given constant $t \ge 4/3$, in compressed sensing $\delta_{tk}^A < \sqrt{(t-1)/t}$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through the constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via the constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
    Comment: to appear in IEEE Transactions on Information Theory.
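
    For example, taking $t = 2$ the sparse-signal condition specializes to the familiar bound

        \delta_{2k}^A < \sqrt{\tfrac{2-1}{2}} = \tfrac{1}{\sqrt{2}} \approx 0.707,

    so $\delta_{2k}^A < 1/\sqrt{2}$ already guarantees exact recovery of every $k$-sparse signal by constrained $\ell_1$ minimization.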

    TV-min and Greedy Pursuit for Constrained Joint Sparsity and Application to Inverse Scattering

    This paper proposes a general framework for compressed sensing of constrained joint sparsity (CJS) which includes total variation minimization (TV-min) as an example. TV- and 2-norm error bounds, independent of the ambient dimension, are derived for the CJS version of Basis Pursuit and Orthogonal Matching Pursuit. As an application, the results extend Candès, Romberg and Tao's proof of exact recovery of piecewise constant objects with noiseless incomplete Fourier data to the case of noisy data.
    Comment: Mathematics and Mechanics of Complex Systems (2013).
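
    A minimal sketch of the TV-minimization component (assuming, for simplicity, a real-valued measurement matrix A, a one-dimensional signal, and a known noise bound eps; the incomplete-Fourier setting and the joint-sparsity extension of the paper are not reproduced here):

        import cvxpy as cp

        def tv_min(A, b, eps):
            """Basis-Pursuit-style total variation minimization:
            minimize the total variation of x subject to ||A x - b||_2 <= eps."""
            n = A.shape[1]
            x = cp.Variable(n)
            objective = cp.Minimize(cp.tv(x))            # sum of |x[i+1] - x[i]|
            constraints = [cp.norm(A @ x - b, 2) <= eps]
            cp.Problem(objective, constraints).solve()
            return x.value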

    Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit

    This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements -- L_1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has the advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L_1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity (in practice even logarithmic), and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
    Comment: This is the final version of the paper, including referee suggestions.
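
    A simplified numpy sketch of ROMP (the regularization step keeps, among the k largest correlations, the maximal-energy group of coordinates lying within a factor of two of each other; the stopping rules below are cruder than in the paper, and a real-valued measurement matrix is assumed):

        import numpy as np

        def romp(A, y, k):
            """Regularized Orthogonal Matching Pursuit (simplified sketch).

            A : (m, n) real measurement matrix, y : (m,) measurements,
            k : target sparsity level.
            """
            m, n = A.shape
            support = np.array([], dtype=int)
            r = y.astype(float)
            for _ in range(k):
                u = A.T @ r                              # correlations with the residual
                J = np.argsort(-np.abs(u))[:k]           # k largest coordinates, descending
                J = J[np.abs(u[J]) > 0]
                if J.size == 0:
                    break
                # Regularization: among groups whose magnitudes are within a
                # factor of two of each other, keep the largest-energy group.
                mags = np.abs(u[J])
                best_energy, best_group = -1.0, J[:1]
                for i in range(len(J)):
                    j = i
                    while j < len(J) and mags[j] >= mags[i] / 2:
                        j += 1
                    energy = float(np.sum(mags[i:j] ** 2))
                    if energy > best_energy:
                        best_energy, best_group = energy, J[i:j]
                support = np.union1d(support, best_group).astype(int)
                # Least squares on the current support, then update the residual.
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ coef
                if np.linalg.norm(r) < 1e-10 or support.size >= 2 * k:
                    break
            x = np.zeros(n)
            if support.size:
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x[support] = coef
            return x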