
    Compressed Sensing over ℓ_p-Balls: Minimax Mean Square Error

    We consider the compressed sensing problem, where the object x_0 ∈ ℝ^N is to be recovered from incomplete measurements y = A x_0 + z; here the sensing matrix A is an n × N random matrix with iid Gaussian entries and n < N. A popular method of sparsity-promoting reconstruction is ℓ^1-penalized least-squares reconstruction (aka LASSO, Basis Pursuit). It is currently popular to consider the strict sparsity model, where the object x_0 is nonzero in only a small fraction of entries. In this paper, we instead consider the much more broadly applicable ℓ_p-sparsity model, where x_0 is sparse in the sense of having ℓ_p norm bounded by ξ · N^{1/p} for some fixed 0 < p ≤ 1 and ξ > 0. We study an asymptotic regime in which n and N both tend to infinity with limiting ratio n/N = δ ∈ (0,1), both in the noisy (z ≠ 0) and noiseless (z = 0) cases. Under weak assumptions on x_0, we are able to precisely evaluate the worst-case asymptotic minimax mean-squared reconstruction error (AMSE) for ℓ^1-penalized least-squares: min over penalization parameters, max over ℓ_p-sparse objects x_0. We exhibit the asymptotically least-favorable object (hardest sparse signal to recover) and the maximin penalization. Our explicit formulas unexpectedly involve quantities appearing classically in statistical decision theory. Occurring in the present setting, they reflect a deeper connection between penalized ℓ^1 minimization and scalar soft thresholding. This connection, which follows from earlier work of the authors and collaborators on the AMP iterative thresholding algorithm, is carefully explained. Our approach also gives precise results under weak-ℓ_p ball coefficient constraints, as we show here. Comment: 41 pages, 11 pdf figures
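    The connection between penalized ℓ^1 minimization and scalar soft thresholding that the abstract describes can be illustrated with a minimal sketch. ISTA solves the LASSO by alternating a gradient step on the quadratic term with scalar soft thresholding; the problem sizes and the penalty level below are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Scalar soft thresholding: the proximal operator of the l1 penalty."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iters=500):
        """Solve min_x 0.5*||y - Ax||^2 + lam*||x||_1 by iterating a gradient
        step on the quadratic term followed by scalar soft thresholding."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
        return x

    rng = np.random.default_rng(0)
    n, N = 50, 100                               # n < N: incomplete measurements
    A = rng.normal(size=(n, N)) / np.sqrt(n)     # iid Gaussian sensing matrix
    x0 = np.zeros(N)
    x0[:5] = 3.0                                 # a strictly sparse object
    y = A @ x0 + 0.01 * rng.normal(size=n)       # noisy case, z != 0
    x_hat = ista(A, y, lam=0.05)
    err = np.linalg.norm(x_hat - x0)
    ```

    Each iteration is an application of the same scalar nonlinearity that appears in the paper's AMSE formulas, which is what ties the vector problem back to scalar soft thresholding.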

    Atomic norm denoising with applications to line spectral estimation

    Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error (MSE) performance in the presence of noise and without knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials. We show that the associated convex optimization problem can be solved in polynomial time via semidefinite programming (SDP). We also show that the SDP can be approximated by an l1-regularized least-squares problem that achieves nearly the same error rate as the SDP but can scale to much larger problems. We compare both SDP and l1-based approaches with classical line spectral analysis methods and demonstrate that the SDP outperforms the l1 optimization, which outperforms MUSIC, Cadzow's, and Matrix Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios. Comment: 27 pages, 10 figures. A preliminary version of this work appeared in the Proceedings of the 49th Annual Allerton Conference in September 2011. Numerous numerical experiments added to this version in accordance with suggestions by anonymous reviewers.
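    The scalable l1-regularized alternative mentioned in the abstract can be sketched by discretizing the frequency axis onto a grid and fitting a sparse combination of complex exponentials. In this illustrative sketch (the grid size, penalty level, and iteration count are assumptions, and complex iterative soft thresholding stands in for a production solver), a single on-grid tone is recovered from its noisy samples.

    ```python
    import numpy as np

    def line_spectral_l1(y, grid_size=256, lam=0.5, n_iters=300):
        """Approximate line spectral estimation by l1-regularized least squares
        over a fixed grid of candidate frequencies (complex ISTA sketch)."""
        n = len(y)
        freqs = np.arange(grid_size) / grid_size
        F = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))  # exponential dictionary
        L = np.linalg.norm(F, 2) ** 2                            # spectral norm squared
        c = np.zeros(grid_size, dtype=complex)
        for _ in range(n_iters):
            c = c + F.conj().T @ (y - F @ c) / L                 # gradient step
            mag = np.abs(c)
            # complex soft threshold: shrink magnitudes, keep phases
            c = c * (np.maximum(mag - lam / L, 0.0) / np.maximum(mag, 1e-12))
        return freqs, c

    rng = np.random.default_rng(1)
    n, true_f = 64, 0.25                         # one tone, exactly on the grid
    t = np.arange(n)
    y = np.exp(2j * np.pi * true_f * t) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    freqs, c = line_spectral_l1(y)
    f_hat = freqs[np.argmax(np.abs(c))]          # estimated frequency
    ```

    Off-grid frequencies incur a basis-mismatch error that the gridless atomic-norm SDP avoids, which is one reason the SDP outperforms the l1 approach in the paper's comparisons.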

    Approximate Message Passing-based Compressed Sensing Reconstruction with Generalized Elastic Net Prior

    In this paper, we study the compressed sensing reconstruction problem with generalized elastic net prior (GENP), where a sparse signal is sampled via a noisy underdetermined linear observation system, and an additional initial estimate of the signal (the GENP) is available during the reconstruction. We first incorporate the GENP into the LASSO and the approximate message passing (AMP) frameworks, denoted by GENP-LASSO and GENP-AMP respectively. We then focus on GENP-AMP and investigate its parameter selection, state evolution, and noise-sensitivity analysis. A practical parameterless version of the GENP-AMP is also developed, which does not need to know the sparsity of the unknown signal or the variance of the GENP. Simulation results with 1-D data and two different imaging applications are presented to demonstrate the efficiency of the proposed schemes.
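    The base AMP algorithm that GENP-AMP extends can be sketched in a few lines: a matched-filter step, scalar soft thresholding, and the Onsager correction term that distinguishes AMP from plain iterative thresholding. This is a sketch of standard AMP, not the paper's GENP variant; the threshold rule and problem sizes are illustrative assumptions.

    ```python
    import numpy as np

    def soft(x, t):
        """Scalar soft thresholding."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def amp(A, y, alpha=1.5, n_iters=50):
        """Approximate message passing with soft thresholding. The Onsager
        term b*z in the residual update is what makes the effective noise
        approximately Gaussian, so state evolution tracks the iterates."""
        n, N = A.shape
        x = np.zeros(N)
        z = y.copy()
        for _ in range(n_iters):
            tau = np.linalg.norm(z) / np.sqrt(n)   # effective noise level estimate
            x_new = soft(x + A.T @ z, alpha * tau)
            b = np.count_nonzero(x_new) / n        # Onsager correction coefficient
            z = y - A @ x_new + b * z
            x = x_new
        return x

    rng = np.random.default_rng(3)
    n, N, k = 100, 200, 10
    A = rng.normal(size=(n, N)) / np.sqrt(n)       # column-normalized Gaussian matrix
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = 3.0      # k-sparse signal
    y = A @ x0                                     # noiseless measurements
    x_hat = amp(A, y)
    rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
    ```

    GENP-AMP modifies the scalar denoising step to also weigh the initial estimate of the signal; the residual recursion above stays the same.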

    Optimal Phase Transitions in Compressed Sensing

    Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime. Comment: to appear in IEEE Transactions on Information Theory
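    A phase transition of the kind the paper analyzes can be observed empirically for one popular reconstruction algorithm. The sketch below (sizes, sparsity, trial count, and success tolerance are all illustrative assumptions) solves noiseless basis pursuit as a linear program and measures the exact-recovery rate at a measurement rate above and below the ℓ1 transition for this sparsity level.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """min ||x||_1 subject to Ax = y, via the standard LP split x = u - v
        with u, v >= 0 and objective sum(u) + sum(v)."""
        n, N = A.shape
        res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                      bounds=(0, None), method="highs")
        return res.x[:N] - res.x[N:]

    def success_rate(n, N=60, k=3, trials=20, seed=0):
        """Fraction of random k-sparse signals exactly recovered from n
        noiseless Gaussian measurements."""
        rng = np.random.default_rng(seed)
        ok = 0
        for _ in range(trials):
            A = rng.normal(size=(n, N))
            x0 = np.zeros(N)
            x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
            x_hat = basis_pursuit(A, A @ x0)
            ok += np.max(np.abs(x_hat - x0)) < 1e-4   # exact recovery up to tolerance
        return ok / trials

    rate_easy = success_rate(30)   # measurement rate n/N = 0.5: above the transition
    rate_hard = success_rate(6)    # measurement rate n/N = 0.1: below it
    ```

    The paper's point is that such algorithmic thresholds can be strictly worse than the optimal one, which depends only on the input distribution.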