
    Simple Error Bounds for Regularized Noisy Linear Inverse Problems

    Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\{\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function aiming to promote the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed and make no assumptions on the noise vector $\mathbf{z}$, other than it being independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
    Comment: 6 pages, 2 figures
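
    The program above is a standard convex problem, so it can be prototyped directly. Below is a minimal sketch, assuming the cvxpy modeling library and the $\ell_1$-norm as the structure-promoting $f$; the dimensions, sparsity level and $\lambda$ are illustrative choices, not values from the paper.

        import numpy as np
        import cvxpy as cp

        # Illustrative sizes: n-dimensional signal, m < n measurements (assumed values).
        rng = np.random.default_rng(0)
        m, n, k = 80, 200, 10

        # k-sparse ground truth and i.i.d. Gaussian measurement matrix, as in the setup.
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n))
        z = 0.1 * rng.standard_normal(m)   # noise, independent of A
        y = A @ x0 + z

        # Lasso variant from the abstract: note the *unsquared* l2 residual term.
        lam = 1.5                          # regularization weight (assumed value)
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm1(x))).solve()

        # The paper bounds this quantity via the Gaussian squared distance parameter.
        print("estimation error:", np.linalg.norm(x.value - x0))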

    Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information

    This paper considers the linear inverse problem where we wish to estimate a structured signal $x$ from its corrupted observations. When the problem is ill-posed, it is natural to make use of a convex function $f(\cdot)$ that exploits the structure of the signal. For example, the $\ell_1$ norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) the second-order cone program (SOCP), and 2) the lasso. Assuming Gaussian measurements, we show that, if precise information about the value $f(x)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the "sparsity" of the signal rather than the ambient dimension of the noise vector. We connect our results to existing works and provide a discussion on the relation of our results to the standard least-squares problem. Our error bounds are non-asymptotic and sharp; they apply to arbitrary convex functions and do not assume any distribution on the noise.
    Comment: 13 pages
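
    To make the two programs concrete, here is a minimal sketch, again assuming cvxpy and the $\ell_1$ norm as $f$; the exact noise norm $\|z\|_2$ and the exact value $f(x_0)$ serve as the side information, and all sizes are illustrative assumptions rather than values from the paper.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        m, n, k = 80, 200, 10
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n))    # Gaussian measurements
        z = 0.1 * rng.standard_normal(m)
        y = A @ x0 + z

        x = cp.Variable(n)

        # 1) SOCP: exact knowledge of the noise norm ||z||_2 enters as a constraint.
        socp = cp.Problem(cp.Minimize(cp.norm1(x)),
                          [cp.norm(y - A @ x, 2) <= np.linalg.norm(z)])
        socp.solve()
        print("SOCP error :", np.linalg.norm(x.value - x0))

        # 2) Lasso, constrained form: exact knowledge of f(x0) = ||x0||_1 enters instead.
        lasso = cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2)),
                           [cp.norm1(x) <= np.linalg.norm(x0, 1)])
        lasso.solve()
        print("lasso error:", np.linalg.norm(x.value - x0))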

    Isotropically Random Orthogonal Matrices: Performance of LASSO and Minimum Conic Singular Values

    Recently, the precise performance of the Generalized LASSO algorithm for recovering structured signals from compressed noisy measurements, obtained via i.i.d. Gaussian matrices, has been characterized. The analysis is based on a framework introduced by Stojnic and relies heavily on Gordon's Gaussian min-max theorem (GMT), a comparison principle for Gaussian processes. As a result, corresponding characterizations for other ensembles of measurement matrices have not been developed. In this work, we analyze the corresponding performance of the ensemble of isotropically random orthogonal (i.r.o.) measurements. We consider the constrained version of the Generalized LASSO and derive a sharp characterization of its normalized squared error in the large-system limit. When compared to its Gaussian counterpart, our result analytically confirms the superiority in performance of the i.r.o. ensemble. Our second result derives an asymptotic lower bound on the minimum conic singular values of i.r.o. matrices; this bound is larger than the corresponding bound for Gaussian matrices. To prove our results we express i.r.o. matrices in terms of Gaussians and show that, with some modifications, the GMT framework is still applicable.
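
    The following sketch illustrates the i.r.o. ensemble and the constrained Generalized LASSO discussed above; it assumes cvxpy, takes $f$ to be the $\ell_1$ norm, and generates i.r.o. rows as the first rows of a Haar-distributed orthogonal matrix (one standard construction, not taken from the paper).

        import numpy as np
        import cvxpy as cp

        def iro_rows(m, n, rng):
            # m rows of a Haar-distributed n x n orthogonal matrix, obtained
            # via QR of a Gaussian matrix with the usual sign correction.
            Q, R = np.linalg.qr(rng.standard_normal((n, n)))
            Q *= np.sign(np.diag(R))
            return Q[:m, :]

        rng = np.random.default_rng(2)
        m, n, k = 80, 200, 10
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = iro_rows(m, n, rng)            # i.r.o. measurement matrix
        y = A @ x0 + 0.1 * rng.standard_normal(m)

        # Constrained Generalized LASSO: minimize the residual s.t. f(x) <= f(x0).
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2)),
                   [cp.norm1(x) <= np.linalg.norm(x0, 1)]).solve()
        print("i.r.o. error:", np.linalg.norm(x.value - x0))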