
    Quadratically fast IRLS for sparse signal recovery

    We present a new class of iterative algorithms for sparse recovery problems that combine iterative support detection and estimation. More precisely, these methods use a two-state Gaussian scale mixture as a proxy for the signal model and can be interpreted both as iteratively reweighted least squares (IRLS) and as Expectation-Maximization (EM) algorithms for the constrained maximization of the log-likelihood function. Under certain conditions, these methods are proved to converge to a sparse solution and to be quadratically fast in a neighborhood of that sparse solution, outperforming classical IRLS for ℓ_p-minimization. Numerical experiments validate the theoretical derivations and show that these new reconstruction schemes outperform classical IRLS for ℓ_p-minimization with p ∈ (0, 1] in terms of rate of convergence and sparsity-undersampling tradeoff.
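
    For context, the classical ℓ_p IRLS baseline that this abstract compares against admits a compact implementation. The Python sketch below uses the standard epsilon-smoothed weights of Daubechies et al. and Chartrand-Yin; the geometric epsilon decay, the parameter defaults, and the function name irls_lp are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def irls_lp(A, b, p=1.0, eps=1.0, n_iter=50):
            # Classical IRLS sketch for min ||x||_p^p subject to A x = b,
            # with epsilon-smoothed weights (illustrative baseline only).
            x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares start
            for _ in range(n_iter):
                # w_i = (x_i^2 + eps^2)^(p/2 - 1): small entries get large weights
                w = (x**2 + eps**2) ** (p / 2.0 - 1.0)
                D = np.diag(1.0 / w)
                # weighted least squares under A x = b has the closed form
                # x = D A^T (A D A^T)^{-1} b
                x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
                eps = max(0.9 * eps, 1e-9)  # assumed smoothing schedule
            return x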

    Fast IRLS for sparse reconstruction based on Gaussian mixtures

    The theory of compressed sensing has demonstrated that sparse signals can be reconstructed from few linear measurements. In this work, we propose a new class of iteratively reweighted least squares (IRLS) algorithms for sparse recovery. The proposed methods use a two-state Gaussian scale mixture as a proxy for the signal model and can be interpreted as an Expectation-Maximization (EM) algorithm that attempts to perform the constrained maximization of the log-likelihood function. Under some conditions, standard in compressed sensing theory, the sequences generated by these algorithms converge to the fixed points of the maps that rule their dynamics. A condition for exact sparse recovery, verifiable a posteriori, is derived, and convergence is proved to be quadratically fast in a neighborhood of the desired solution. Numerical experiments show that these new reconstruction schemes outperform classical IRLS for ℓ_p-minimization with p ∈ (0, 1] in terms of rate of convergence and accuracy.
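
    The abstract does not spell out the update rule, so the following Python sketch is only a plausible reading of the EM interpretation: under a hypothetical two-state prior x_i ~ (1 - theta) N(0, s_off^2) + theta N(0, s_on^2), the E-step computes per-coefficient responsibilities for the large-variance state and the M-step reduces to a reweighted least-squares solve. Every parameter value and name here is an assumption for illustration.

        import numpy as np

        def gaussian_pdf(x, s):
            # zero-mean Gaussian density N(0, s^2), evaluated elementwise
            return np.exp(-0.5 * (x / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

        def irls_gsm(A, b, theta=0.1, s_on=1.0, s_off=1e-3, n_iter=50):
            # Hypothetical EM/IRLS sketch under a two-state Gaussian scale mixture
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(n_iter):
                # E-step: responsibility of the large-variance ("support") state
                p_on = theta * gaussian_pdf(x, s_on)
                p_off = (1.0 - theta) * gaussian_pdf(x, s_off)
                g = p_on / (p_on + p_off)
                # expected prior precision serves as the IRLS weight
                w = g / s_on**2 + (1.0 - g) / s_off**2
                D = np.diag(1.0 / w)
                # M-step: weighted least squares subject to A x = b
                x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
            return x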

    Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low-rank matrix recovery, with applications in image recovery and signal processing. However, solving the relaxed convex problem based on the nuclear norm usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to apply a family of nonconvex surrogates of the L_0-norm to the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. We then propose to solve the problem by the Iteratively Reweighted Nuclear Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem, which has a closed-form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically and that any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN improves low-rank matrix recovery compared with state-of-the-art convex algorithms.
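
    The IRNN iteration itself is short. The sketch below instantiates it for matrix completion with the log surrogate g(s) = lam * log(s + gamma), one member of the concave surrogate family this line of work considers; the step size mu, the surrogate choice, and the parameter defaults are assumptions for illustration. Each iteration takes a gradient step on the data-fitting term and then solves the WSVT subproblem in closed form.

        import numpy as np

        def irnn_completion(M_obs, mask, lam=1.0, gamma=1.0, mu=1.5, n_iter=200):
            # IRNN sketch for matrix completion with the (assumed) concave
            # surrogate g(s) = lam * log(s + gamma) on the singular values.
            X = np.zeros_like(M_obs)
            for _ in range(n_iter):
                # weights = supergradient of the surrogate at the current
                # singular values; non-increasing since the surrogate is concave
                s = np.linalg.svd(X, compute_uv=False)
                w = lam / (s + gamma)
                # gradient step on f(X) = 0.5 * ||mask * (X - M_obs)||_F^2
                # with step size 1/mu (grad f is 1-Lipschitz, so take mu > 1)
                G = X - (1.0 / mu) * mask * (X - M_obs)
                # WSVT: closed-form weighted shrinkage of G's singular values
                U, sg, Vt = np.linalg.svd(G, full_matrices=False)
                X = U @ np.diag(np.maximum(sg - w / mu, 0.0)) @ Vt
            return X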