
    Gaussian Mixtures Based IRLS for Sparse Recovery With Quadratic Convergence

    In this paper, we propose a new class of iteratively reweighted least squares (IRLS) methods for sparse recovery problems. The proposed methods are inspired by constrained maximum-likelihood estimation under a Gaussian scale mixture (GSM) distribution assumption. In the noise-free setting, we provide sufficient conditions ensuring the convergence of the sequences generated by these algorithms to the set of fixed points of the maps that rule their dynamics, and we derive conditions, verifiable a posteriori, for convergence to a sparse solution. We further prove that these algorithms are quadratically fast in a neighborhood of a sparse solution. We show through numerical experiments that the proposed methods outperform classical IRLS for ℓ_p-minimization with p ∈ (0, 1] in terms of speed and of sparsity-undersampling tradeoff, and that they are robust even in the presence of noise. The simplicity and the theoretical guarantees provided in this paper make this class of algorithms an attractive solution for sparse recovery problems.
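    For concreteness, here is a minimal sketch of the classical IRLS baseline for ℓ_p-minimization that the abstract compares against (not the proposed GSM-based scheme, whose update rules are not given here). The ε-smoothing schedule, the function name, and the full-row-rank assumption on A are illustrative choices:

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=50, eps=1.0):
    """Classical IRLS for l_p-minimization in the noise-free setting:
    min ||x||_p^p  s.t.  Ax = y.  Each iterate is a weighted min-norm
    least-squares solution in closed form.  Assumes A has full row rank."""
    x = A.T @ np.linalg.solve(A @ A.T, y)        # minimum-l2-norm start
    for _ in range(n_iter):
        # smoothed l_p weights w_i = (x_i^2 + eps^2)^(p/2 - 1)
        w = (x**2 + eps**2) ** (p / 2 - 1)
        Winv = 1.0 / w
        # x = W^{-1} A^T (A W^{-1} A^T)^{-1} y   -- weighted min-norm solve
        x = Winv * (A.T @ np.linalg.solve((A * Winv) @ A.T, y))
        eps = max(eps / 10, 1e-12)               # anneal the smoothing
    return x
```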

    Quadratically fast IRLS for sparse signal recovery

    We present a new class of iterative algorithms for sparse recovery problems that combine iterative support detection and estimation. More precisely, these methods use a two-state Gaussian scale mixture as a proxy for the signal model and can be interpreted both as iteratively reweighted least squares (IRLS) and as Expectation-Maximization (EM) algorithms for the constrained maximization of the log-likelihood function. Under certain conditions, these methods are proved to converge to a sparse solution and to be quadratically fast in a neighborhood of that sparse solution, outperforming classical IRLS for ℓ_p-minimization. Numerical experiments validate the theoretical derivations and show that these new reconstruction schemes outperform classical IRLS for ℓ_p-minimization with p ∈ (0, 1] in terms of rate of convergence and sparsity-undersampling tradeoff.
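    The IRLS/EM correspondence the abstract mentions can be illustrated with the E-step of a generic two-state Gaussian scale mixture: each coefficient is drawn from a small-variance ("off-support") or large-variance ("on-support") Gaussian, and the posterior state probabilities induce the least-squares weights. The specific variances, mixing weight, and function name below are illustrative assumptions, not the paper's:

```python
import numpy as np

def gsm_em_weights(x, sigma_small=1e-3, sigma_big=1.0, pi_big=0.1):
    """Illustrative E-step for a two-state Gaussian scale mixture prior:
    x_i ~ N(0, sigma_big^2) with prob. pi_big (on-support), else
    N(0, sigma_small^2).  Returns soft support probabilities r and the
    induced IRLS weights E[1/variance | x_i]."""
    def gauss(v, s):
        return np.exp(-0.5 * (v / s) ** 2) / (np.sqrt(2 * np.pi) * s)
    num = pi_big * gauss(x, sigma_big)
    den = num + (1 - pi_big) * gauss(x, sigma_small)
    r = num / np.maximum(den, 1e-300)                 # P(on-support | x_i)
    w = r / sigma_big**2 + (1 - r) / sigma_small**2   # per-coefficient weights
    return r, w
```

    The M-step would then solve the weighted least-squares problem with these weights, as in the classical IRLS sketch above.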

    Distributed estimation from relative measurements of heterogeneous and uncertain quality

    This paper studies the problem of estimation from relative measurements in a graph, in which a vector indexed over the nodes has to be reconstructed from pairwise measurements of differences between its components associated with nodes connected by an edge. In order to model heterogeneity and uncertainty of the measurements, we assume them to be affected by additive noise distributed according to a Gaussian mixture. In this original setup, we formulate the problem of computing the Maximum-Likelihood (ML) estimates and we design two novel algorithms based on Least Squares regression and Expectation-Maximization (EM). The first algorithm (LS-EM) is centralized and performs the estimation from relative measurements, the soft classification of the measurements, and the estimation of the noise parameters. The second algorithm (Distributed LS-EM) is distributed and performs estimation and soft classification of the measurements, but requires knowledge of the noise parameters. We provide rigorous proofs of convergence for both algorithms and present numerical experiments to evaluate and compare their performance with classical solutions. The experiments show the robustness of the proposed methods against different kinds of noise and, for the Distributed LS-EM, against errors in the knowledge of the noise parameters.
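    A minimal sketch of one centralized LS-EM-style iteration in the setting of the Distributed LS-EM variant (mixture parameters assumed known, not re-estimated); the graph encoding, the anchoring of node 0, and the function name are assumptions for illustration:

```python
import numpy as np

def ls_em_relative(edges, b, n, sigmas, pis, n_iter=20):
    """Estimation from relative measurements b_e ~ x_i - x_j over a
    connected graph, with Gaussian-mixture edge noise (std devs `sigmas`,
    mixing weights `pis`, both known here).  Node 0 is fixed to 0 to
    remove the translation ambiguity."""
    E = len(edges)
    B = np.zeros((E, n))                         # incidence matrix
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = 1.0, -1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        r = b - B @ x                            # edge residuals
        # E-step: per-edge responsibilities of each mixture component
        lik = np.array([pis[k] * np.exp(-0.5 * (r / sigmas[k])**2) / sigmas[k]
                        for k in range(len(sigmas))])
        resp = lik / np.maximum(lik.sum(axis=0), 1e-300)
        # M-step: weighted LS with per-edge precision E[1/sigma^2 | r_e]
        w = (resp / np.asarray(sigmas)[:, None]**2).sum(axis=0)
        H = B.T @ (B * w[:, None])               # B^T W B
        g = B.T @ (w * b)                        # B^T W b
        x = np.zeros(n)                          # anchor x_0 = 0
        x[1:] = np.linalg.solve(H[1:, 1:], g[1:])
    return x
```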

    Robust computation of linear models by convex relaxation

    Consider a dataset of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called REAPER, that can reliably fit a low-dimensional model to this type of data. The approach parameterizes linear subspaces using orthogonal projectors, and it relaxes the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the REAPER problem, and it documents numerical experiments confirming that REAPER can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, a rigorous theory describes when REAPER can approximate this subspace.
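    The REAPER program as described can be written down directly. The sketch below uses cvxpy (an assumed dependency) with a generic SDP solver in place of the paper's efficient specialized algorithm, then rounds the relaxed solution back to a true rank-d projector:

```python
import numpy as np
import cvxpy as cp

def reaper(X, d):
    """Fit a d-dimensional linear model to rows of X (shape n x D) by
    minimizing sum_i ||x_i - P x_i||_2 over the relaxed projector set
    {P symmetric : 0 <= P <= I, tr P = d}, then project back to a
    genuine orthogonal projector via the top-d eigenvectors."""
    n, D = X.shape
    P = cp.Variable((D, D), symmetric=True)
    cost = sum(cp.norm(X[i] - P @ X[i]) for i in range(n))
    cons = [P >> 0, np.eye(D) - P >> 0, cp.trace(P) == d]
    cp.Problem(cp.Minimize(cost), cons).solve()   # generic SDP solver
    vals, vecs = np.linalg.eigh(P.value)
    U = vecs[:, -d:]                              # top-d eigenvectors
    return U @ U.T
```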

    Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

    Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction, or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, which is unrelated to the density's mode. We propose a scalable algorithmic framework with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and it has important implications for compressive sensing of real-world images.
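    The covariance-driven design loop can be illustrated in a deliberately simplified setting: an exactly Gaussian linear model, where the posterior covariance and its rank-one updates are available in closed form. The paper's actual contribution is approximating such covariance queries for sparse (non-Gaussian) priors at scale; the function name and scoring rule below are an illustrative stand-in:

```python
import numpy as np

def greedy_design(candidates, Sigma, sigma_noise=1.0, k=5):
    """Greedy Bayesian experimental design for a Gaussian linear model:
    repeatedly pick the candidate measurement row `a` with the largest
    information gain, then update the posterior covariance Sigma with a
    rank-one (Sherman-Morrison) step."""
    chosen = []
    for _ in range(k):
        # information gain of adding row a: 0.5*log(1 + a^T Sigma a / s^2)
        scores = [0.5 * np.log1p(a @ Sigma @ a / sigma_noise**2)
                  for a in candidates]
        j = int(np.argmax(scores))
        a = candidates[j]
        Sa = Sigma @ a
        Sigma = Sigma - np.outer(Sa, Sa) / (sigma_noise**2 + a @ Sa)
        chosen.append(j)
    return chosen, Sigma
```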

    Applying Compactness Constraints to Differential Traveltime Tomography

    Tomographic imaging problems are typically ill-posed and often require the use of regularization techniques to guarantee a stable solution. Minimization of a weighted norm of model length is one commonly used secondary constraint. Tikhonov methods exploit low-order differential operators to select for solutions that are small, flat, or smooth in one or more dimensions. This class of regularizing functionals may not always be appropriate, particularly in cases where the anomaly being imaged is generated by a non-smooth spatial process. Time-lapse imaging of flow-induced velocity anomalies is one such case; flow features are often characterized by spatial compactness or connectivity. By performing inversions on differenced arrival-time data, the properties of the time-lapse feature can be directly constrained. We develop a differential traveltime tomography algorithm which selects for compact solutions, i.e., models with a minimum area of support, through application of model-space iteratively reweighted least squares. Our technique is an adaptation of minimum-support regularization methods previously explored within the potential theory community. We compare our inversion algorithm to the results obtained by traditional Tikhonov regularization for two simple synthetic models: one including several sharp localized anomalies, and a second with smoother features. We use a more complicated synthetic test case based on multiphase flow results to illustrate the efficacy of compactness constraints for contaminant infiltration imaging. We conclude by applying the algorithm to a CO2 sequestration monitoring dataset acquired at the Frio pilot site. We observe that in cases where the assumption of a localized anomaly is correct, the addition of compactness constraints improves image quality by reducing tomographic artifacts and spatial smearing of target features.
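    A minimal sketch of the minimum-support mechanism described above: the stabilizer sum_i m_i^2 / (m_i^2 + beta^2) smoothly counts the model's area of support, and minimizing it by model-space IRLS amounts to repeatedly solving Tikhonov-like systems with weights 1/(m_i^2 + beta^2). The damping parameter, the linearized forward operator G, and the function name are illustrative assumptions:

```python
import numpy as np

def compact_inversion(G, d, beta=1e-3, lam=1.0, n_iter=10):
    """Model-space IRLS with a minimum-support penalty.
    G: linearized tomography kernel (rays x cells),
    d: differenced traveltime data.  Each pass re-solves a damped
    normal-equations system whose weights shrink cells with small
    model values toward zero, favoring compact anomalies."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        w = 1.0 / (m**2 + beta**2)        # minimum-support reweighting
        H = G.T @ G + lam * np.diag(w)    # damped normal equations
        m = np.linalg.solve(H, G.T @ d)
    return m
```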