
    Computational Methods for Sparse Solution of Linear Inverse Problems

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
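
    As an illustration of the greedy pursuit family such surveys cover, here is a minimal orthogonal matching pursuit (OMP) sketch in NumPy. The dictionary D, target y, and sparsity level k are illustrative inputs, not taken from the paper.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse approximation: choose k columns (atoms) of D
    whose linear combination best approximates y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Toy usage: recover a 3-sparse vector from a random normalized dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[5, 40, 100]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, k=3)
```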

    Finding sparse solutions of systems of polynomial equations via group-sparsity optimization

    The paper deals with the problem of finding sparse solutions to systems of polynomial equations, possibly perturbed by noise. In particular, we show how these solutions can be recovered from group-sparse solutions of a derived system of linear equations. Two approaches are then considered for finding these group-sparse solutions. The first is based on a convex relaxation, resulting in a second-order cone programming formulation that can benefit from efficient reweighting techniques for sparsity enhancement. For this approach, sufficient conditions for exact recovery of the sparsest solution to the polynomial system are derived in the noiseless setting, while stable recovery results are obtained for the noisy case. Though lacking a similar analysis, the second approach provides a more computationally efficient algorithm based on a greedy strategy that adds the groups one by one. With respect to previous work, the proposed methods recover the sparsest solution in a very short computing time while remaining at least as accurate in terms of the probability of success. This probability is empirically analyzed to emphasize the relationship between the ability of the methods to solve the polynomial system and the sparsity of the solution.
    Comment: Journal of Global Optimization (2014), to appear.
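
    The convex relaxation above amounts to penalizing a sum of per-group Euclidean norms to promote group sparsity. Below is a minimal proximal-gradient (ISTA-style) sketch of that group-sparse penalty; it is a simplification, not the paper's second-order cone program or its reweighting scheme, and the groups, lam, and iteration budget are illustrative.

```python
import numpy as np

def group_ista(A, b, groups, lam=0.1, iters=500):
    """Proximal gradient for  min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2,
    where `groups` is a list of index lists partitioning the coordinates."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))  # gradient step on the data term
        for g in groups:                    # block soft-thresholding (prox)
            ng = np.linalg.norm(z[g])
            z[g] = 0.0 if ng == 0 else max(0.0, 1.0 - step * lam / ng) * z[g]
        x = z
    return x

# Toy usage: 8 groups of 4 coordinates each.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 32))
groups = [list(range(i, i + 4)) for i in range(0, 32, 4)]
x = group_ista(A, rng.standard_normal(20), groups)
```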

    Phase Retrieval with Application to Optical Imaging

    This review article provides a contemporary overview of phase retrieval in optical imaging, linking the relevant optical physics to the information-processing methods and algorithms. Its purpose is to describe the current state of the art in this area, identify challenges, and suggest a vision for areas where signal processing methods can have a large impact on optical imaging and on the world of imaging at large, with applications in a variety of fields ranging from biology and chemistry to physics and engineering.
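
    For orientation, the oldest and simplest algorithms in this literature are alternating projections. Below is a minimal NumPy sketch of an error-reduction (Gerchberg-Saxton-style) iteration; the real, non-negative object constraint and fully sampled Fourier magnitudes are simplifying assumptions for illustration, not claims about this particular review.

```python
import numpy as np

def error_reduction(fourier_mag, n_iters=200, seed=0):
    """Alternating-projection phase retrieval from Fourier magnitudes,
    assuming the underlying image is real and non-negative."""
    rng = np.random.default_rng(seed)
    x = rng.random(fourier_mag.shape)  # random initial guess
    for _ in range(n_iters):
        X = np.fft.fft2(x)
        # Fourier-domain projection: keep the phase, impose measured magnitude.
        X = fourier_mag * np.exp(1j * np.angle(X))
        # Object-domain projection: real and non-negative.
        x = np.clip(np.fft.ifft2(X).real, 0.0, None)
    return x
```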

    Phase Retrieval From Binary Measurements

    We consider the problem of signal reconstruction from quadratic measurements that are encoded as +1 or -1 depending on whether they exceed a predetermined positive threshold. Binary measurements are fast to acquire and inexpensive in terms of hardware. We formulate the problem of signal reconstruction using a consistency criterion, wherein one seeks a signal that is in agreement with the measurements. To enforce consistency, we construct a convex cost using a one-sided quadratic penalty and minimize it using an accelerated projected gradient-descent (APGD) technique. The PGD scheme reduces the cost function in each iteration; incorporating momentum forfeits this descent property but empirically exhibits faster convergence than plain PGD. We refer to the resulting algorithm as binary phase retrieval (BPR). Considering additive white noise contamination prior to quantization, we also derive the Cramér-Rao bound (CRB) for the binary encoding model. Experimental results demonstrate that the BPR algorithm yields a signal-to-reconstruction error ratio (SRER) of approximately 25 dB in the absence of noise. In the presence of noise prior to quantization, the SRER is within 2 to 3 dB of the CRB.
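
    A minimal sketch of the consistency idea: gradient descent with heavy-ball momentum on the one-sided quadratic penalty, where b[i] = +1 if the i-th quadratic measurement (a_i^T x)^2 exceeds the threshold tau and -1 otherwise. The step size, momentum parameter, and omission of the projection step are simplifying assumptions; this is not the authors' exact BPR implementation.

```python
import numpy as np

def bpr_momentum(A, b, tau, n_iters=1000, step=1e-3, beta=0.9, seed=0):
    """Minimize sum_i max(0, -b_i * ((a_i^T x)^2 - tau))^2, a one-sided
    quadratic penalty that is zero whenever x is consistent with b."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    v = np.zeros_like(x)
    for _ in range(n_iters):
        z = A @ x
        slack = b * (z ** 2 - tau)      # >= 0 exactly when consistent
        viol = np.maximum(0.0, -slack)  # active one-sided penalty terms
        grad = 4.0 * (A.T @ (viol * (-b) * z))  # chain rule through penalty
        v = beta * v - step * grad      # heavy-ball momentum update
        x = x + v
    return x
```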

    Undersampled Phase Retrieval with Outliers

    We propose a general framework for reconstructing transform-sparse images from undersampled (squared-)magnitude data corrupted with outliers. This framework is implemented using a multi-layered approach, combining multiple initializations (to address the nonconvexity of the phase retrieval problem), repeated minimization of a convex majorizer (a surrogate for the nonconvex objective function), and iterative optimization using the alternating directions method of multipliers (ADMM). Exploiting the generality of this framework, we investigate a Laplace measurement-noise model, which is better adapted to the outliers present in the data than the conventional Gaussian noise model. Using simulations, we explore the sensitivity of the method to both the regularization and penalty parameters. We include 1D Monte Carlo and 2D image-reconstruction comparisons with alternative phase retrieval algorithms. The results suggest that the proposed method with the Laplace noise model both increases the likelihood of correct support recovery and reduces the mean squared error when the measurements contain outliers. We also describe exciting extensions made possible by the generality of the proposed framework, including regularization using analysis-form sparsity priors that are incompatible with many existing approaches.
    Comment: 11 pages, 9 figures.
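
    The practical difference between the Gaussian and Laplace noise models shows up in the per-measurement proximal update used inside splitting methods such as ADMM. A minimal sketch of the two textbook proximal operators (generic forms, not the paper's exact update rules):

```python
import numpy as np

def prox_laplace(r, t):
    """Prox of t*||r||_1 (Laplace data fit): soft-thresholding.
    Large residuals are shrunk by a constant, so an outlier's
    influence on the update is bounded."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def prox_gaussian(r, t):
    """Prox of t*0.5*||r||^2 (Gaussian data fit): uniform shrinkage.
    The update scales with the residual, so outliers pull hard."""
    return r / (1.0 + t)
```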

    Recovery of binary sparse signals from compressed linear measurements via polynomial optimization

    The recovery of signals with finite-valued components from few linear measurements is a problem with widespread applications and interesting mathematical characteristics. In the compressed sensing framework, tailored methods have recently been proposed to deal with the case of finite-valued sparse signals. In this work, we focus on binary sparse signals and propose a novel formulation based on polynomial optimization. This approach is analyzed and compared to state-of-the-art binary compressed sensing methods.
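
    For context, a common baseline for binary sparse recovery is the box-constrained linear-programming relaxation sketched below with SciPy's linprog; the rounding step is an illustrative heuristic. The paper's polynomial-optimization formulation instead encodes the binary constraint exactly through x_i^2 - x_i = 0.

```python
import numpy as np
from scipy.optimize import linprog

def binary_cs_box_lp(A, y):
    """Baseline relaxation:  min sum(x)  s.t.  A x = y,  0 <= x <= 1.
    Since x >= 0, sum(x) equals the l1 norm; the solution is rounded
    to {0, 1} as a simple heuristic."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=y,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return np.round(res.x)
```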

    Learning Model-Based Sparsity via Projected Gradient Descent

    Several convex formulation methods have been proposed previously for statistical estimation with structured sparsity as the prior. These methods often require a carefully tuned regularization parameter, which can be a cumbersome or heuristic exercise. Furthermore, the estimate that these methods produce might not belong to the desired sparsity model, even if it accurately approximates the true parameter. Therefore, greedy-type algorithms can be more desirable for estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. Provided that the cost function has a Stable Model-Restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
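
    A minimal sketch of this projected-gradient template, instantiated with plain k-sparsity as the structured model (so hard thresholding is the exact model projection) and a logistic-loss gradient as the GLM example. The function names, step size, and iteration count are illustrative, not the paper's.

```python
import numpy as np

def hard_threshold(x, k):
    """Projection onto the (non-convex) set of k-sparse vectors:
    keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def sparse_pgd(grad_f, x0, k, step=0.1, n_iters=200):
    """Projected gradient descent over the sparsity model: take a
    gradient step on the (possibly non-quadratic) cost, then project
    back onto the model set."""
    x = hard_threshold(x0, k)
    for _ in range(n_iters):
        x = hard_threshold(x - step * grad_f(x), k)
    return x

def logistic_grad(A, y):
    """Gradient of the logistic GLM loss: A^T (sigmoid(Ax) - y), y in {0,1}."""
    return lambda x: A.T @ (1.0 / (1.0 + np.exp(-(A @ x))) - y)
```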