
    Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms

    This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria based on the notions of stationarity and coordinate-wise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method, while the remaining two are of coordinate descent type. The theoretical convergence of these methods and their relations to the derived optimality conditions are studied. The algorithms and results are illustrated by several numerical examples.
    Comment: submitted to SIAM Optimization
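    The iterative hard thresholding step described in this abstract lends itself to a compact implementation: a gradient step followed by projection onto the set of s-sparse vectors. Below is a minimal sketch in Python with NumPy; the stepsize rule, iteration count, and the least-squares test problem are illustrative assumptions, not details taken from the paper.

```python
# Minimal IHT sketch for min f(x) s.t. ||x||_0 <= s, with f
# continuously differentiable. The stepsize and test problem
# are hypothetical choices for illustration.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x; zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(grad_f, x0, s, t, iters=500):
    """Gradient step followed by projection onto the sparsity set."""
    x = hard_threshold(x0, s)
    for _ in range(iters):
        x = hard_threshold(x - t * grad_f(x), s)
    return x

# Usage on a sparse least-squares instance f(x) = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [1.5, -2.0, 0.7]
b = A @ x_true
t = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
x_hat = iht(lambda x: A.T @ (A @ x - b), np.zeros(100), s=3, t=t)
```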

    Analysis and Synthesis Prior Greedy Algorithms for Non-linear Sparse Recovery

    In this work we address the problem of recovering sparse solutions to nonlinear inverse problems. We look at two variants of the basic problem: the synthesis prior problem, where the solution itself is sparse, and the analysis prior problem, where the solution is cosparse in some linear basis. For the first problem, we propose nonlinear variants of the Orthogonal Matching Pursuit (OMP) and CoSaMP algorithms; for the second, we propose a nonlinear variant of the Greedy Analysis Pursuit (GAP) algorithm. We empirically test the success rates of our algorithms on exponential and logarithmic functions. We model speckle denoising as a nonlinear sparse recovery problem and apply our technique to solve it. Results show that our method outperforms state-of-the-art methods in ultrasound speckle denoising.
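    To make the greedy idea concrete, here is a hedged sketch of an OMP-style loop for a nonlinear data-misfit term: each pass selects the coordinate with the largest gradient magnitude and then re-fits the nonlinear model over the current support. The residual/Jacobian interface and the use of SciPy's least_squares solver are assumptions for illustration; the paper's exact update rules may differ.

```python
# OMP-style greedy selection for a nonlinear inverse problem,
# minimizing 0.5 * ||residual(x)||^2 over s-sparse x.
import numpy as np
from scipy.optimize import least_squares

def nonlinear_omp(residual, jac, n, s):
    """residual(x): R^n -> R^m misfit; jac(x): its m-by-n Jacobian."""
    support, x = [], np.zeros(n)
    for _ in range(s):
        g = jac(x).T @ residual(x)   # gradient of the squared misfit
        g[support] = 0.0             # skip already-selected coordinates
        support.append(int(np.argmax(np.abs(g))))

        def restricted(z, sup=tuple(support)):
            x_full = np.zeros(n)
            x_full[list(sup)] = z
            return residual(x_full)

        sol = least_squares(restricted, x[support])  # re-fit on the support
        x = np.zeros(n)
        x[support] = sol.x
    return x, support
```

    The re-fit over the whole current support at every iteration mirrors the orthogonalization step of linear OMP; a CoSaMP-style variant would instead select several coordinates per pass and prune back to s.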

    A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing

    We present a new recovery analysis for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008), which considers the fixed points of the algorithm. In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence of fixed points. These conditions allow us to perform a sparse signal recovery analysis: in the deterministic noiseless case, they imply that the original sparse signal is the unique fixed point and limit point of IHT, and in the case of Gaussian measurement matrices with noise, they yield a bound on the approximation error of the IHT limit as a multiple of the noise level. By generalizing the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT (N-IHT) (Blumensath and Davies, 2010). For both stepsize schemes, we obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. Under the reasonable average-case assumption that the underlying signal and measurement matrix are independent, comparison with previous results within this framework shows a substantial quantitative improvement.
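    The fixed-point condition at the heart of this analysis can be stated operationally: x is a fixed point of IHT with stepsize mu precisely when one thresholded gradient step leaves it unchanged. The small check below, in Python with NumPy, is an illustrative restatement of that definition, not the paper's proof machinery; the inputs A, y, s, and mu are assumed.

```python
# Check whether x is a fixed point of IHT, i.e. whether
# H_s(x - mu * A^T (A x - y)) == x up to a tolerance.
import numpy as np

def is_iht_fixed_point(x, A, y, s, mu, tol=1e-10):
    step = x - mu * A.T @ (A @ x - y)      # one gradient step
    out = np.zeros_like(step)
    idx = np.argsort(np.abs(step))[-s:]    # hard-threshold to s entries
    out[idx] = step[idx]
    return np.linalg.norm(out - x) <= tol
```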