    TV-min and Greedy Pursuit for Constrained Joint Sparsity and Application to Inverse Scattering

    This paper proposes a general framework for compressed sensing of constrained joint sparsity (CJS), which includes total variation minimization (TV-min) as an example. TV- and 2-norm error bounds, independent of the ambient dimension, are derived for the CJS versions of Basis Pursuit and Orthogonal Matching Pursuit. As an application, the results extend Candès, Romberg and Tao's proof of exact recovery of piecewise constant objects from noiseless incomplete Fourier data to the case of noisy data. Comment: Mathematics and Mechanics of Complex Systems (2013)
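
    To make the greedy pursuit ingredient concrete, here is a minimal Python sketch of plain Orthogonal Matching Pursuit (the paper studies a CJS variant, which is not reproduced here); the dictionary A, measurements y, and sparsity level k are illustrative placeholders.

    import numpy as np

    def omp(A, y, k, tol=1e-8):
        # Greedy sparse recovery: pick the column of A most correlated
        # with the residual, then re-fit by least squares on the support.
        m, n = A.shape
        residual = y.copy()
        support = []
        x = np.zeros(n)
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # least-squares refit restricted to the chosen support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = coef
            residual = y - A @ x
            if np.linalg.norm(residual) < tol:
                break
        return x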

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are central themes in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ^2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
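
    As one concrete instance of the forward-backward proximal splitting scheme the chapter mentions, here is a hedged Python sketch of ISTA for ℓ^1-regularized least squares, minimizing 0.5*||Ax - y||^2 + lam*||x||_1 over x; the names and step-size choice are assumptions for illustration, not the chapter's notation.

    import numpy as np

    def soft_threshold(v, t):
        # proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward(A, y, lam, n_iter=500):
        # Forward-backward splitting (ISTA): gradient step on the
        # smooth data-fit term, prox step on the l1 regularizer.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                         # forward step
            x = soft_threshold(x - step * grad, step * lam)  # backward step
        return x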

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
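
    As a small illustration of the convex side, the sketch below implements block soft-thresholding, the proximal operator of the non-overlapping group-lasso penalty t * sum_g ||x_g||_2, which is one standard convex relaxation of the group sparse model; the function name and group encoding are assumptions, not the chapter's code.

    import numpy as np

    def group_soft_threshold(x, groups, t):
        # Shrink each (non-overlapping) block toward zero as a unit,
        # so whole groups are either kept (rescaled) or zeroed out.
        out = np.zeros_like(x)
        for g in groups:                  # g: array of indices
            norm = np.linalg.norm(x[g])
            if norm > t:
                out[g] = (1.0 - t / norm) * x[g]
        return out

    # usage with two assumed groups of five coefficients each:
    # x_hat = group_soft_threshold(x, [np.arange(0, 5), np.arange(5, 10)], 0.3)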

    Sparse Filter Design Under a Quadratic Constraint: Low-Complexity Algorithms

    This paper considers three problems in sparse filter design: the first involving a weighted least-squares constraint on the frequency response, the second a constraint on mean squared error in estimation, and the third a constraint on signal-to-noise ratio in detection. The three problems are unified under a single framework based on sparsity maximization under a quadratic performance constraint. Efficient and exact solutions are developed for specific cases in which the matrix in the quadratic constraint is diagonal, block-diagonal, banded, or has a low condition number. For the more difficult general case, a low-complexity algorithm based on backward greedy selection is described, with emphasis on its efficient implementation. Examples in wireless channel equalization and minimum-variance distortionless-response beamforming show that the backward selection algorithm yields optimally sparse designs in many instances while also highlighting the benefits of sparse design.
    Funding: Texas Instruments Leadership University Consortium Program
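
    The following is a hedged Python sketch of backward greedy selection under a quadratic constraint, in the spirit of (but not identical to) the algorithm described above: it assumes the performance measure can be written as J(h) = h'Qh - 2 b'h + c with tolerance gamma and positive definite Q, all of which are illustrative placeholders.

    import numpy as np

    def backward_greedy(Q, b, c, gamma):
        # Start from the full support and repeatedly zero the tap whose
        # removal raises the minimized cost J the least, while J <= gamma.
        n = Q.shape[0]
        support = list(range(n))

        def min_cost(S):
            # minimize J over taps in S, remaining taps fixed to zero
            if not S:
                return c, np.zeros(n)
            hS = np.linalg.solve(Q[np.ix_(S, S)], b[S])
            h = np.zeros(n)
            h[S] = hS
            return c - b[S] @ hS, h

        cost, h = min_cost(support)
        if cost > gamma:
            raise ValueError("constraint infeasible even with full support")
        while support:
            # try removing each tap; keep the cheapest removal if feasible
            trials = [(min_cost([i for i in support if i != j])[0], j)
                      for j in support]
            best_cost, best_j = min(trials)
            if best_cost > gamma:
                break
            support.remove(best_j)
            cost, h = min_cost(support)
        return h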