
    Jump-sparse and sparse recovery using Potts functionals

    We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted ℓ¹ minimization (sparse signals).
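    To make the jump-sparse setting concrete, here is a minimal sketch (in Python) of the plain one-dimensional Potts problem, gamma times the number of jumps plus a squared data term, solved exactly by the classical O(n²) dynamic program. This is only the core subproblem: the paper's method couples such pieces with ADMM to handle blur, missing data, and non-Gaussian noise, none of which appear here. The function name potts_1d and the concrete recursion are illustrative assumptions, not the authors' code.

    import numpy as np

    def potts_1d(f, gamma):
        # Sketch (assumption): exact minimizer of the plain 1D Potts functional
        #   gamma * (#jumps of u) + ||u - f||_2^2
        # via the classical O(n^2) dynamic program; no blur or missing data.
        f = np.asarray(f, dtype=float)
        n = len(f)
        # Prefix sums give the squared error of approximating f[l..r]
        # by its mean in O(1) time.
        s = np.concatenate(([0.0], np.cumsum(f)))
        s2 = np.concatenate(([0.0], np.cumsum(f ** 2)))

        def err(l, r):  # squared deviation of f[l..r] (inclusive) from its mean
            m = r - l + 1
            return s2[r + 1] - s2[l] - (s[r + 1] - s[l]) ** 2 / m

        B = np.empty(n + 1)   # B[r] = optimal Potts value on f[0..r-1]
        B[0] = -gamma         # offset so the first segment incurs no jump cost
        jump = np.zeros(n + 1, dtype=int)
        for r in range(1, n + 1):
            costs = [B[l] + gamma + err(l, r - 1) for l in range(r)]
            l_best = int(np.argmin(costs))
            B[r], jump[r] = costs[l_best], l_best

        # Backtrack the segment boundaries; fill each segment with its mean.
        u = np.empty(n)
        r = n
        while r > 0:
            l = jump[r]
            u[l:r] = f[l:r].mean()
            r = l
        return u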

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.
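    As a hedged illustration of the learned-dictionary setting the monograph focuses on, the sketch below uses scikit-learn's DictionaryLearning to fit an overcomplete dictionary to synthetic signals and to code each signal with a few atoms via orthogonal matching pursuit. All data and parameter values are placeholders; real applications would train on, for example, image patches.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Toy data (assumption): 500 signals of dimension 64, each synthesized
    # from 5 atoms of a random ground-truth dictionary.
    rng = np.random.default_rng(0)
    D_true = rng.standard_normal((100, 64))
    codes_true = np.zeros((500, 100))
    for row in codes_true:
        row[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
    X = codes_true @ D_true

    # Learn an overcomplete dictionary of 100 atoms; each signal is then
    # represented as a combination of at most 5 atoms (OMP).
    dl = DictionaryLearning(
        n_components=100,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=5,
        max_iter=20,
        random_state=0,
    )
    codes = dl.fit_transform(X)   # sparse codes, shape (500, 100)
    D = dl.components_            # learned atoms, shape (100, 64)
    print(np.mean((X - codes @ D) ** 2))  # reconstruction error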

    Data-Driven Time-Frequency Analysis

    In this paper, we introduce a new adaptive data analysis method to study the trend and instantaneous frequency of nonlinear and non-stationary data. This method is inspired by the Empirical Mode Decomposition (EMD) method and the recently developed compressed (compressive) sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form {a(t)cos(θ(t))}, where a ∈ V(θ), V(θ) consists of functions smoother than cos(θ(t)), and θ′ ≥ 0. This problem can be formulated as a nonlinear L⁰ optimization problem. To solve it, we propose a nonlinear matching pursuit method obtained by generalizing the classical matching pursuit to the L⁰ optimization problem. One important advantage of this nonlinear matching pursuit method is that it can be implemented very efficiently and is stable to noise. Further, we provide a convergence analysis of our nonlinear matching pursuit method under certain scale separation assumptions. Extensive numerical examples demonstrate the robustness of our method, with comparisons to the EMD/EEMD method. We also apply our method to study data without scale separation, data with intra-wave frequency modulation, and incomplete or under-sampled data.
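    Since the proposed method generalizes classical matching pursuit, a minimal sketch of the classical linear algorithm may help fix ideas: it greedily picks the dictionary atom most correlated with the current residual. The paper's nonlinear variant instead searches over intrinsic mode functions a(t)cos(θ(t)); the code below (function name and unit-norm-column convention are assumptions) shows only the greedy skeleton being generalized.

    import numpy as np

    def matching_pursuit(x, D, n_atoms):
        # Classical matching pursuit over a fixed dictionary D whose columns
        # are assumed to have unit norm; returns coefficients and residual.
        r = x.astype(float).copy()        # current residual
        coef = np.zeros(D.shape[1])
        for _ in range(n_atoms):
            corr = D.T @ r                # correlation with every atom
            k = int(np.argmax(np.abs(corr)))
            coef[k] += corr[k]            # update the best atom's coefficient
            r -= corr[k] * D[:, k]        # peel that component off the residual
        return coef, r

    # Usage sketch: code a random signal with 10 atoms of a random dictionary.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)        # normalize columns
    coef, resid = matching_pursuit(rng.standard_normal(64), D, n_atoms=10)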

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have since emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ₂-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments comparing various algorithms from a computational point of view.
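    Among the proximal methods the paper surveys, the simplest is iterative soft-thresholding (ISTA) for the ℓ₁-regularized least-squares (lasso) problem. A minimal sketch follows, using the standard step size 1/L, where L is the Lipschitz constant of the smooth term's gradient; the variable names and iteration count are illustrative assumptions.

    import numpy as np

    def ista(A, b, lam, n_iter=500):
        # Proximal gradient (ISTA) for  min_x 0.5*||Ax - b||_2^2 + lam*||x||_1 .
        L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant: sigma_max(A)^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - b)         # gradient of the smooth term
            z = x - g / L                 # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return x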