
    A quasi-Newton proximal splitting method

    A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies this result to the acceleration of convex minimization problems and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning and classification.
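    As a minimal illustration of a proximal step under a scaled norm, the sketch below performs a proximal-gradient iteration with a diagonal metric, for which the prox of the ℓ1 norm remains a componentwise soft threshold. The paper also covers richer (diagonal plus rank-one) metrics, which are not shown; all names and the choice of diagonal are illustrative assumptions, not the authors' code.

```python
import numpy as np

def prox_l1_scaled(x, lam, d):
    """Prox of lam*||.||_1 in the scaled norm ||u||_D^2 = sum(d_i * u_i^2), d_i > 0.
    For a diagonal metric this is a componentwise soft threshold with
    per-coordinate thresholds lam / d_i."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / d, 0.0)

def scaled_proximal_gradient(A, y, lam, n_iter=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with a diagonally scaled
    proximal-gradient step (a crude quasi-Newton-like metric)."""
    n = A.shape[1]
    x = np.zeros(n)
    # Illustrative metric: diagonal of A^T A. A genuine quasi-Newton scheme
    # would update the metric from curvature information (and may need a
    # line search to guarantee convergence).
    d = np.sum(A**2, axis=0) + 1e-12
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = prox_l1_scaled(x - grad / d, lam, d)
    return x
```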

    Iteration-Complexity of a Generalized Forward Backward Splitting Algorithm

    In this paper, we analyze the iteration complexity of the Generalized Forward-Backward (GFB) splitting algorithm, as proposed in \cite{gfb2011}, for minimizing a large class of composite objectives $f + \sum_{i=1}^n h_i$ on a Hilbert space, where $f$ has a Lipschitz-continuous gradient and the $h_i$'s are simple (i.e., their proximity operators are easy to compute). We derive iteration-complexity bounds (pointwise and ergodic) for the inexact version of GFB to obtain an approximate solution based on an easily verifiable termination criterion. Along the way, we prove complexity bounds for relaxed and inexact fixed-point iterations built from compositions of nonexpansive averaged operators. These results apply more generally to GFB when used to find a zero of a sum of $n > 0$ maximal monotone operators and a co-coercive operator on a Hilbert space. The theoretical findings are illustrated with experiments on video processing.
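    A minimal sketch of one common form of the GFB iteration for $f + \sum_i h_i$ follows; it maintains one auxiliary variable per simple term and averages them. The step size, relaxation parameter, equal weights, and the toy problem (least squares plus an ℓ1 penalty plus a nonnegativity constraint) are illustrative assumptions; the paper's inexact and relaxed analysis is not reproduced.

```python
import numpy as np

def gfb(grad_f, prox_list, x0, gamma, lam=1.0, n_iter=200):
    """Generalized Forward-Backward sketch for min f(x) + sum_i h_i(x).

    grad_f    : gradient of the smooth term f
    prox_list : callables p(v, step) = prox_{step * h_i}(v)
    gamma     : step size (should satisfy gamma < 2*beta, beta = co-coercivity of grad f)
    lam       : relaxation parameter in (0, 1]
    """
    n = len(prox_list)
    w = 1.0 / n                                  # equal weights
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        for i, prox in enumerate(prox_list):
            z[i] = z[i] + lam * (prox(2 * x - z[i] - gamma * g, gamma / w) - x)
        x = sum(z) / n
    return x

# Illustrative use: f = 0.5||Ax - y||^2, h_1 = mu*||x||_1, h_2 = indicator(x >= 0)
rng = np.random.default_rng(0)
A, y, mu = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - mu * t, 0.0)
proj_pos = lambda v, t: np.maximum(v, 0.0)
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x_hat = gfb(lambda x: A.T @ (A @ x - y), [prox_l1, proj_pos],
            np.zeros(50), gamma=1.0 / L)
```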

    Model Consistency of Partly Smooth Regularizers

    This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of functions is very large and versatile, allowing one to promote solutions conforming to some notion of low complexity. Indeed, they force solutions of variational problems to belong to a low-dimensional manifold (the so-called model), which is stable under small perturbations of the function. This property is crucial to make the underlying low-complexity model robust to small noise. We show that a generalized "irrepresentable condition" implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. This condition is shown to be almost necessary. We then show that this condition implies model consistency of the regularized estimator: with a probability tending to one as the number of measurements increases, the regularized estimator belongs to the correct low-dimensional model manifold. This work unifies and generalizes several previous ones, where model consistency is known to hold for sparse, group-sparse, total variation and low-rank regularizations.
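    For the special case of the ℓ1 (sparse) regularizer, the generalized irrepresentable condition reduces to the classical lasso condition; the sketch below evaluates that classical quantity only, and is not the paper's general partly-smooth criterion.

```python
import numpy as np

def l1_irrepresentable(X, beta):
    """Classical l1 irrepresentable quantity for design X and true vector beta.
    A value < 1 is the usual sufficient condition for stable support recovery."""
    I = np.flatnonzero(beta)                         # true support
    J = np.setdiff1d(np.arange(X.shape[1]), I)       # its complement
    XI, XJ = X[:, I], X[:, J]
    s = np.sign(beta[I])
    v = XJ.T @ XI @ np.linalg.solve(XI.T @ XI, s)
    return np.max(np.abs(v))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
beta = np.zeros(30); beta[:3] = [1.5, -2.0, 0.7]
print("irrepresentable value:", l1_irrepresentable(X, beta))   # < 1 => stable support
```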

    Sparse Support Recovery with Non-smooth Loss Functions

    In this paper, we study the support recovery guarantees of underdetermined sparse regression using the $\ell_1$-norm as a regularizer and a non-smooth loss function for data fidelity. More precisely, we focus in detail on the cases of $\ell_1$ and $\ell_\infty$ losses, and contrast them with the usual $\ell_2$ loss. While these losses are routinely used to account for either sparse ($\ell_1$ loss) or uniform ($\ell_\infty$ loss) noise models, a theoretical analysis of their performance is still lacking. In this article, we extend the existing theory from the smooth $\ell_2$ case to these non-smooth cases. We derive a sharp condition which ensures that the support of the vector to recover is stable to small additive noise in the observations, as long as the loss constraint size is tuned proportionally to the noise level. A distinctive feature of our theory is that it also explains what happens when the support is unstable: while the support is no longer stable, we identify an "extended support" and show that this extended support is stable to small additive noise. To exemplify the usefulness of our theory, we give a detailed numerical analysis of the support stability/instability of compressed sensing recovery with these different losses. This highlights different parameter regimes, ranging from total support stability to progressively increasing support instability.
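    As a small companion to the ℓ∞-loss case discussed above, the sketch below casts min ||x||_1 subject to ||Ax − y||_∞ ≤ τ as a linear program solved with scipy.optimize.linprog; the formulation is the standard LP lifting, not code from the paper, and the data and τ are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_linf_constraint(A, y, tau):
    """Solve min ||x||_1  s.t.  ||A x - y||_inf <= tau  as an LP.
    Variables are z = [x, t] with |x_i| <= t_i; the objective is sum(t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I, Z = np.eye(n), np.zeros((m, n))
    A_ub = np.vstack([
        np.hstack([ I, -I]),      #  x - t <= 0
        np.hstack([-I, -I]),      # -x - t <= 0
        np.hstack([ A,  Z]),      #  A x <= y + tau
        np.hstack([-A,  Z]),      # -A x <= tau - y
    ])
    b_ub = np.concatenate([np.zeros(2 * n), y + tau, tau - y])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
x0 = np.zeros(80); x0[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x0 + 0.01 * rng.uniform(-1, 1, 30)        # uniform (bounded) noise
x_hat = l1_min_linf_constraint(A, y, tau=0.01)
```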

    Curvelets and Ridgelets

    Despite the fact that wavelets have had a wide impact in image processing, they fail to efficiently represent objects with highly anisotropic elements such as lines or curvilinear structures (e.g., edges). The reason is that wavelets are non-geometrical and do not exploit the regularity of the edge curve. The ridgelet and curvelet [3, 4] transforms were developed as an answer to the weakness of the separable wavelet transform in sparsely representing what appear to be simple building atoms in an image, that is, lines, curves and edges. Curvelets and ridgelets take the form of basis elements which exhibit high directional sensitivity and are highly anisotropic [5, 6, 7, 8]. These recent geometric image representations are built upon ideas of multiscale analysis and geometry. They have had an important success in a wide range of image processing applications including denoising [8, 9, 10], deconvolution [11, 12], contrast enhancement [13], texture analysis [14, 15], detection [16], watermarking [17], component separation [18], inpainting [19, 20] and blind source separation [21, 22]. Curvelets have also proven useful in diverse fields beyond traditional image processing applications, for example seismic imaging [10, 23, 24], astronomical imaging [25, 26, 27], scientific computing and the analysis of partial differential equations [28, 29]. Another reason for the success of ridgelets and curvelets is the availability of fast transform algorithms in non-commercial software packages following the philosophy of reproducible research; see [30, 31].

    Image Decomposition and Separation Using Sparse Representations: An Overview

    This paper gives essential insights into the use of sparsity and morphological diversity in image decomposition and source separation by reviewing our recent work in this field. The idea of morphologically decomposing a signal into its building blocks is an important problem in signal processing and has far-reaching applications in science and technology. Starck et al. proposed a novel decomposition method, morphological component analysis (MCA), based on sparse representation of signals. MCA assumes that each (monochannel) signal is the linear mixture of several layers, the so-called morphological components, that are morphologically distinct, e.g., sines and bumps. The success of this method relies on two tenets: sparsity and morphological diversity. That is, each morphological component is sparsely represented in a specific transform domain, and the latter is highly inefficient in representing the other content in the mixture. Once such transforms are identified, MCA is an iterative thresholding algorithm that is capable of decoupling the signal content. Sparsity and morphological diversity have also been used as a novel and effective source of diversity for blind source separation (BSS), hence extending MCA to multichannel data. Building on these ingredients, we provide an overview of the generalized MCA introduced by the authors as a fast and efficient BSS method. We illustrate the application of these algorithms on several real examples. We conclude our tour by briefly describing our software toolboxes, made available for download on the Internet, for sparse signal and image decomposition and separation.
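    A toy illustration of the MCA iterative-thresholding loop described above, separating a 1D signal into a "sines" component (sparse in a DCT dictionary) and a spike component (sparse in the standard basis). The dictionaries, decreasing-threshold schedule, and iteration count are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_dct_spikes(y, n_iter=100, lam_max=None):
    """Toy MCA: y ~ sines + spikes. Alternate thresholded updates of each
    morphological component in its own dictionary with a decreasing threshold."""
    sines, spikes = np.zeros_like(y), np.zeros_like(y)
    if lam_max is None:
        lam_max = np.max(np.abs(y))
    for k in range(n_iter):
        lam = lam_max * (1 - k / n_iter)          # linearly decreasing threshold
        # update the DCT-sparse component from the current residual
        sines = idct(soft(dct(y - spikes, norm="ortho"), lam), norm="ortho")
        # update the spike component (sparse in the standard basis)
        spikes = soft(y - sines, lam)
    return sines, spikes

t = np.linspace(0, 1, 512)
y = np.sin(2 * np.pi * 12 * t)
y[[60, 300, 410]] += [3.0, -2.5, 2.0]             # a few isolated spikes
sines, spikes = mca_dct_spikes(y)
```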

    Numerical Issues When Using Wavelets

    Wavelets and related multiscale representations pervade all areas of signal processing. The recent inclusion of wavelet algorithms in JPEG 2000, the new still-picture compression standard, testifies to this lasting and significant impact. The success of wavelets is due to the fact that wavelet bases represent a large class of signals well, and therefore allow us to detect roughly isotropic elements occurring at all spatial scales and locations. As the noise in the physical sciences is often not Gaussian, the modeling, in the wavelet space, of many kinds of noise (Poisson noise, combinations of Gaussian and Poisson noise, long-memory 1/f noise, non-stationary noise, ...) has also been a key step for the use of wavelets in scientific, medical, or industrial applications [1]. Extensive wavelet packages now exist, commercial (see for example [2]) or non-commercial (see for example [3, 4]), which allow any researcher, doctor, or engineer to analyze their data using wavelets.
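    To make the remark about non-Gaussian noise concrete, the sketch below denoises a Poisson-corrupted 1D signal with PyWavelets: an Anscombe transform approximately stabilizes the variance, detail coefficients are soft-thresholded, and both transforms are inverted. The wavelet, decomposition level, threshold, and the simple algebraic inverse Anscombe are illustrative assumptions.

```python
import numpy as np
import pywt

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)            # variance-stabilizing transform

def inv_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0              # simple (biased) algebraic inverse

def poisson_wavelet_denoise(counts, wavelet="db4", level=4, k=3.0):
    """Denoise Poisson data: stabilize variance (noise std ~ 1 after Anscombe),
    soft-threshold the wavelet detail coefficients, then invert both transforms."""
    y = anscombe(counts.astype(float))
    coeffs = pywt.wavedec(y, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, k, mode="soft") for c in coeffs[1:]]
    return inv_anscombe(pywt.waverec(coeffs, wavelet))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
intensity = 20 * (1 + np.sin(2 * np.pi * 3 * t) ** 2)
counts = rng.poisson(intensity)                    # photon-counting style data
estimate = poisson_wavelet_denoise(counts)
```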

    Sparse Representations and Bayesian Image Inpainting

    Representing the image to be inpainted in an appropriate sparse dictionary, and combining elements from Bayesian statistics, we introduce an expectation-maximization (EM) algorithm for image inpainting. From a statistical point of view, inpainting can be viewed as an estimation problem with missing data. Towards this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity-promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered based on sparse representations. We first introduce an easy and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties for a wide class of penalties. In particular, we establish that it converges in a strong sense, and give sufficient conditions for convergence to a local or a global minimum. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also describe some ideas to automatically find the regularization parameter.
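    A minimal sketch of the missing-data iteration described above, on a 1D signal with a DCT dictionary and a fixed soft threshold: at each step the missing samples are filled from the current estimate (an E-like step) and the completed signal is shrunk in the dictionary (an M-like step). The dictionary, threshold, and mask are illustrative assumptions, not the paper's exact EM derivation.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inpaint_sparse(y, mask, lam=0.1, n_iter=200):
    """Inpaint a 1D signal: y holds observed values, mask is True where observed.
    Iterate: fill the gaps from the current estimate, then shrink in a DCT
    dictionary to promote sparsity of the reconstructed coefficients."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        filled = np.where(mask, y, x)              # keep observed samples, fill gaps
        x = idct(soft(dct(filled, norm="ortho"), lam), norm="ortho")
    return np.where(mask, y, x)                    # observed samples are kept exactly

t = np.linspace(0, 1, 512)
signal = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 11 * t)
mask = np.random.default_rng(0).random(512) > 0.4  # ~60% of samples observed
estimate = inpaint_sparse(np.where(mask, signal, 0.0), mask)
```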

    Monotone operator splitting for optimization problems in sparse recovery

    This work focuses on several optimization problems involved in the recovery of sparse solutions of linear inverse problems. Such problems appear in many fields, including image and signal processing, and have attracted even more interest since the emergence of compressed sensing (CS) theory. In this paper, we formalize many of these optimization problems within a unified framework of convex optimization theory, and invoke tools from convex analysis and maximal monotone operator splitting. We characterize all these optimization problems and, to solve them, propose fast, convergent iterative algorithms using forward-backward and/or Douglas-Rachford/Peaceman-Rachford splitting iterations. With non-differentiable sparsity-promoting penalties, the proposed algorithms are essentially based on iterative shrinkage, which makes them very competitive for large-scale problems. We also report some experiments on image reconstruction in CS to demonstrate the applicability of the proposed framework.
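    For the forward-backward case with an ℓ1 penalty, the iteration reduces to iterative soft thresholding (ISTA); a minimal sketch under made-up problem sizes follows. The Douglas-Rachford variant mentioned above is not shown.

```python
import numpy as np

def ista(A, y, lam, n_iter=300):
    """Forward-backward (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by the l1 proximity operator
    (soft thresholding), i.e. iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - (A.T @ (A @ x - y)) / L            # forward (gradient) step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # backward (prox) step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 256))                 # compressed-sensing style matrix
x0 = np.zeros(256); x0[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
y = A @ x0 + 0.01 * rng.standard_normal(64)
x_hat = ista(A, y, lam=0.05)
```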

    An overview of inverse problem regularization using sparsity

    Sparsity constraints are now very popular for regularizing inverse problems. We review several approaches proposed over the last ten years to solve inverse problems such as inpainting, deconvolution or blind source separation. We focus especially on optimization methods based on iterative thresholding to derive the solution.