
    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
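    To make (iii) concrete, here is a minimal NumPy sketch of forward-backward splitting applied to the flagship sparsity prior, i.e. the $\ell^1$-regularized least-squares (Lasso) problem. The problem sizes, step size, and regularization weight below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_lasso(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    # Constant step size 1/L, where L = ||A||^2 is the Lipschitz
    # constant of the gradient of the data-fidelity term.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # forward (explicit) step
        x = soft_threshold(x - tau * grad, tau * lam)  # backward (proximal) step
    return x

# Toy compressed-sensing instance: an 8-sparse signal observed
# through 100 random Gaussian measurements with small noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 256)) / np.sqrt(100)
x_true = np.zeros(256)
x_true[rng.choice(256, size=8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = forward_backward_lasso(A, y, lam=0.02)
```

    Other regularizers from the review slot in by swapping the proximal operator: group soft-thresholding for group sparsity, or singular-value thresholding for the nuclear norm.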

    A first-order stochastic primal-dual algorithm with correction step

    We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied in Y. Drori, S. Sabach, and M. Teboulle, "A simple algorithm for a class of nonsmooth convex-concave saddle-point problems" (2015), and I. Loris and C. Verhoeven, "On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty" (2011), for saddle-point problems. It consists of a forward step determined by the stochastic evaluation of the cocoercive operator, a backward step in the dual variables involving the resolvent of the monotone operator, and an additional forward step using the stochastic evaluation of the cocoercive operator introduced in the first step. We prove weak almost sure convergence of the iterates by showing that the primal-dual sequence generated by the method is stochastic quasi-Fejér-monotone with respect to the set of zeros of the considered primal and dual inclusions. Additional results on ergodic convergence in expectation are presented for the special case of saddle-point models.
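    The three-step structure described above can be sketched on a model problem. The following NumPy snippet targets min_x f(x) + g(Lx) with f a least-squares term (so its gradient is cocoercive) and g = λ||·||₁; the resolvent of the dual operator is then a projection onto an $\ell^\infty$ ball. The toy estimator, minibatch gradient, and constant step sizes are illustrative assumptions and do not reproduce the paper's exact setting or step-size conditions.

```python
import numpy as np

def prox_conj_l1(u, lam):
    """Resolvent of the conjugate of lam*||.||_1:
    projection onto the l-infinity ball of radius lam."""
    return np.clip(u, -lam, lam)

def stochastic_primal_dual(A, b, L, lam, tau, sigma,
                           n_iter=2000, batch=10, seed=0):
    """Stochastic primal-dual iteration with a correction step for
    min_x (1/(2m))*||Ax - b||^2 + lam*||Lx||_1."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(L.shape[0])
    for _ in range(n_iter):
        # Stochastic evaluation of the cocoercive operator (minibatch gradient).
        idx = rng.choice(m, size=batch, replace=False)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch
        x_bar = x - tau * (grad + L.T @ y)              # forward step
        y = prox_conj_l1(y + sigma * (L @ x_bar), lam)  # backward step on the dual
        x = x - tau * (grad + L.T @ y)                  # correction: reuses grad
    return x
```

    Note that the correction step reuses the stochastic gradient from the first forward step rather than drawing a fresh sample, which is precisely the feature distinguishing this scheme from a plain primal-dual iteration.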

    Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization

    We propose a new first-order optimization algorithm to solve high-dimensional nonsmooth composite minimization problems. Typical examples of such problems have an objective that decomposes into a nonsmooth empirical-risk part and a nonsmooth regularization penalty. The proposed algorithm, called Semi-Proximal Mirror-Prox, leverages the Fenchel-type representation of one part of the objective while handling the other part via linear minimization over the domain. The algorithm stands in contrast to more classical proximal gradient algorithms with smoothing, which require the computation of proximal operators at each iteration and can therefore be impractical for high-dimensional problems. We establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal complexity bound of $O(1/\epsilon^2)$ calls to the linear minimization oracle. We present promising experimental results showing the interest of the approach in comparison to competing methods.
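    The distinguishing ingredient is that one part of the objective is handled through a linear minimization oracle (LMO) rather than a proximal operator. The sketch below shows an LMO for the $\ell^1$ ball inside a plain conditional-gradient (Frank-Wolfe) loop on a smooth toy objective, just to illustrate what an LMO call looks like; it is not the Semi-Proximal Mirror-Prox algorithm itself, whose Fenchel-type representation and mirror steps are detailed in the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle for the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> is a signed, scaled basis vector."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def conditional_gradient(A, b, radius, n_iter=200):
    """Frank-Wolfe for min 0.5*||Ax - b||^2 over the l1 ball,
    using one LMO call per iteration and no proximal operator."""
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        grad = A.T @ (A @ x - b)
        s = lmo_l1_ball(grad, radius)   # LMO call
        gamma = 2.0 / (k + 2.0)         # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```

    For many constraint sets (e.g. the nuclear-norm ball, where the LMO only requires a leading singular pair) such an oracle is far cheaper than the corresponding proximal operator, which is the regime the paper targets.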