
    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
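
    As an illustration of the forward-backward proximal splitting scheme mentioned in item (iii), the following is a minimal sketch for the $\ell_1$-regularized least-squares (lasso) instance of the low-complexity framework; the operator, step size, regularization weight, and iteration count are illustrative assumptions rather than choices from the chapter.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward(A, y, lam, n_iter=500):
        """Forward-backward splitting for 0.5*||A x - y||^2 + lam*||x||_1:
        a gradient (forward) step on the smooth data-fit term, followed by a
        proximal (backward) step on the nonsmooth l1 regularizer."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                          # forward step
            x = soft_threshold(x - step * grad, step * lam)   # backward step
        return x

    # Hypothetical example: recover a sparse vector from noisy random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200); x_true[:5] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = forward_backward(A, y, lam=0.1)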

    Fast algorithms for smooth and monotone covariance matrix estimation

    In this thesis, the problem of interest is, within the setting of financial risk management, covariance matrix estimation from a limited number of high-dimensional independent, identically distributed (i.i.d.) multivariate samples when the random variables of interest have a natural spatial indexing along a low-dimensional manifold, e.g., along a line. The sample covariance matrix estimate is fraught with peril in this context. A variety of approaches to improve the covariance estimates have been developed by exploiting knowledge of structure in the data, which, however, generally impose very strict structure. We instead exploit another formulation, which assumes that the covariance matrix is smooth and monotone with respect to the spatial indexing. Originally, the formulation is derived from the estimation problem within a convex-optimization framework, and the resulting semidefinite programming (SDP) problem is solved by an interior-point method (IPM). However, solving the SDP via an IPM can become unduly computationally expensive for large covariance matrices. Motivated by this observation, this thesis develops highly efficient first-order solvers for smooth and monotone covariance matrix estimation. We propose two types of solvers for covariance matrix estimation: the first based on projected gradients, and the second based on recently developed optimal first-order methods. Given such numerical algorithms, we present a comprehensive experimental analysis. We first demonstrate the benefits of imposing smoothness and monotonicity constraints in covariance matrix estimation in a number of scenarios involving limited, missing, and asynchronous data. We then demonstrate the potential computational benefits offered by first-order methods through a detailed comparison to solution of the problem via IPMs.
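
    To make the projected-gradient approach concrete, here is a minimal sketch in the same spirit: a gradient step on a Frobenius-norm fit to the sample covariance followed by a projection onto the constraint set. The smoothness and monotonicity constraints used in the thesis are more specific; the projection below onto the positive semidefinite cone is an illustrative stand-in, and all names and parameters are assumptions.

    import numpy as np

    def project_psd(M):
        """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
        w, V = np.linalg.eigh((M + M.T) / 2.0)
        return (V * np.maximum(w, 0.0)) @ V.T

    def projected_gradient_cov(S, n_iter=200, step=0.5):
        """Minimize 0.5*||C - S||_F^2 over the constraint set (here: PSD matrices)
        by projected gradient descent."""
        C = np.eye(S.shape[0])
        for _ in range(n_iter):
            grad = C - S                      # gradient of the Frobenius-norm fit
            C = project_psd(C - step * grad)  # gradient step, then projection
        return C

    # Hypothetical example: few samples relative to the dimension, so the sample
    # covariance is a noisy, rank-deficient estimate.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 50))         # 20 samples in dimension 50
    S = np.cov(X, rowvar=False)
    C_hat = projected_gradient_cov(S)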

    Frontiers in Nonparametric Statistics

    The goal of this workshop was to discuss recent developments in nonparametric statistical inference. A particular focus was on high-dimensional statistics, semiparametrics, adaptation, nonparametric Bayesian statistics, shape-constrained estimation, and statistical inverse problems. The close interaction of these issues with optimization, machine learning, and inverse problems was addressed as well.

    A General Framework for Fast Stagewise Algorithms

    Forward stagewise regression follows a very simple strategy for constructing a sequence of sparse regression estimates: it starts with all coefficients equal to zero, and iteratively updates the coefficient (by a small amount $\epsilon$) of the variable that achieves the maximal absolute inner product with the current residual. This procedure has an interesting connection to the lasso: under some conditions, it is known that the sequence of forward stagewise estimates exactly coincides with the lasso path as the step size $\epsilon$ goes to zero. Furthermore, essentially the same equivalence holds outside of least squares regression, with the minimization of a differentiable convex loss function subject to an $\ell_1$ norm constraint (the stagewise algorithm now updates the coefficient corresponding to the maximal absolute component of the gradient). Even when they do not match their $\ell_1$-constrained analogues, stagewise estimates provide a useful approximation and are computationally appealing. Their success in sparse modeling motivates the question: can a simple, effective strategy like forward stagewise be applied more broadly in other regularization settings, beyond the $\ell_1$ norm and sparsity? The current paper is an attempt to do just this. We present a general framework for stagewise estimation, which yields fast algorithms for problems such as group-structured learning, matrix completion, image denoising, and more. Comment: 56 pages, 15 figures.
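
    A minimal sketch of the forward stagewise procedure described above: start at zero and repeatedly nudge, by a small amount epsilon, the coefficient of the variable most correlated with the current residual. The step size, iteration count, and synthetic data are illustrative assumptions.

    import numpy as np

    def forward_stagewise(X, y, eps=0.01, n_steps=2000):
        """Forward stagewise regression for least squares."""
        beta = np.zeros(X.shape[1])
        r = y.astype(float).copy()                # current residual
        for _ in range(n_steps):
            corr = X.T @ r                        # inner products with the residual
            j = np.argmax(np.abs(corr))           # most correlated variable
            delta = eps * np.sign(corr[j])        # small step in its direction
            beta[j] += delta
            r -= delta * X[:, j]                  # update the residual
        return beta

    # Hypothetical example on synthetic sparse data; for small eps the coefficient
    # paths approximate the lasso path under the conditions mentioned above.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    beta_true = np.zeros(20); beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + 0.1 * rng.standard_normal(100)
    beta_hat = forward_stagewise(X, y)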