
    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In the framework of Nesterov both μ and L are assumed known, an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient μ and L during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results.
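
    As a rough illustration of the setup this abstract describes, the sketch below runs Nesterov's constant-momentum scheme for a μ-strongly convex, L-smooth objective and grows a local estimate of L by backtracking whenever a sufficient-decrease test fails. It is a minimal stand-in, not the authors' algorithm: their estimation mechanism for μ and the surrounding details differ, and the function name `nesterov_strongly_convex` and the test problem are invented for this example.

```python
import numpy as np

def nesterov_strongly_convex(f, grad, x0, mu, L, n_iter=500):
    """Nesterov's optimal method for mu-strongly convex f with
    L-Lipschitz gradient; L is grown by backtracking when the
    standard sufficient-decrease condition fails."""
    x = y = x0.copy()
    for _ in range(n_iter):
        g = grad(y)
        # Double L until the gradient step achieves sufficient decrease.
        while f(y - g / L) > f(y) - np.dot(g, g) / (2 * L):
            L *= 2.0
        x_new = y - g / L
        # Constant momentum valid for known (mu, L); here mu is an estimate.
        q = np.sqrt(mu / L)
        beta = (1 - q) / (1 + q)
        y = x_new + beta * (x_new - x)
        x = x_new
    return x

# Tiny usage example on a strongly convex quadratic 0.5*x'Ax - b'x.
A = np.diag([1.0, 100.0])          # condition number 100
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = nesterov_strongly_convex(f, grad, np.zeros(2), mu=1.0, L=10.0)
```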

    Non-negative Least Squares via Overparametrization

    In many applications, solutions of numerical problems are required to be non-negative, e.g., when retrieving pixel intensity values or physical densities of a substance. In this context, non-negative least squares (NNLS) is a ubiquitous tool, e.g., when seeking sparse solutions of high-dimensional statistical problems. Despite vast efforts since the seminal work of Lawson and Hanson in the '70s, the non-negativity assumption is still an obstacle for the theoretical analysis and scalability of many off-the-shelf solvers. In the different context of deep neural networks, it has recently been observed that training overparametrized models via gradient descent leads to surprising generalization properties and the retrieval of regularized solutions. In this paper, we prove that, by using an overparametrized formulation, NNLS solutions can reliably be approximated via vanilla gradient flow. We furthermore establish stability of the method against negative perturbations of the ground-truth. Our simulations confirm that this allows the use of vanilla gradient descent as a novel and scalable numerical solver for NNLS. From a conceptual point of view, our work proposes a novel approach to trading side-constraints in optimization problems against complexity of the optimization landscape, which does not build upon the concept of Lagrange multipliers.
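
    The overparametrized formulation lends itself to a compact sketch: writing x = v ⊙ v makes non-negativity automatic, and plain gradient descent on the resulting unconstrained loss approximates the NNLS solution. The step size, initialization scale, and iteration count below are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def nnls_overparametrized(A, b, n_iter=20000, lr=None, v0_scale=1e-3):
    """Approximate NNLS by writing x = v * v (elementwise) and running
    plain gradient descent on the unconstrained loss ||A(v*v) - b||^2.
    Non-negativity of x holds by construction."""
    m, n = A.shape
    v = np.full(n, v0_scale)           # small positive init (implicit bias)
    if lr is None:
        # Conservative step size based on the spectral norm of A.
        lr = 0.25 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        r = A @ (v * v) - b            # residual
        v -= lr * 4 * v * (A.T @ r)    # chain rule through x = v * v
    return v * v

# Usage: the returned iterate is non-negative by construction.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = nnls_overparametrized(A, b)
assert (x >= 0).all()
```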

    First-order Convex Optimization Methods for Signal and Image Processing

    In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple-description problem. We finally present the contributions of the thesis. The remaining parts of the thesis consist of five research papers. The first paper addresses non-smooth first-order convex optimization and the trade-off between accuracy and smoothness of the approximating smooth function. The second and third papers concern discrete linear inverse problems and reliable numerical reconstruction software. The last two papers present a convex optimization formulation of the multiple-description problem and a method to solve it in the case of large-scale instances.
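
    Of the techniques this thesis surveys, the proximal gradient method admits the shortest self-contained illustration. The sketch below is the standard ISTA iteration for ℓ1-regularized least squares, a common sparse-recovery formulation; it is a generic textbook instance, not code from the thesis.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by the l1 prox
    (soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # gradient of the smooth part
        z = x - grad / L               # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return x
```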

    Sparse Image Reconstruction in Computed Tomography


    Projected Barzilai-Borwein Method with Infeasible Iterates for Nonnegative Least-Squares Image Deblurring


    Primal-Dual Active-Set Methods for Convex Quadratic Optimization with Applications

    Primal-dual active-set (PDAS) methods are developed for solving quadratic optimization problems (QPs). Such problems arise in their own right in optimal control and statistics, two applications of interest considered in this dissertation, and as subproblems when solving nonlinear optimization problems. PDAS methods are promising as they possess the same favorable properties as other active-set methods, such as their ability to be warm-started and to obtain highly accurate solutions by explicitly identifying sets of constraints that are active at an optimal solution. However, unlike traditional active-set methods, PDAS methods have convergence guarantees despite making rapid changes in active-set estimates, making them well suited for solving large-scale problems.

    Two PDAS variants are proposed for efficiently solving generally-constrained convex QPs. Both variants ensure global convergence of the iterates by enforcing monotonicity in a measure of progress. In addition to the active-set estimate, a novel uncertain set is introduced into the framework in order to house indices of variables that have been identified as being susceptible to cycling. The introduction of the uncertain set guarantees convergence of the algorithm, and with techniques proposed to keep the set from expanding quickly, the practical performance of the algorithm is shown to be very efficient.

    Another PDAS variant is proposed for solving certain convex QPs that commonly arise when discretizing optimal control problems. The proposed framework allows inexactness in the subproblem solutions, which can significantly reduce computational cost in large-scale settings. By controlling the level of inexactness, either by exploiting knowledge of an upper bound on a matrix inverse or by dynamically estimating such a value, the method achieves convergence guarantees and is shown to outperform a method that employs exact solutions computed by direct factorization techniques.

    Finally, turning to applications in statistics, PDAS variants are proposed for solving isotonic regression (IR) and trend filtering (TF) problems. It is shown that PDAS can solve an IR problem with n data points in only O(n) arithmetic operations. Moreover, the method is shown to outperform the state-of-the-art method for solving IR problems, especially when warm-starting is considered. Enhancements to the method are proposed for solving general TF problems, and numerical results are presented to show that PDAS methods are viable for a broad class of such problems.
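
    To make the PDAS idea concrete, here is a minimal sketch for the simplest case, a bound-constrained convex QP min ½xᵀQx + cᵀx subject to x ≥ 0. Each pass guesses the active set from the current primal-dual pair and re-solves the reduced system, which is the rapid active-set updating the abstract refers to. The uncertain-set safeguard and warm-starting machinery of the dissertation are omitted, and convergence of this bare iteration is only guaranteed for favorable Q (e.g., M-matrices).

```python
import numpy as np

def pdas_bound_qp(Q, c, n_iter=50):
    """Primal-dual active-set iteration for min 0.5*x'Qx + c'x s.t. x >= 0.
    Guesses the active set from the current primal-dual pair, fixes those
    variables at the bound, and solves the reduced KKT system for the rest."""
    n = len(c)
    x = np.zeros(n)
    z = Q @ x + c                       # dual estimate (multipliers)
    for _ in range(n_iter):
        active = x - z < 0              # predicted to sit at x_i = 0
        inactive = ~active
        x_new = np.zeros(n)
        if inactive.any():
            Qii = Q[np.ix_(inactive, inactive)]
            x_new[inactive] = np.linalg.solve(Qii, -c[inactive])
        z_new = np.zeros(n)
        z_new[active] = (Q @ x_new + c)[active]
        if np.array_equal(x_new, x) and np.array_equal(z_new, z):
            break                       # active set settled: KKT point found
        x, z = x_new, z_new
    return x, z
```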

    Regularization techniques based on Krylov subspace methods for ill-posed linear systems

    This thesis is focussed on the regularization of large-scale linear discrete ill-posed problems. Problems of this kind arise in a variety of applications and, in a continuous setting, they are often formulated as Fredholm integral equations of the first kind with smooth kernel, modeling an inverse problem (i.e., the unknown of these equations is the cause of an observed effect). Upon discretization, linear systems whose coefficient matrix is ill-conditioned and whose right-hand side vector is affected by perturbations (noise) must be solved. In this setting, a straightforward solution of the available linear system is meaningless because the computed solution would be dominated by errors; moreover, for large-scale problems, solving the available system directly could be computationally infeasible. Therefore, in order to recover a meaningful approximation of the original solution, some regularization must be employed, i.e., the original linear system must be replaced by a nearby problem having better numerical properties.

    The first part of this thesis (Chapter 1) gives an overview of inverse problems and briefly describes their properties in the continuous setting; then, in a discrete setting, the most well-known regularization techniques relying on some factorization of the system matrix are surveyed. The remaining part of the thesis is concerned with iterative regularization strategies based on Krylov subspace methods, which are well suited for large-scale problems. More precisely, in Chapter 2, an extensive overview of the Krylov subspace methods most successfully employed for regularization purposes is presented: historically, the first methods to be used were related to the normal equations, and many issues linked to the analysis of their behavior have already been addressed. The situation is different for the methods based on the Arnoldi algorithm, whose regularizing properties are not yet well understood or widely accepted. Therefore, still in Chapter 2, a novel analysis of the approximation properties of the Arnoldi algorithm when employed to solve linear discrete ill-posed problems is presented, in order to provide some insight into the use of Arnoldi-based methods for regularization purposes.

    The core results of this thesis are related to the class of Arnoldi-Tikhonov methods, first introduced about ten years ago and described in Chapter 3. The Arnoldi-Tikhonov approach to regularization consists in solving a Tikhonov-regularized problem by means of an iterative strategy based on the Arnoldi algorithm. With respect to a purely iterative approach to regularization, Arnoldi-Tikhonov methods can deliver more accurate approximations by easily incorporating some information about the behavior of the solution within the reconstruction process. In connection with Arnoldi-Tikhonov methods, many open questions remain, the most significant ones being the choice of the regularization parameters and the choice of the regularization matrices. The first issue is addressed in Chapter 4, where two new efficient and original parameter selection strategies to be employed with the Arnoldi-Tikhonov methods are derived and extensively tested; still in Chapter 4, a novel extension of the Arnoldi-Tikhonov method to the multi-parameter Tikhonov regularization case is described.

    Finally, in Chapter 5, two efficient and innovative schemes to approximate the solution of nonlinear regularized problems are presented: more precisely, the regularization terms originally defined by the 1-norm or by the Total Variation functional are approximated by adaptively updating suitable regularization matrices within the Arnoldi-Tikhonov iterations. Throughout this thesis, the results of many numerical experiments are presented in order to show the performance of the newly proposed methods and to compare them with already existing strategies.
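
    A bare-bones version of the Arnoldi-Tikhonov idea fits in a few lines: run the Arnoldi process on the (square) system matrix to build a Krylov basis, then solve the Tikhonov problem projected into that subspace. The sketch below fixes the regularization matrix to the identity and takes the regularization parameter λ as given; the parameter-choice rules and general regularization matrices that are the thesis's contributions are not reproduced, and breakdown of the Arnoldi process is not handled.

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: A @ V[:, :k] = V @ H, with V orthonormal and
    H of size (k+1) x k upper Hessenberg. Assumes no breakdown."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Solve min ||Ax - b||^2 + lam^2 * ||x||^2 projected onto the
    k-dimensional Krylov subspace K_k(A, b); A must be square."""
    V, H = arnoldi(A, b, k)
    beta = np.linalg.norm(b)
    # Stacked least-squares form of the projected regularized problem.
    M = np.vstack([H, lam * np.eye(k)])
    rhs = np.concatenate([beta * np.eye(k + 1)[:, 0], np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V[:, :k] @ y
```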

    Recent Techniques for Regularization in Partial Differential Equations and Imaging

    Inverse problems model real-world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high-order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, incorporating the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain. This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one- and two-dimensional functions from multiple measurement vectors using variance-based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with ℓ1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high-order regularization, the defining characteristics of each of these problems create unique challenges. Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine-tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
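
    As a simplified illustration of high-order regularization, the sketch below penalizes an m-th order finite difference of the solution, a crude stand-in for the Polynomial Annihilation operator, and uses an ℓ2 penalty so that the solve stays linear (the dissertation's ℓ1 penalties, which better preserve edges, require an iterative solver). All names and parameter choices here are assumptions made for the example.

```python
import numpy as np

def diff_operator(n, order):
    """m-th order finite-difference matrix, built by repeatedly applying
    first differences; a simple stand-in for the Polynomial Annihilation
    operator used in the dissertation."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:, :] - D[:-1, :]        # repeated first differences
    return D

def high_order_tikhonov(A, b, lam, order=2):
    """Solve min ||Ax - b||^2 + lam^2 * ||Dx||^2 via the normal equations,
    where D penalizes deviations from piecewise polynomials of low degree."""
    D = diff_operator(A.shape[1], order)
    return np.linalg.solve(A.T @ A + lam**2 * D.T @ D, A.T @ b)
```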