
    Regularization of Linear Ill-posed Problems by the Augmented Lagrangian Method and Variational Inequalities

    We study the application of the Augmented Lagrangian Method to the solution of linear ill-posed problems. Previously, linear convergence rates with respect to the Bregman distance have been derived under the classical assumption of a standard source condition. Using the method of variational inequalities, we extend these results in this paper to convergence rates of lower order, both for the case of an a priori parameter choice and an a posteriori choice based on Morozov's discrepancy principle. In addition, our approach allows the derivation of convergence rates with respect to distance measures different from the Bregman distance. As a particular application, we consider sparsity promoting regularization, where we derive a range of convergence rates with respect to the norm under the assumption of restricted injectivity in conjunction with generalized source conditions of Hölder type.
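    To make the setting concrete, here is a minimal sketch of the classical augmented Lagrangian iteration for the sparsity-promoting case the abstract mentions, min ||x||_1 s.t. Ax = y^δ, with the inner subproblem solved by proximal gradient (ISTA) steps and an a posteriori stop via Morozov's discrepancy principle. This is an illustration of the standard method, not the paper's analysis; A, y_delta, delta, rho, and tau are hypothetical inputs.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def augmented_lagrangian_l1(A, y_delta, delta, rho=1.0, tau=1.1,
                            inner_iters=200, outer_iters=50):
    """Augmented Lagrangian for  min ||x||_1  s.t.  Ax = y_delta,
    stopped by Morozov's discrepancy principle ||Ax - y_delta|| <= tau*delta."""
    m, n = A.shape
    x = np.zeros(n)
    mu = np.zeros(m)                      # Lagrange multiplier estimate
    L = rho * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    for _ in range(outer_iters):
        # x-update: minimize ||x||_1 + (rho/2)||Ax - y_delta + mu/rho||^2
        # inexactly, via proximal gradient (ISTA) inner iterations
        b = y_delta - mu / rho
        for _ in range(inner_iters):
            grad = rho * A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, 1.0 / L)
        # multiplier update
        residual = A @ x - y_delta
        mu = mu + rho * residual
        # a posteriori stopping: Morozov's discrepancy principle
        if np.linalg.norm(residual) <= tau * delta:
            break
    return x
```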

    A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

    We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of them.
    Comment: This paper consists of 54 pages with 7 tables and 12 figures.
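    The dual smoothing ingredient such frameworks build on can be seen on a small example. The sketch below (an illustration of the general technique, not the paper's framework) treats min ||x||_1 s.t. Ax = b: a quadratic prox-term around a center point x_c makes the dual differentiable with Lipschitz gradient, so plain gradient ascent applies. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_dual_ascent(A, b, gamma=0.1, iters=500):
    """Dual smoothing for  min ||x||_1  s.t.  Ax = b.
    The smoothed dual g(lam) = min_x ||x||_1 + lam.(Ax - b) + (gamma/2)||x - x_c||^2
    is differentiable with (||A||^2 / gamma)-Lipschitz gradient; gamma trades
    approximation accuracy for speed."""
    m, n = A.shape
    x_c = np.zeros(n)                       # center point of the prox-term
    lam = np.zeros(m)
    step = gamma / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        # the inner minimizer has a closed form: a soft-thresholding step
        x = soft_threshold(x_c - A.T @ lam / gamma, 1.0 / gamma)
        lam = lam + step * (A @ x - b)      # gradient of the smoothed dual
    return x, lam
```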

    Lagrange optimality system for a class of nonsmooth convex optimization

    In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems, and establish its connections with the standard optimality condition and the saddle-point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms. We apply a linear Newton method to the Lagrange optimality system to obtain a novel algorithm applicable to a variety of nonsmooth convex optimization problems arising in practical applications. Under suitable conditions, we prove the nonsingularity of the Newton system and the local convergence of the algorithm.
    Comment: 19 pages.
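    The paper's linear Newton method targets the Lagrange optimality system itself; as a stand-in illustration of a Newton-type iteration on a nonsmooth optimality system, the sketch below runs a semismooth Newton method on the proximal residual of the lasso problem. This is not the authors' algorithm; A, b, mu, and t are hypothetical, and nonsingularity of the Newton matrix is simply assumed, echoing the "suitable conditions" of the abstract.

```python
import numpy as np

def semismooth_newton_lasso(A, b, mu, t=None, iters=30, tol=1e-10):
    """Semismooth Newton on the nonsmooth optimality system
        F(x) = x - S_{t*mu}(x - t * A.T(Ax - b)) = 0,
    where S is soft-thresholding; F(x) = 0 iff x solves
        min 0.5||Ax - b||^2 + mu * ||x||_1."""
    m, n = A.shape
    if t is None:
        t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    AtA = A.T @ A
    for _ in range(iters):
        u = x - t * (AtA @ x - A.T @ b)
        Fx = x - np.sign(u) * np.maximum(np.abs(u) - t * mu, 0.0)
        if np.linalg.norm(Fx) < tol:
            break
        # one element of the generalized Jacobian of F:
        #   J = I - D (I - t*A.T A),  D = diag(1{|u_i| > t*mu})
        d = (np.abs(u) > t * mu).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - t * AtA)
        x = x + np.linalg.solve(J, -Fx)     # assumed nonsingular here
    return x
```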

    Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems

    This article presents a summarizing view of differential-algebraic equations (DAEs) and analyzes how new application fields and the corresponding mathematical models lead to innovations both in theory and in numerical analysis for this problem class. Recent numerical methods for nonsmooth dynamical systems subject to unilateral contact and friction illustrate the topicality of this development.
    Comment: Preprint of book chapter.

    Nonconvex Generalization of ADMM for Nonlinear Equality Constrained Problems

    The ever-increasing demand for efficient and distributed optimization algorithms for large-scale data has led to the growing popularity of the Alternating Direction Method of Multipliers (ADMM). However, although the use of ADMM to solve linear equality constrained problems is well understood, a generic framework for solving problems with nonlinear equality constraints, which are common in practical applications (e.g., spherical constraints), has been lacking. To address this problem, we propose a new generic ADMM framework for handling nonlinear equality constraints, neADMM. After introducing the generalized problem formulation and the neADMM algorithm, we discuss the convergence properties of neADMM, along with its sublinear convergence rate o(1/k), where k is the number of iterations. Next, two important applications of neADMM are considered, and the paper concludes with extensive experiments on several synthetic and real-world datasets demonstrating the convergence and effectiveness of neADMM compared to existing state-of-the-art methods.
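    The neADMM updates themselves are not reproduced here; the sketch below only conveys the general shape of ADMM-style splitting with a nonlinear coupling h(x) = z, where the x-subproblem loses its closed form and is solved inexactly by gradient steps. All callables (f_grad, h, h_jac, prox_g) and parameters are assumptions for illustration; for instance, an elementwise h(x) = x*x would encode a spherical-type constraint.

```python
import numpy as np

def neadmm_sketch(f_grad, h, h_jac, prox_g, x0, z0,
                  rho=1.0, step=0.01, inner=20, outer=100):
    """ADMM-style iteration for  min f(x) + g(z)  s.t.  h(x) = z
    with a nonlinear map h. The x-subproblem has no closed form, so it
    is solved inexactly by a few gradient steps on the augmented term."""
    x, z = x0.copy(), z0.copy()
    u = np.zeros_like(z0)                   # scaled dual variable
    for _ in range(outer):
        # x-update: gradient steps on f(x) + (rho/2)||h(x) - z + u||^2
        for _ in range(inner):
            r = h(x) - z + u
            x = x - step * (f_grad(x) + rho * h_jac(x).T @ r)
        # z-update: proximal step on g at the shifted point
        z = prox_g(h(x) + u, rho)
        # dual update driven by the nonlinear residual
        u = u + h(x) - z
    return x, z
```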

    An Extragradient-Based Alternating Direction Method for Convex Minimization

    In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. Classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing, and other fields have the structure that one of the two functions has an easy proximal mapping while the other is smooth and convex but does not have an easy proximal mapping; the classical alternating direction methods therefore cannot be applied. To deal with this difficulty, we propose an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an ϵ-optimal solution within O(1/ϵ) iterations. We apply the proposed method to solve a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well when solving the test problems. We also test the performance of the proposed method on the lasso problem arising from statistics and compare the results with several existing efficient solvers for this problem; the results are very encouraging indeed.
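    To see the mechanics, the sketch below specializes to the consensus constraint x = y (the paper treats general linear linking constraints, so this is only the flavor, not the authors' exact scheme): f keeps its exact proximal step, while the smooth but prox-unfriendly g, e.g. a logistic loss as in fused logistic regression, is handled by an extragradient prediction-correction step. prox_f, g_grad, and the step sizes are illustrative assumptions.

```python
import numpy as np

def extragradient_adm(prox_f, g_grad, x0, rho=1.0, tau=0.1, iters=300):
    """Alternating direction sketch for  min f(x) + g(y)  s.t.  x = y,
    where f has an easy proximal map and g is smooth but prox-unfriendly.
    The y-subproblem is replaced by an extragradient (predict-correct)
    step, needing only two gradient evaluations of g per iteration."""
    x = x0.copy()
    y = x0.copy()
    u = np.zeros_like(x0)                   # scaled multiplier
    for _ in range(iters):
        # x-update: exact proximal step on f
        x = prox_f(y - u, rho)
        # y-update via extragradient on  g(y) + (rho/2)||x - y + u||^2
        grad = lambda yy: g_grad(yy) - rho * (x - yy + u)
        y_bar = y - tau * grad(y)           # prediction
        y = y - tau * grad(y_bar)           # correction
        # multiplier update
        u = u + x - y
    return x, y
```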