
    Metric Subregularity and the Proximal Point Method

    We examine the linear convergence rates of variants of the proximal point method for finding zeros of maximal monotone operators. We begin by showing that metric subregularity is sufficient for linear convergence to a zero of a maximal monotone operator. This result is then generalized to obtain convergence rates for the problem of finding a common zero of multiple monotone operators by considering randomized and averaged proximal methods. Comment: 14 pages.
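    As a rough illustration of the iteration studied here, a minimal numerical sketch of the proximal point method is given below. The linear operator T(x) = Ax with A symmetric positive semidefinite is an assumed toy example (so the resolvent reduces to a linear solve), not the general maximal monotone setting of the paper.

```python
# Minimal sketch of the proximal point iteration x_{k+1} = (I + lam*T)^{-1}(x_k)
# for a maximal monotone operator T.  Here T(x) = A @ x with A symmetric positive
# semidefinite is an illustrative assumption, so the resolvent is a linear solve.
import numpy as np

def proximal_point(A, x0, lam=1.0, iters=50):
    """Run the proximal point method for T(x) = A @ x starting from x0."""
    n = len(x0)
    resolvent = np.linalg.inv(np.eye(n) + lam * A)  # (I + lam*T)^{-1}
    x = x0.copy()
    history = [np.linalg.norm(A @ x)]               # residual ||T(x_k)||
    for _ in range(iters):
        x = resolvent @ x
        history.append(np.linalg.norm(A @ x))
    return x, history

# Example: A is PSD but singular, so the zero set of T is a whole subspace;
# this linear T is metrically subregular at its zeros, and the residual
# ||T(x_k)|| indeed decays linearly (here by a factor 1/3 per step).
A = np.array([[2.0, 0.0], [0.0, 0.0]])
x, hist = proximal_point(A, np.array([1.0, 1.0]))
print(hist[:5])
```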

    Optimization methods and stability of inclusions in Banach spaces

    Our paper deals with the interrelation of optimization methods and Lipschitz stability of multifunctions in arbitrary Banach spaces. Roughly speaking, we show that linear convergence of several first-order methods and Lipschitz stability mean the same. In particular, we characterize calmness and the Aubin property by uniformly (with respect to certain starting points) linear convergence of descent methods and approximate projection methods. In this way we obtain, e.g., solution methods (for solving equations or variational problems) which require calmness only. The relations of these methods to several known basic algorithms are discussed, and errors in the subroutines as well as deformations of the given mappings are permitted. We also recall how such deformations are related to standard algorithms like barrier, penalty or regularization methods in optimization.
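    For reference, the two stability notions named in the abstract can be written out as follows; these are the standard formulations for a multifunction between Banach spaces, stated here in generic notation rather than the paper's own.

```latex
% Standard formulations of calmness and the Aubin property for a
% multifunction S : Y \rightrightarrows X between Banach spaces.
\begin{itemize}
  \item \textbf{Calmness} at $(\bar y,\bar x)\in\operatorname{gph}S$: there exist $L\ge 0$
        and neighbourhoods $U$ of $\bar x$, $V$ of $\bar y$ such that
        \[
          S(y)\cap U \subset S(\bar y) + L\,\|y-\bar y\|\,B_X \quad \text{for all } y\in V .
        \]
  \item \textbf{Aubin property} (pseudo-Lipschitz continuity) at $(\bar y,\bar x)$:
        there exist $L\ge 0$ and neighbourhoods $U$ of $\bar x$, $V$ of $\bar y$ such that
        \[
          S(y)\cap U \subset S(y') + L\,\|y-y'\|\,B_X \quad \text{for all } y,y'\in V ,
        \]
        where $B_X$ denotes the closed unit ball of $X$.
\end{itemize}
```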

    Variational Analysis Down Under Open Problem Session

    We state the problems discussed in the open problem session at the Variational Analysis Down Under conference, held in honour of Prof. Asen Dontchev on 19–21 February 2018 at Federation University Australia.

    Generalized Newton's Method based on Graphical Derivatives

    This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness assumption, as illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and B-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
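    The sketch below shows a generic Newton-type iteration for a nonsmooth equation H(x) = 0, where each step solves a linear model built from one chosen element of a set-valued derivative. It is only meant to convey the flavour of such methods: the piecewise-linear map H and the particular derivative selection at kinks are assumptions for this toy example, not the graphical-derivative scheme developed in the paper.

```python
# Generic Newton-type iteration for a nonsmooth equation H(x) = 0: each step
# solves a linear model built from one element of a set-valued derivative of H.
# Illustrative sketch only; NOT the authors' graphical-derivative algorithm.
import numpy as np

def H(x, b):
    # Nonsmooth, piecewise-linear map: H(x) = 2x + |x| - b (componentwise).
    return 2.0 * x + np.abs(x) - b

def derivative_element(x):
    # One admissible element of a generalized derivative of H at x:
    # a diagonal matrix with entries 2 + sign(x_i); at kinks (x_i = 0)
    # np.sign gives 0, i.e. we simply pick the value 2.0.
    return np.diag(2.0 + np.sign(x))

def nonsmooth_newton(b, x0, tol=1e-10, max_iter=25):
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = H(x, b)
        if np.linalg.norm(r) <= tol:
            break
        x = x - np.linalg.solve(derivative_element(x), r)
    return x

# Roots of 2x + |x| = b, componentwise: expect approximately [1., -1.].
print(nonsmooth_newton(np.array([3.0, -1.0]), np.array([5.0, 5.0])))
```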

    Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods

    In this paper, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo-Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is 1/2. The Luo-Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, their objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation (SCAD) regularization or minimax concave penalty (MCP) regularization, and the logistic regression problem with ℓ1 regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery. Comment: The paper is accepted for publication in Foundations of Computational Mathematics: https://link.springer.com/article/10.1007/s10208-017-9366-8. In this update, we fill in more details in the proof of Theorem 4.1 concerning the nonemptiness of the projection onto the set of stationary points.
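    As one concrete instance of the models mentioned above, the following sketch runs the proximal gradient method (ISTA) with a constant step size on an ℓ1-regularized least-squares problem, a setting where a KL exponent of 1/2 yields local linear convergence. The random problem data and the parameter mu = 0.1 are illustrative assumptions, not taken from the paper.

```python
# Sketch: proximal gradient (ISTA) with a constant step size for the
# l1-regularized least-squares problem  min_x 0.5*||Ax - y||^2 + mu*||x||_1.
# Problem data below are random and purely illustrative.
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def proximal_gradient(A, y, mu, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # constant step 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * mu)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = 1.0                                # sparse "ground truth" signal
y = A @ x_true
print(np.count_nonzero(proximal_gradient(A, y, mu=0.1)))
```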