
    Regularized Mixed Variational-Like Inequalities

    We use the auxiliary principle technique, coupled with an iterative regularization method, to suggest and analyze some new iterative methods for solving mixed variational-like inequalities. The convergence analysis of these new iterative schemes is carried out under suitable conditions. Some special cases are also discussed. Our method of proof is very simple compared with other methods, and our results represent a significant refinement of previously known results.
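    For orientation, a mixed variational-like inequality is commonly stated in the following standard form; this is the textbook formulation rather than the paper's exact regularized setting, and the operator T, bifunction η, and nondifferentiable term φ are the usual assumed ingredients:

    ```latex
    % Standard form of a mixed variational-like inequality (assumed formulation):
    % find $u \in K$ such that
    \[
      \langle T u,\ \eta(v, u) \rangle + \varphi(v) - \varphi(u) \;\ge\; 0
      \qquad \text{for all } v \in K,
    \]
    % where $K$ is a convex subset of a Hilbert space, $T$ is a nonlinear
    % operator, $\eta(\cdot,\cdot)$ is a given bifunction, and $\varphi$ is a
    % (possibly nondifferentiable) convex function. For $\eta(v,u) = v - u$
    % this reduces to the classical mixed variational inequality.
    ```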

    Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods

    In this paper, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo-Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is 1/2. The Luo-Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, the objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation (SCAD) or minimax concave penalty (MCP) regularization, and the logistic regression problem with ℓ1 regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step-sizes for some specific models that arise in sparse recovery.

    Comment: The paper is accepted for publication in Foundations of Computational Mathematics: https://link.springer.com/article/10.1007/s10208-017-9366-8. In this update, we fill in more details in the proof of Theorem 4.1 concerning the nonemptiness of the projection onto the set of stationary points.
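    As an illustration of the kind of model covered by these results, here is a minimal sketch of the proximal gradient method applied to ℓ1-regularized logistic regression, one of the problems the abstract identifies as having KL exponent 1/2 (and hence local linear convergence). The step size rule, synthetic data, and iteration count below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def prox_grad_l1_logreg(A, y, lam, step=None, iters=500):
        """Proximal gradient on f(w) = sum_i log(1 + exp(-y_i a_i^T w)) + lam*||w||_1.

        A: (n, d) data matrix, y: labels in {-1, +1}, lam: l1 weight.
        Uses a constant step 1/L, where L bounds the Lipschitz constant
        of the smooth part's gradient (L <= ||A||_2^2 / 4 for logistic loss).
        """
        n, d = A.shape
        if step is None:
            step = 4.0 / (np.linalg.norm(A, 2) ** 2)
        w = np.zeros(d)
        for _ in range(iters):
            margins = y * (A @ w)
            # gradient of the logistic loss at w
            grad = -A.T @ (y / (1.0 + np.exp(margins)))
            # forward (gradient) step, then backward (prox) step
            w = soft_threshold(w - step * grad, step * lam)
        return w

    # Illustrative usage on synthetic data (assumed, not from the paper):
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    w_true = np.zeros(50); w_true[:5] = 1.0
    y = np.sign(A @ w_true + 0.1 * rng.standard_normal(200))
    w_hat = prox_grad_l1_logreg(A, y, lam=1.0)
    print("nonzeros:", np.count_nonzero(np.abs(w_hat) > 1e-8))
    ```

    A KL exponent of 1/2 for this objective is exactly what makes the distance of the iterates to the solution set shrink geometrically in a neighborhood of a stationary point.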

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: it contains the scientific program, both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Optimization and Applications

    Proceedings of a workshop devoted to optimization problems, their theory and resolution, and above all their applications. The topics covered include the existence and stability of solutions; the design, analysis, development, and implementation of algorithms; and applications in mechanics, telecommunications, medicine, and operations research.

    Formulation, analysis and solution algorithms for a model of gradient plasticity within a discontinuous Galerkin framework

    Includes bibliographical references (p. [221]-239).

    An investigation of a model of gradient plasticity in which the classical von Mises yield function is augmented by a term involving the Laplacian of the equivalent plastic strain is presented. The theory is developed within the framework of non-smooth convex analysis by exploiting the equivalence between the primal and dual expressions of the plastic deformation evolution relations. The nonlocal plastic evolution relations for the case of gradient plasticity are approximated using a discontinuous Galerkin finite element formulation. Both the small- and finite-strain theories are investigated. Considerable attention is focused on developing a firm mathematical foundation for the model of gradient plasticity restricted to the infinitesimal-strain regime. The key contributions arising from the analysis of the classical plasticity problem and the model of gradient plasticity include a demonstration of the consistency of the variational formulation, together with analyses of both the continuous-in-time and fully discrete approximations; the error estimates obtained correspond to those for the conventional Galerkin approximations of the classical problem. The analysis focuses on those properties of the problem that ensure the existence of a unique solution for both hardening and softening problems. It is well known that classical finite element simulations of softening problems are pathologically dependent on the discretisation.
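    A schematic of the kind of gradient-augmented yield condition described here may help orient the reader; the precise constants, signs, and notation in the thesis may differ, so this is an assumed illustrative form (in the spirit of Aifantis-type gradient plasticity), not a quotation:

    ```latex
    % Schematic gradient-augmented von Mises yield function (illustrative):
    \[
      f(\boldsymbol{\sigma}, \bar{\varepsilon}^p)
      = \lVert \operatorname{dev}\boldsymbol{\sigma} \rVert
        - \sqrt{\tfrac{2}{3}}\,
          \bigl( \sigma_Y + H\,\bar{\varepsilon}^p
                 - \ell^{2}\, \nabla^{2} \bar{\varepsilon}^p \bigr) \;\le\; 0,
    \]
    % where $\operatorname{dev}\boldsymbol{\sigma}$ is the deviatoric stress,
    % $\bar{\varepsilon}^p$ the equivalent plastic strain, $\sigma_Y$ the
    % initial yield stress, $H$ a hardening (or, for $H<0$, softening)
    % modulus, and $\ell$ an internal length scale weighting the Laplacian
    % term that makes the model nonlocal.
    ```

    It is this Laplacian term that regularizes softening behaviour and motivates the discontinuous Galerkin treatment, since the equivalent plastic strain now requires more smoothness across element boundaries than classical plasticity does.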

    A proximal method for composite minimization

    We consider the minimization of functions that are compositions of prox-regular functions with smooth vector functions. A wide variety of important optimization problems can be formulated in this way. We describe a subproblem constructed from a linearized approximation to the objective and a regularization term, investigate the properties of local solutions of this subproblem, and show that they eventually identify a manifold containing the solution of the original problem. We propose an algorithmic framework based on this subproblem and prove a global convergence result.
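    Concretely, for a composite objective f(x) = h(c(x)) with h prox-regular and c smooth, a subproblem of the kind described typically takes the following prox-linear form; the regularization weight μ and the notation below are assumptions for illustration, not quoted from the paper:

    ```latex
    % Prox-linear subproblem for minimizing f(x) = h(c(x)) around x_k
    % (assumed notation; mu > 0 is a regularization/proximity parameter):
    \[
      d_k \in \arg\min_{d}\;
        h\bigl( c(x_k) + \nabla c(x_k)\, d \bigr)
        + \frac{\mu}{2} \lVert d \rVert^{2},
      \qquad x_{k+1} = x_k + d_k.
    \]
    % The first term linearizes the smooth inner map c inside h; the
    % quadratic term keeps the step d local, so the model stays accurate
    % near x_k.
    ```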