
    A Unified Approach to the Global Exactness of Penalty and Augmented Lagrangian Functions I: Parametric Exactness

    In this two-part study we develop a unified approach to the analysis of the global exactness of various penalty and augmented Lagrangian functions for finite-dimensional constrained optimization problems. This approach allows one to verify in a simple and straightforward manner whether a given penalty/augmented Lagrangian function is exact, i.e. whether the problem of unconstrained minimization of this function is equivalent (in some sense) to the original constrained problem, provided the penalty parameter is sufficiently large. Our approach is based on the so-called localization principle that reduces the study of global exactness to a local analysis of a chosen merit function near globally optimal solutions. In turn, such local analysis can usually be performed with the use of sufficient optimality conditions and constraint qualifications. In the first paper we introduce the concept of global parametric exactness and derive the localization principle in the parametric form. With the use of this version of the localization principle we recover existing simple necessary and sufficient conditions for the global exactness of linear penalty functions, and for the existence of augmented Lagrange multipliers of Rockafellar-Wets' augmented Lagrangian. Also, we obtain completely new necessary and sufficient conditions for the global exactness of general nonlinear penalty functions, and for the global exactness of a continuously differentiable penalty function for nonlinear second-order cone programming problems. We briefly discuss how one can construct a continuously differentiable exact penalty function for nonlinear semidefinite programming problems, as well.
    Comment: 34 pages. arXiv admin note: text overlap with arXiv:1710.0196
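
    For orientation, the classical nonsmooth l1 penalty illustrates the exactness property discussed above; this is only a schematic example, not the general merit functions analyzed in the paper, and the equality of minimizers below holds only under suitable assumptions (e.g. a constraint qualification and boundedness of the feasible set):

        F_c(x) = f(x) + c\Big( \sum_{i} \max\{0,\, g_i(x)\} + \sum_{j} |h_j(x)| \Big),
        \qquad
        c \ge \bar{c} \;\Longrightarrow\;
        \operatorname*{argmin}_{x} F_c(x) = \operatorname*{argmin}\{\, f(x) \;:\; g(x) \le 0,\ h(x) = 0 \,\}.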

    Augmented Lagrangian Functions for Cone Constrained Optimization: the Existence of Global Saddle Points and Exact Penalty Property

    In the article we present a general theory of augmented Lagrangian functions for cone constrained optimization problems that allows one to study almost all known augmented Lagrangians for cone constrained programs within a unified framework. We develop a new general method for proving the existence of global saddle points of augmented Lagrangian functions, called the localization principle. The localization principle unifies, generalizes and sharpens most of the known results on the existence of global saddle points and, in essence, reduces the problem of the existence of saddle points to a local analysis of optimality conditions. With the use of the localization principle we obtain the first necessary and sufficient conditions for the existence of a global saddle point of an augmented Lagrangian for cone constrained minimax problems via both second and first order optimality conditions. In the second part of the paper, we present a general approach to the construction of globally exact augmented Lagrangian functions. The general approach developed in this paper allowed us not only to sharpen most of the existing results on globally exact augmented Lagrangians, but also to construct the first globally exact augmented Lagrangian functions for equality constrained optimization problems, for nonlinear second order cone programs and for nonlinear semidefinite programs. These globally exact augmented Lagrangians can be utilized in order to design new superlinearly (or even quadratically) convergent optimization methods for cone constrained optimization problems.
    Comment: This is a preprint of an article published by Springer in Journal of Global Optimization (2018). The final authenticated version is available online at: http://dx.doi.org/10.1007/s10898-017-0603-
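
    For reference, in the simplest equality constrained case min f(x) subject to h(x) = 0, the classical Hestenes-Powell augmented Lagrangian and the global saddle point property read as follows (a schematic special case; the paper works with general cone constraints and general augmenting terms):

        L_c(x, \lambda) = f(x) + \langle \lambda, h(x) \rangle + \frac{c}{2}\, \|h(x)\|^2,
        \qquad
        L_c(x^*, \lambda) \;\le\; L_c(x^*, \lambda^*) \;\le\; L_c(x, \lambda^*)
        \quad \text{for all } x \text{ and } \lambda.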

    On a conjecture in second-order optimality conditions

    In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we deal with a conjecture formulated in [R. Andreani, J.M. Martinez, M.L. Schuverdt, "On second-order optimality conditions for nonlinear programming", Optimization, 56:529--542, 2007], which states that whenever a local minimizer of a nonlinear optimization problem fulfills the Mangasarian-Fromovitz Constraint Qualification and the rank of the set of gradients of active constraints increases by at most one in a neighborhood of the minimizer, a second-order optimality condition that depends on a single Lagrange multiplier is satisfied. This conjecture generalizes previous results obtained under a constant rank assumption or under a rank deficiency of at most one. In this paper we prove the conjecture under the additional assumption that the Jacobian matrix has a smooth singular value decomposition, which is weaker than previously considered assumptions. We also review previous literature related to the conjecture.
    Comment: Extended Technical Report
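
    The conjecture concerns a second-order condition attached to a single multiplier; schematically (our notation, not necessarily the exact formulation of the cited paper), with L the Lagrangian, \Lambda(x^*) the set of Lagrange multipliers and C(x^*) the critical cone at the local minimizer x^*, it asserts that

        \exists\, \lambda \in \Lambda(x^*): \qquad
        d^{\top} \nabla^2_{xx} L(x^*, \lambda)\, d \;\ge\; 0
        \quad \text{for all } d \in C(x^*).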

    Characterization of tilt stability via subgradient graphical derivative with applications to nonlinear programming

    This paper is devoted to the study of tilt stability in finite-dimensional optimization via the approach of using the subgradient graphical derivative. We establish a new characterization of tilt-stable local minimizers for a broad class of unconstrained optimization problems in terms of a uniform positive definiteness of the subgradient graphical derivative of the objective function around the point in question. By applying this result to nonlinear programming under the metric subregularity constraint qualification, we derive a second-order characterization and several new sufficient conditions for tilt stability. In particular, we show that each stationary point of a nonlinear programming problem satisfying the metric subregularity constraint qualification is a tilt-stable local minimizer if the classical strong second-order sufficient condition holds.
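
    For context, tilt stability (in the sense of Poliquin and Rockafellar) of a local minimizer \bar{x} of f means, roughly, that the solution of the tilted problem depends on the tilt parameter in a single-valued and Lipschitz way near zero; schematically, for some \gamma > 0,

        M_{\gamma}(v) = \operatorname*{argmin}\{\, f(x) - \langle v, x \rangle \;:\; \|x - \bar{x}\| \le \gamma \,\}
        \quad \text{is single-valued and Lipschitz near } v = 0, \ \ M_{\gamma}(0) = \{\bar{x}\}.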

    Optimality Conditions for Nonlinear Semidefinite Programming via Squared Slack Variables

    In this work, we derive second-order optimality conditions for nonlinear semidefinite programming (NSDP) problems by reformulating them as ordinary nonlinear programming problems using squared slack variables. We first consider the correspondence between Karush-Kuhn-Tucker points and regularity conditions for the general NSDP and its reformulation via slack variables. Then, we obtain a pair of "no-gap" second-order optimality conditions that are essentially equivalent to the ones already considered in the literature. We conclude with an analysis of some computational prospects of the squared slack variables approach for NSDP.
    Comment: 20 pages, 3 figures
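
    Schematically, and assuming G(x) is an m x m symmetric matrix, the squared slack variable reformulation replaces the conic constraint by a matrix equality:

        G(x) \succeq 0
        \quad \Longleftrightarrow \quad
        \exists\, Y \in \mathbb{R}^{m \times m}: \ G(x) - Y Y^{\top} = 0,
        \qquad \text{so} \qquad
        \min_{x} \{\, f(x) : G(x) \succeq 0 \,\}
        \ \leadsto \
        \min_{x, Y} \{\, f(x) : G(x) - Y Y^{\top} = 0 \,\}.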

    Constraint Identification and Algorithm Stabilization for Degenerate Nonlinear Programs

    In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraints may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
    Comment: 21 pages
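
    The distinction drawn here is the standard one (our paraphrase): with \Lambda(x^*) the set of Lagrange multipliers at the solution x^*, an active inequality constraint g_i(x^*) = 0 is

        \text{strongly active if } \lambda_i > 0 \text{ for some } \lambda \in \Lambda(x^*),
        \qquad
        \text{weakly active if } \lambda_i = 0 \text{ for every } \lambda \in \Lambda(x^*).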

    Existence of augmented Lagrange multipliers: reduction to exact penalty functions and localization principle

    In this article, we present new general results on the existence of augmented Lagrange multipliers. We define a penalty function associated with an augmented Lagrangian, and prove that, under a certain growth assumption on the augmenting function, an augmented Lagrange multiplier exists if and only if this penalty function is exact. We also develop a new general approach to the study of augmented Lagrange multipliers called the localization principle. The localization principle allows one to study the local behaviour of the augmented Lagrangian near globally optimal solutions of the initial optimization problem in order to prove the existence of augmented Lagrange multipliers.
    Comment: This is a slightly edited version of a preprint of an article published in Mathematical Programming. The final authenticated version is available online at: https://doi.org/10.1007/s10107-017-1122-y In this version, a mistake in the proof of Theorem 4 was corrected.
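
    Specialized to equality constraints with the quadratic augmenting term (and up to sign conventions; the paper treats general augmenting functions), a vector \bar{\lambda} is an augmented Lagrange multiplier when the augmented dual attains the primal optimal value at a finite penalty; schematically,

        \exists\, r \ge 0: \quad
        \inf_{x} \Big( f(x) + \langle \bar{\lambda}, h(x) \rangle + r\, \|h(x)\|^{2} \Big)
        \;=\; \min\{\, f(x) \;:\; h(x) = 0 \,\}.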

    Using Negative Curvature in Solving Nonlinear Programs

    Minimization methods that search along a curvilinear path composed of a non-ascent negative curvature direction in addition to the direction of steepest descent, dating back to the late 1970s, have been an effective approach to finding a stationary point of a function at which its Hessian is positive semidefinite. For constrained nonlinear programs arising from recent applications, the primary goal is to find a stationary point that satisfies the second-order necessary optimality conditions. Motivated by this, we generalize the approach of using negative curvature directions from unconstrained optimization to constrained nonlinear programs. We focus on equality constrained problems and prove that our proposed negative curvature method is guaranteed to converge to a stationary point satisfying the second-order necessary conditions. A possible way to extend our proposed negative curvature method to general nonlinear programs is also briefly discussed.
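
    A minimal unconstrained sketch of the curvilinear-path idea referred to above, x(alpha) = x + alpha^2 s + alpha d with s a descent direction and d a non-ascent negative curvature direction; the step rule and toy function below are illustrative assumptions, not the paper's equality constrained method:

        import numpy as np

        def curvilinear_step(f, grad, hess, x, alphas=(1.0, 0.5, 0.25, 0.125)):
            """One step along the curvilinear path x(a) = x + a**2 * s + a * d."""
            g = grad(x)
            H = hess(x)
            s = -g                              # steepest-descent direction
            w, V = np.linalg.eigh(H)            # eigenvalues in ascending order
            d = np.zeros_like(x)
            if w[0] < 0:                        # negative curvature is present
                d = V[:, 0]                     # direction of most negative curvature
                if g @ d > 0:                   # flip sign so d is non-ascent
                    d = -d
            fx = f(x)
            for a in alphas:                    # simple backtracking along the path
                x_new = x + a**2 * s + a * d
                if f(x_new) < fx:
                    return x_new
            return x

        # toy usage: f has a saddle point at the origin, where the gradient vanishes,
        # so the negative curvature direction is what drives the iterate away from it
        f = lambda x: x[0]**2 - x[1]**2
        grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
        hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])
        x_next = curvilinear_step(f, grad, hess, np.array([0.0, 1e-3]))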

    On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods

    We analyze sequences generated by interior point methods (IPMs) in convex and nonconvex settings. We prove that reducing the primal infeasibility at the same rate as the barrier parameter $\mu$ ensures that the Lagrange multiplier sequence remains bounded, provided the limit point of the primal sequence has a Lagrange multiplier. This result does not require constraint qualifications. We also guarantee that the IPM finds a solution satisfying strict complementarity if one exists. On the other hand, if the primal feasibility is reduced too slowly, then the algorithm converges to a point of minimal complementarity; if the primal feasibility is reduced too quickly and the set of Lagrange multipliers is unbounded, then the norm of the Lagrange multipliers tends to infinity. Our theory has important implications for the design of IPMs. Specifically, we show that IPOPT, an algorithm that does not carefully control primal feasibility, has practical issues with the dual multiplier values growing unnecessarily large. Conversely, the one-phase IPM of Hinder and Ye (2018), an algorithm that controls primal feasibility as our theory suggests, has no such issue.
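
    One common way to write the systems involved (signs and conventions may differ from the paper's): for min f(x) subject to c(x) >= 0, with slacks s and multipliers \lambda, an infeasible primal-dual IPM approximately solves, for a decreasing barrier parameter \mu > 0,

        \nabla f(x) - \nabla c(x)^{\top} \lambda = 0, \qquad
        c(x) - s = r_{p}, \qquad
        s_{i} \lambda_{i} = \mu \ \ (i = 1, \dots, m), \qquad
        s > 0, \ \lambda > 0,

    and the regime analyzed above keeps the primal infeasibility proportional to the barrier parameter, \|r_{p}\| = \Theta(\mu), as \mu \to 0.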

    Exact augmented Lagrangian functions for nonlinear semidefinite programming

    In this paper, we study augmented Lagrangian functions for nonlinear semidefinite programming (NSDP) problems with exactness properties. The term exact is used in the sense that the penalty parameter can be chosen so that a single minimization of the augmented Lagrangian recovers a solution of the original problem. This leads to reformulations of NSDP problems as unconstrained nonlinear programming problems. Here, we first establish a unified framework for constructing these exact functions, generalizing Di Pillo and Lucidi's work from 1996, which was aimed at solving nonlinear programming problems. Then, through our framework, we propose a practical augmented Lagrangian function for NSDP, proving that it is continuously differentiable and exact under the so-called nondegeneracy condition. We also present some preliminary numerical experiments.
    Comment: 26 pages. Added journal reference
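
    Loosely, and only as a schematic restatement of the exactness property described above (not the paper's theorem), global exactness of an augmented Lagrangian L_c minimized jointly in the primal-dual pair means

        \exists\, \bar{c} > 0: \quad c \ge \bar{c} \;\Longrightarrow\;
        \operatorname*{argmin}_{(x, \lambda)} L_{c}(x, \lambda)
        \;=\; \{\, (x^{*}, \lambda^{*}) : x^{*} \text{ solves the NSDP and } \lambda^{*} \text{ is an associated multiplier} \,\}.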