
    A Primal-Dual Augmented Lagrangian

    Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
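    A minimal sketch of the joint primal-dual minimization idea, assuming one common form of a primal-dual augmented Lagrangian (it may differ in detail from the paper's function); the toy problem, parameter values, and the use of scipy are illustrative, not the paper's implementation.

```python
# Sketch of joint primal-dual minimization; M is one common form of a
# primal-dual augmented Lagrangian (assumed, may differ from the paper's):
#   M(x, y) = f(x) - c(x)^T y_e + (1/2mu)||c(x)||^2
#           + (1/2mu)||c(x) + mu*(y - y_e)||^2
import numpy as np
from scipy.optimize import minimize

def f(x):                         # toy objective (not from the paper)
    return x[0]**2 + x[1]**2

def c(x):                         # single equality constraint c(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def M(z, y_e, mu):
    x, y = z[:2], z[2:]
    cx = c(x)
    return (f(x) - cx @ y_e
            + 0.5 / mu * cx @ cx
            + 0.5 / mu * np.sum((cx + mu * (y - y_e))**2))

y_e, mu = np.zeros(1), 0.1        # current dual estimate, penalty parameter
z0 = np.zeros(3)                  # initial (x1, x2, y)
# Explicit bounds on the dual variable regularize the subproblem.
bounds = [(None, None), (None, None), (-100.0, 100.0)]
res = minimize(M, z0, args=(y_e, mu), method="L-BFGS-B", bounds=bounds)
x_sol, y_sol = res.x[:2], res.x[2:]
# For this single subproblem x ~ (0.45, 0.45) and y ~ 0.91; outer updates of
# y_e would drive them toward the true solution (0.5, 0.5) with multiplier 1.
print(x_sol, y_sol)
```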

    Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching

    We consider solving equality-constrained nonlinear, nonconvex optimization problems. This class of problems appears widely in a variety of applications in machine learning and engineering, ranging from constrained deep neural networks, to optimal control, to PDE-constrained optimization. We develop an adaptive inexact Newton method for this problem class. In each iteration, we solve the Lagrangian Newton system inexactly via a randomized iterative sketching solver and select a suitable stepsize by performing a line search on an exact augmented Lagrangian merit function. When equipped with suitable sketching matrices, the randomized solvers have advantages over deterministic linear system solvers in that they significantly reduce the per-iteration flop count and storage cost. Our method adaptively controls the accuracy of the randomized solver and the penalty parameters of the exact augmented Lagrangian to ensure that the inexact Newton direction is a descent direction of the exact augmented Lagrangian. This allows us to establish global almost-sure convergence. We also show that a unit stepsize is admissible locally, so that our method exhibits local linear convergence. Furthermore, we prove that the linear convergence can be strengthened to superlinear convergence if we gradually sharpen the adaptive accuracy condition on the randomized solver. We demonstrate the superior performance of our method on benchmark nonlinear problems from the CUTEst test set, on constrained logistic regression with data from LIBSVM, and on a PDE-constrained problem.
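    To illustrate the randomized-solver ingredient, here is a sketch of a block-Kaczmarz (sketch-and-project) loop for a symmetric system K d = r, with a relative-residual stopping test standing in for the paper's adaptive accuracy condition; the block size, sketch distribution, and test matrix are placeholder choices, not the paper's.

```python
# Sketch of a block-Kaczmarz (sketch-and-project) solver for K d = r with a
# relative-residual test standing in for the paper's adaptive accuracy rule.
import numpy as np

def sketched_solve(K, r, tol, rng, block=4, max_iter=5000):
    n = K.shape[0]
    d = np.zeros(n)
    for _ in range(max_iter):
        idx = rng.choice(n, size=block, replace=False)   # random row sketch
        Ks, rs = K[idx], r[idx]
        # Project d onto the affine set {z : Ks z = rs}.
        step = np.linalg.lstsq(Ks @ Ks.T, rs - Ks @ d, rcond=None)[0]
        d += Ks.T @ step
        if np.linalg.norm(r - K @ d) <= tol * np.linalg.norm(r):
            break                                        # accuracy reached
    return d

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K = A + A.T + 60.0 * np.eye(50)      # symmetric, well-conditioned stand-in
r = rng.standard_normal(50)
d = sketched_solve(K, r, tol=1e-8, rng=rng)
print(np.linalg.norm(K @ d - r))     # small residual
```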

    Local convergence of a sequential quadratic programming method for a class of nonsmooth nonconvex objectives

    A sequential quadratic programming (SQP) algorithm is designed for nonsmooth optimization problems with upper-C^2 objective functions. Upper-C^2 functions are locally equivalent to difference-of-convex (DC) functions with smooth convex parts. They arise naturally in many applications, such as certain classes of solutions to parametric optimization problems (e.g., the recourse function in stochastic programming) and projections onto closed sets. The proposed algorithm conducts a line search and adopts an exact penalty merit function. The potential inconsistency due to the linearization of constraints is addressed through relaxation, similar to that of Sℓ1QP. We show that the algorithm is globally convergent under reasonable assumptions. Moreover, we study the local convergence behavior of the algorithm under the additional assumption of the Kurdyka-Łojasiewicz (KL) property, which has been applied to many nonsmooth optimization problems. Due to the nonconvex nature of the problems, a special potential function is used to analyze local convergence. We show that, under acceptable assumptions, upper bounds on the local convergence rate can be proven. Additionally, we show that for a large number of optimization problems with upper-C^2 objectives, the corresponding potential functions are indeed KL functions. Numerical experiments are performed with a power grid optimization problem that is consistent with the assumptions and analysis in this paper.
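    A sketch of the globalization device the abstract describes: an ℓ1 exact-penalty merit function with Armijo backtracking. The helpers merit and armijo_line_search, the parameter values, and the predicted-decrease argument pred are illustrative stand-ins, not the paper's exact construction.

```python
# Sketch of an l1 exact-penalty merit function with Armijo backtracking;
# merit, rho, and pred are illustrative, not the paper's exact construction.
import numpy as np

def merit(x, f, c, rho):
    """phi(x) = f(x) + rho * ||c(x)||_1 for equality constraints c(x) = 0."""
    return f(x) + rho * np.linalg.norm(c(x), 1)

def armijo_line_search(x, p, f, c, rho, pred, eta=1e-4, beta=0.5):
    """Backtrack until phi(x + t p) <= phi(x) - eta * t * pred, where
    pred > 0 is the model's predicted decrease for the SQP step p."""
    phi0, t = merit(x, f, c, rho), 1.0
    while merit(x + t * p, f, c, rho) > phi0 - eta * t * pred:
        t *= beta
        if t < 1e-12:
            raise RuntimeError("line search failed")
    return t

f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x, p = np.zeros(2), np.array([0.5, 0.5])      # p: a step toward feasibility
print(armijo_line_search(x, p, f, c, rho=10.0, pred=9.0))  # accepts t = 1.0
```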

    A Preconditioned Inexact Active-Set Method for Large-Scale Nonlinear Optimal Control Problems

    We provide a global convergence proof of the recently proposed sequential homotopy method with an inexact Krylov--semismooth-Newton method employed as a local solver. The resulting method constitutes an active-set method in function space. After discretization, it allows for the efficient application of Krylov-subspace methods. For a certain class of optimal control problems with PDE constraints, in which the control enters the Lagrangian only linearly, we propose and analyze an efficient, parallelizable, symmetric positive definite preconditioner based on a double Schur complement approach. We conclude with numerical results for an ill-conditioned and highly nonlinear benchmark optimization problem with elliptic partial differential equations and control bounds. The resulting method is faster than using direct linear algebra for the 2D benchmark and allows for the parallel solution of large 3D problems.
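    To make the preconditioning idea concrete, here is a sketch of preconditioned MINRES on a symmetric saddle-point system with the textbook single-Schur-complement block-diagonal preconditioner; the paper's double Schur complement construction is more elaborate, so this is a simplified stand-in under assumed block structure.

```python
# Sketch: preconditioned MINRES on a symmetric saddle-point system
#   [A  B^T] [u]   [b1]
#   [B   0 ] [p] = [b2]
# with the SPD block-diagonal preconditioner diag(A, S), S = B A^{-1} B^T
# (a single Schur complement; the paper's double variant is more elaborate).
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)
n, m = 40, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                 # SPD (1,1) block
B = rng.standard_normal((m, n))             # full row rank w.h.p.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

A_inv = np.linalg.inv(A)                    # small demo; use solves in practice
S_inv = np.linalg.inv(B @ A_inv @ B.T)      # Schur complement inverse

def apply_prec(v):                          # v -> diag(A, S)^{-1} v
    return np.concatenate([A_inv @ v[:n], S_inv @ v[n:]])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=P)
print(info, np.linalg.norm(K @ x - rhs))
```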

    A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

    In this paper, the robust sequential quadratic programming method of [1] for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. The stochastic line search scheme of [2] is employed to globalize the steps. We show that, in the case where the algorithm fails to terminate in a finite number of iterations, the sequence of iterates converges almost surely to a Karush-Kuhn-Tucker point under the extended Mangasarian-Fromovitz constraint qualification. We also show that, with a specific sampling method, the probability of the penalty parameter approaching infinity is zero. Encouraging numerical results are reported.
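    A sketch of a stochastic Armijo backtracking step in the spirit of the scheme cited as [2], using common random numbers so the comparison at x and x + t*p is not dominated by sampling noise; the sample size, acceptance test, and toy objective are simplified placeholders, not the paper's exact rules.

```python
# Sketch of a stochastic Armijo backtracking step; common random numbers are
# used so the comparison at x and x + t*p is not dominated by sampling noise.
import numpy as np

rng = np.random.default_rng(2)
MU = np.array([1.0, -2.0])                       # minimizer of E[F(x, xi)]

def f_est(x, n_samples=256):
    """Monte-Carlo estimate of E[||x - xi||^2], xi ~ N(MU, I)."""
    xi = MU + rng.standard_normal((n_samples, 2))
    return np.mean(np.sum((x - xi) ** 2, axis=1))

def stochastic_armijo(x, p, g, eta=1e-4, beta=0.5):
    t = 1.0
    while True:
        state = rng.bit_generator.state          # freeze the sample
        phi0 = f_est(x)
        rng.bit_generator.state = state          # replay the same xi draws
        phit = f_est(x + t * p)
        if phit <= phi0 + eta * t * (g @ p) or t < 1e-8:
            return t
        t *= beta

x = np.zeros(2)
g = 2 * (x - MU)                 # exact gradient of the expectation, for demo
p = -0.5 * g                     # Newton step for this quadratic
print(stochastic_armijo(x, p, g))   # t = 1.0 accepted with high probability
```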

    Global convergence of a stabilized sequential quadratic semidefinite programming method for nonlinear semidefinite programs without constraint qualifications

    In this paper, we propose a new sequential quadratic semidefinite programming (SQSDP) method for solving nonlinear semidefinite programs (NSDPs), in which we produce iterates by solving a sequence of stabilized quadratic semidefinite programming (QSDP) subproblems derived from the minimax problem associated with the NSDP. Unlike existing SQSDP methods, the proposed one allows the QSDP subproblems to be solved only approximately while still ensuring global convergence. Another notable feature of the proposed method is that no constraint qualifications (CQs) are required in the global convergence analysis. Specifically, under some assumptions that do not involve CQs, we prove global convergence to a point satisfying any of the following: the stationarity conditions for the feasibility problem, the approximate-Karush-Kuhn-Tucker (AKKT) conditions, or the trace-AKKT conditions. The latter two are optimality conditions for the NSDP proposed by Andreani et al. (2018) in place of the Karush-Kuhn-Tucker conditions. Finally, we conduct numerical experiments to examine the efficiency of the proposed method.
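    As a rough illustration of the optimality measures involved, the sketch below evaluates KKT-type residuals (stationarity, feasibility, trace complementarity) at a single point of an NSDP of the form min f(x) s.t. G(x) positive semidefinite; the AKKT and trace-AKKT conditions of Andreani et al. (2018) are sequential conditions and stronger than this pointwise check, which is illustrative only.

```python
# Sketch: pointwise KKT-type residuals for an NSDP  min f(x) s.t. G(x) PSD;
# the sequential AKKT / trace-AKKT conditions are stronger than this check.
import numpy as np

def kkt_residuals(x, Omega, grad_f, G, dG):
    """grad_f(x): (n,); G(x): (m, m); dG(x): list of n matrices dG/dx_i;
    Omega: (m, m) PSD multiplier estimate."""
    stat = grad_f(x) - np.array([np.trace(Omega @ Di) for Di in dG(x)])
    Gx = G(x)
    feas = max(0.0, -np.linalg.eigvalsh(Gx)[0])   # violation of G(x) >= 0
    compl = abs(np.trace(Gx @ Omega))             # trace complementarity
    return np.linalg.norm(stat), feas, compl

# Toy NSDP: min x s.t. diag(x, 1 - x) PSD, i.e. 0 <= x <= 1; solution x = 0
# with multiplier Omega = diag(1, 0).
grad_f = lambda x: np.array([1.0])
G = lambda x: np.diag([x[0], 1.0 - x[0]])
dG = lambda x: [np.diag([1.0, -1.0])]
print(kkt_residuals(np.array([0.0]), np.diag([1.0, 0.0]), grad_f, G, dG))
```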