
    Augmented Lagrangian and differentiable exact penalty methods

    Dimitri P. Bertsekas. July 1981. National Science Foundation Grant no. NSF/ECS 79-20834. Bibliography: leaves 13-14.

    Multiplier methods for engineering optimization

    Multiplier methods used to solve constrained engineering optimization problems are described. These methods solve the problem by minimizing a sequence of unconstrained problems defined using the cost and constraint functions. The methods, proposed in 1969, have proved to be quite robust, although not as efficient as other algorithms. They can be more effective for some engineering applications, such as optimum design and control of large-scale dynamic systems. Since 1969, several modifications and extensions of the methods have been developed, so it is important to review their theory and computational procedures so that more efficient and effective variants can be developed for engineering applications. Recent methods that are similar to the multiplier methods are also discussed: continuous multiplier update, exact penalty, and exponential penalty methods.
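
    To make the basic scheme concrete, here is a minimal sketch of a first-order multiplier (augmented Lagrangian) iteration for a single equality constraint; the toy problem, the penalty schedule, and the use of scipy.optimize.minimize as the inner unconstrained solver are illustrative assumptions, not details taken from the paper.

        # Sketch of the method of multipliers for min f(x) s.t. c(x) = 0.
        # Problem data, penalty schedule, and stopping rule are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        def f(x):                      # example cost: a shifted quadratic
            return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

        def c(x):                      # single equality constraint c(x) = 0
            return np.array([x[0] + x[1] - 1.0])

        def augmented_lagrangian(x, lam, rho):
            cx = c(x)
            return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)

        x, lam, rho = np.zeros(2), np.zeros(1), 10.0
        for _ in range(20):
            # inner step: unconstrained minimization of the augmented Lagrangian in x
            x = minimize(augmented_lagrangian, x, args=(lam, rho), method="BFGS").x
            lam = lam + rho * c(x)     # first-order multiplier update
            if np.linalg.norm(c(x)) < 1e-8:
                break
            rho *= 2.0                 # tighten the penalty while still infeasible
        print(x, lam)                  # approaches x = (1, 0), lam = (2,)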

    Maintaining the Positive Definiteness of the Matrices in Reduced Secant Methods for Equality Constrained Optimization

    This paper proposes an algorithm for minimizing a function f on R^n in the presence of m equality constraints c that locally is a reduced secant method. The local method is globalized using a nondifferentiable augmented Lagrangian, whose decrease is obtained by both a longitudinal search that decreases mainly f and a transversal search that decreases mainly ||c||. The main objective of the paper is to show that the longitudinal path can be designed so as to maintain the positive definiteness of the reduced matrices by means of the positivity of gamma_k^T b_k, where gamma_k is the change in the reduced gradient and b_k is the reduced longitudinal displacement.
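
    For context, the sketch below shows the standard mechanism that links positive curvature to positive definiteness: a BFGS update of a matrix B that is skipped whenever the inner product of the gradient change with the displacement is not sufficiently positive. The variable names and the skipping rule are illustrative; the paper's construction of the longitudinal path for the reduced matrices is not reproduced here.

        # BFGS update with a curvature safeguard: B stays positive definite as long
        # as updates are only applied when gamma^T step > 0 (illustrative sketch).
        import numpy as np

        def bfgs_update(B, step, gamma, tol=1e-10):
            """step: displacement; gamma: change in the (reduced) gradient."""
            curvature = gamma @ step
            if curvature <= tol * np.linalg.norm(step) * np.linalg.norm(gamma):
                return B                   # skip: updating here could destroy definiteness
            Bs = B @ step
            return (B
                    - np.outer(Bs, Bs) / (step @ Bs)
                    + np.outer(gamma, gamma) / curvature)

        # usage on a tiny example
        B = np.eye(2)
        B = bfgs_update(B, step=np.array([1.0, 0.0]), gamma=np.array([0.5, 0.2]))
        print(np.linalg.eigvalsh(B))       # eigenvalues remain positive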

    A Primal-Dual Quasi-Newton Method for Constrained Optimization

    One of the most important developments in nonlinear constrained optimization in recent years has been the recursive quadratic programming (RQP) method suggested by Wilson, Han, Powell, and many other researchers. It is clear that the role of the auxiliary quadratic programming problem is to calculate (implicitly) the inverse Hessian of the dual objective function. We refer to the Hessian of the Lagrangian and that of the dual objective function as the primal Hessian and the dual Hessian, respectively. In this paper, a new method for constrained optimization, called the primal-dual quasi-Newton method, is proposed. The main feature of this method is that it improves (explicitly) both the primal Hessian and the dual Hessian using quasi-Newton updates. Several variants of the primal-dual quasi-Newton method are possible; their properties are described, and computational results obtained for some test problems are given.
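
    A minimal sketch of one RQP/SQP step for equality constraints may help fix ideas: the quadratic programming subproblem is solved through its KKT system, which yields both the primal step and the QP multipliers. The toy data and the choice of B are illustrative assumptions; the explicit primal and dual quasi-Newton updates proposed in the paper are not shown.

        # One SQP/RQP step for min f(x) s.t. c(x) = 0: solve the QP subproblem
        #     min_p 0.5 p^T B p + g^T p   s.t.   A p + c = 0
        # through its KKT system (B: quasi-Newton approximation of the Lagrangian Hessian).
        import numpy as np

        def sqp_step(B, g, A, cval):
            """Return the primal step p and the QP multipliers."""
            n, m = B.shape[0], A.shape[0]
            K = np.block([[B, A.T],
                          [A, np.zeros((m, m))]])
            rhs = np.concatenate([-g, -cval])
            sol = np.linalg.solve(K, rhs)
            return sol[:n], sol[n:]

        # usage: one step for f(x) = ||x||^2 with constraint x0 + x1 - 1 = 0, from x = 0
        B = 2.0 * np.eye(2)                # exact Hessian of f, used as the approximation
        g = np.zeros(2)                    # gradient of f at x = 0
        A = np.array([[1.0, 1.0]])         # constraint Jacobian
        cval = np.array([-1.0])            # constraint value at x = 0
        p, lam = sqp_step(B, g, A, cval)
        print(p, lam)                      # p = [0.5, 0.5] reaches the constrained minimizer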

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
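
    Two of the ingredients discussed above can be sketched in a few lines: evaluating the exact ℓ1-merit (penalty) function, and accepting a trial point with a nonmonotone test that compares against the worst of the last few merit values, one standard device for sidestepping the Maratos effect. The toy problem, the fixed penalty value, and the acceptance tolerance are illustrative assumptions, not the paper's rules.

        # Exact l1 merit function phi(x; rho) = f(x) + rho * ||c(x)||_1 and a
        # nonmonotone acceptance test over a short history of merit values (sketch).
        import numpy as np
        from collections import deque

        def l1_merit(x, f, c, rho):
            return f(x) + rho * np.sum(np.abs(c(x)))

        def nonmonotone_accept(x_trial, history, f, c, rho, eta=1e-4):
            """Accept if the trial merit lies below the worst stored merit value."""
            return l1_merit(x_trial, f, c, rho) <= max(history) - eta

        # usage on a toy problem
        f = lambda x: (x[0] - 2.0)**2 + x[1]**2
        c = lambda x: np.array([x[0] + x[1] - 1.0])
        rho, history, x = 10.0, deque(maxlen=5), np.zeros(2)
        history.append(l1_merit(x, f, c, rho))
        x_trial = np.array([0.6, 0.4])
        if nonmonotone_accept(x_trial, history, f, c, rho):
            x = x_trial
            history.append(l1_merit(x, f, c, rho))
        print(x, list(history))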

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, the methods are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
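
    A minimal sketch of the forward-backward envelope for a least-squares-plus-ℓ1 instance follows; it only evaluates the envelope and the associated forward-backward step, not the Newton-CG machinery of the paper, and the problem data and step size are illustrative assumptions.

        # Forward-backward envelope (FBE) for min 0.5*||Ax - b||^2 + lam*||x||_1.
        # FBE_gamma(x) = f(x) - (gamma/2)*||grad f(x)||^2 + g^gamma(x - gamma*grad f(x)),
        # where g^gamma is the Moreau envelope of g (illustrative data below).
        import numpy as np

        def soft_threshold(z, t):                 # prox of t * ||.||_1
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def fbe(x, A, b, lam, gamma):
            r = A @ x - b
            grad = A.T @ r                        # gradient of the smooth part
            z = x - gamma * grad                  # forward (gradient) step
            u = soft_threshold(z, gamma * lam)    # backward (proximal) step
            moreau = lam * np.sum(np.abs(u)) + (0.5 / gamma) * np.sum((u - z)**2)
            return 0.5 * (r @ r) - 0.5 * gamma * (grad @ grad) + moreau, u

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((10, 4)), rng.standard_normal(10)
        x, lam = np.zeros(4), 0.1
        gamma = 0.9 / np.linalg.norm(A.T @ A, 2)  # step size below 1/L, L = ||A^T A||_2
        value, x_next = fbe(x, A, b, lam, gamma)
        print(value, x_next)                      # envelope value and forward-backward step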