
    A Simple and Efficient Algorithm for Nonlinear Model Predictive Control

    We present PANOC, a new algorithm for solving optimal control problems arising in nonlinear model predictive control (NMPC). A common approach to this type of problem is sequential quadratic programming (SQP), which requires the solution of a quadratic program at every iteration and, consequently, inner iterative procedures; as a result, when the problem is ill-conditioned or the prediction horizon is long, each outer iteration becomes computationally very expensive. We propose a line-search algorithm that combines forward-backward (FB) iterations and Newton-type steps over the recently introduced forward-backward envelope (FBE), a continuous, real-valued, exact merit function for the original problem. The curvature information carried by the Newton-type steps enables asymptotic superlinear rates under mild assumptions at the limit point, and the proposed algorithm relies on very simple operations: access to first-order information of the cost and dynamics, and low-cost direct linear algebra. Neither inner iterative procedures nor Hessian evaluations are required, making our approach computationally simpler than SQP methods. The low memory requirements and simple implementation make our method particularly suited to embedded NMPC applications.
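
    As a rough illustration of the mechanism described above, here is a minimal NumPy sketch of a PANOC-style loop for a composite problem min f(x) + g(x). It is not the authors' implementation: the L-BFGS memory on the fixed-point residual, the sufficient-decrease constant sigma, and the halving rule for the blending parameter tau are illustrative choices, and prox_g stands for any cheap proximal mapping (e.g. a projection onto input constraints).

```python
import numpy as np

def fbe(f, g, grad_f, prox_g, gamma, x):
    """Forward-backward envelope phi_gamma(x), FB step T(x), and
    fixed-point residual r(x) = (x - T(x)) / gamma."""
    gfx = grad_f(x)
    Tx = prox_g(x - gamma * gfx, gamma)
    r = (x - Tx) / gamma
    phi = f(x) - gamma * gfx.dot(r) + 0.5 * gamma * r.dot(r) + g(Tx)
    return phi, Tx, r

def panoc(f, g, grad_f, prox_g, x0, gamma, sigma=0.1, mem=5,
          tol=1e-8, max_iter=500):
    """PANOC-style iteration for min_x f(x) + g(x); gamma should obey
    the usual bound gamma < 1/L for an L-Lipschitz grad_f."""
    x = np.asarray(x0, dtype=float)
    S, Y = [], []                       # L-BFGS memory on the residual
    phi, Tx, r = fbe(f, g, grad_f, prox_g, gamma, x)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        # Two-loop recursion: Newton-type direction d ~ -H r
        q, alphas = r.copy(), []
        for s, y in zip(reversed(S), reversed(Y)):
            a = s.dot(q) / y.dot(s)
            alphas.append(a)
            q -= a * y
        if S:
            q *= S[-1].dot(Y[-1]) / Y[-1].dot(Y[-1])
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            q += (a - y.dot(q) / y.dot(s)) * s
        d = -q
        # Backtrack tau: blend the FB step with the fast direction
        # until the FBE decreases sufficiently.
        tau = 1.0
        while tau > 1e-12:
            trial = x - (1.0 - tau) * gamma * r + tau * d
            phi_t, Tx_t, r_t = fbe(f, g, grad_f, prox_g, gamma, trial)
            if phi_t <= phi - sigma * gamma * r.dot(r):
                x_new, phi, Tx_new, r_new = trial, phi_t, Tx_t, r_t
                break
            tau *= 0.5
        else:                           # safeguard: plain FB step
            x_new = Tx
            phi, Tx_new, r_new = fbe(f, g, grad_f, prox_g, gamma, x_new)
        s, y = x_new - x, r_new - r
        if s.dot(y) > 1e-12:            # curvature guard
            S.append(s); Y.append(y)
            if len(S) > mem:
                S.pop(0); Y.pop(0)
        x, Tx, r = x_new, Tx_new, r_new
    return x
```

    Note that each iteration uses only gradient evaluations, one proximal step, and vector arithmetic, which is what makes this kind of scheme attractive for embedded NMPC.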

    On the Local and Global Convergence of a Reduced Quasi-Newton Method

    In optimization over R^n with m nonlinear equality constraints, we study the local convergence of reduced quasi-Newton methods, in which the updated matrix is of order n-m. In particular, we give necessary and sufficient conditions for q-superlinear convergence (in one step). We introduce a device to globalize the local algorithm, which consists in determining a step along an arc so as to decrease an exact penalty function, and we give conditions under which the step asymptotically equals one.
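
    To make the structure concrete, below is a hypothetical NumPy sketch of one local step of a reduced quasi-Newton method for min f(x) subject to c(x) = 0. Only the (n-m)-order reduced matrix B is maintained; the null-space basis Z is taken from an SVD of the constraint Jacobian (one of several possible choices), and the globalizing arc search on the exact penalty function is omitted.

```python
import numpy as np

def reduced_qn_step(x, B, grad_f, c, jac_c):
    """One local reduced quasi-Newton step for min f s.t. c(x) = 0.

    B is the (n-m) x (n-m) reduced quasi-Newton matrix, an
    approximation of Z^T W Z with W the Hessian of the Lagrangian."""
    A = jac_c(x)                         # m x n constraint Jacobian
    m = A.shape[0]
    _, _, Vt = np.linalg.svd(A)
    Z = Vt[m:].T                         # orthonormal basis of null(A)
    # Transversal (restoration) step: minimum-norm solution of A p = -c
    p_y = np.linalg.lstsq(A, -c(x), rcond=None)[0]
    # Longitudinal step from the reduced system B p_z = -Z^T grad f
    p_z = np.linalg.solve(B, -(Z.T @ grad_f(x)))
    return x + p_y + Z @ p_z
```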

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix B_k used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix B_k: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, they must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and therefore achieves asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
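
    The nonmonotone safeguard mentioned at the end can be sketched independently of the rest of the method. The following is a generic watchdog-style acceptance test against the worst of the last M values of the ℓ1-merit function; the window length M and constant eta are illustrative, not the paper's parameters.

```python
import numpy as np
from collections import deque

def l1_merit(f_val, c_val, sigma):
    """Exact l1 penalty function: phi(x; sigma) = f(x) + sigma*||c(x)||_1."""
    return f_val + sigma * np.linalg.norm(c_val, 1)

class NonmonotoneTest:
    """Accept a trial point against the worst of the last M merit values
    rather than the current one; tolerating a temporary merit increase
    is one standard way to admit full SQP steps near a solution and
    thereby avoid the Maratos effect."""

    def __init__(self, M=5, eta=1e-4):
        self.history = deque(maxlen=M)
        self.eta = eta

    def accept(self, phi_trial, predicted_decrease):
        reference = max(self.history, default=float("inf"))
        ok = phi_trial <= reference - self.eta * max(predicted_decrease, 0.0)
        if ok:
            self.history.append(phi_trial)
        return ok
```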

    Maintaining the Positive Definiteness of the Matrices in Reduced Secant Methods for Equality Constrained Optimization

    This paper proposes an algorithm for minimizing a function f on R^n in the presence of m equality constraints c that is locally a reduced secant method. The local method is globalized using a nondifferentiable augmented Lagrangian, whose decrease is obtained by both a longitudinal search that mainly decreases f and a transversal search that mainly decreases ||c||. The main objective of the paper is to show that the longitudinal path can be designed so as to maintain the positive definiteness of the reduced matrices by means of the positivity of gamma_k^T b_k, where gamma_k is the change in the reduced gradient and b_k is the reduced longitudinal displacement.
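
    The paper's mechanism is a search along the longitudinal path that enforces gamma_k^T b_k > 0. As a simpler stand-in that enforces the same inequality algebraically, the sketch below applies Powell's damping to the reduced BFGS update; this is a swapped-in technique for illustration, not the search-based mechanism analyzed in the paper.

```python
import numpy as np

def damped_bfgs_update(B, b, gamma, mu=0.2):
    """BFGS update of the reduced matrix B, kept positive definite.

    b: reduced longitudinal displacement; gamma: change in the reduced
    gradient. Positive definiteness is preserved exactly when
    gamma^T b > 0; Powell's damping enforces that inequality by
    blending gamma with B @ b when the curvature is too weak."""
    Bb = B @ b
    bBb = b @ Bb
    gb = gamma @ b
    if gb < mu * bBb:                   # curvature too weak: damp gamma
        theta = (1.0 - mu) * bBb / (bBb - gb)
        gamma = theta * gamma + (1.0 - theta) * Bb
        gb = gamma @ b                  # now gb == mu * bBb > 0
    return B - np.outer(Bb, Bb) / bBb + np.outer(gamma, gamma) / gb
```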

    Local Convergence of the Affine-Scaling Interior-Point Algorithm for Nonlinear Programming

    This paper addresses the local convergence properties of the affine-scaling interior-point algorithm for nonlinear programming. Local convergence is analyzed in terms of the parameters that control the interior-point scheme and of the size of the residual of the linear system that provides the step direction. The analysis follows the classical theory of quasi-Newton methods and addresses q-linear, q-superlinear, and q-quadratic rates of convergence.
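
    The dependence of the local rate on the linear-system residual mirrors the classical inexact-Newton picture, which can be demonstrated on a plain nonlinear system F(x) = 0 rather than the interior-point step equations. In the sketch below the forcing sequence eta_k is hypothetical, and the "inexact" solve is emulated by perturbing an exact one so that the linear residual has a prescribed size.

```python
import numpy as np

def inexact_newton(F, J, x0, eta_rule, tol=1e-12, max_iter=40, seed=0):
    """Inexact Newton iteration: solve J(x) d = -F(x) only up to
    ||J d + F|| <= eta_k ||F||. Classical theory: bounded eta_k gives
    q-linear, eta_k -> 0 q-superlinear, and eta_k = O(||F||)
    q-quadratic local convergence."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF <= tol:
            break
        d = np.linalg.solve(J(x), -Fx)          # exact Newton direction
        # Emulate an inexact solve: perturb d so the linear residual
        # ||J d + F|| equals exactly eta_k * ||F||.
        p = rng.standard_normal(x.size)
        p *= eta_rule(k, nF) * nF / np.linalg.norm(p)
        d += np.linalg.solve(J(x), p)
        x = x + d
    return x

# Illustrative forcing rules:
#   eta_rule = lambda k, nF: 0.5            -> q-linear
#   eta_rule = lambda k, nF: 1.0 / (k + 2)  -> q-superlinear
#   eta_rule = lambda k, nF: min(0.5, nF)   -> q-quadratic
```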