
    Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties

    This paper presents an algorithm to solve non-convex optimal control problems, where non-convexity can arise from nonlinear dynamics and from non-convex state and control constraints. Assuming that the state and control constraints are already convex or have been convexified, the proposed algorithm convexifies the nonlinear dynamics via linearization in a successive manner, so that at each succession a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after discretization the subproblem can be expressed as a finite-dimensional convex programming problem. Because convex optimization problems can be solved very efficiently, especially with custom solvers, this subproblem can be solved in time-critical applications such as real-time path planning for autonomous vehicles. Several safeguarding techniques, namely virtual control and trust regions, are incorporated into the algorithm and add another layer of algorithmic robustness. A convergence analysis is presented in the continuous-time setting, so that the convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
    Comment: Updates: corrected wording for LICQ. This is the full version; a brief version of this paper is published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
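
    To make the iteration concrete, the following is a minimal sketch of a successive-convexification loop in Python with CVXPY. The double-integrator-with-drag dynamics, the weights, the horizon, and the fixed box trust region are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a successive-convexification loop (Python + CVXPY).
# Dynamics, weights, horizon, and the fixed box trust region are assumed
# placeholders, not the paper's exact setup.
import numpy as np
import cvxpy as cp

N, dt = 30, 0.1                 # horizon length and time step (assumed)
n, m = 2, 1                     # state and control dimensions

def f(x, u):
    # Nonlinear dynamics: double integrator with quadratic drag.
    return np.array([x[1], u[0] - 0.5 * x[1] * abs(x[1])])

def jacobians(x, u):
    # Analytic Jacobians of f, used for the linearization step.
    A = np.array([[0.0, 1.0], [0.0, -abs(x[1])]])
    B = np.array([[0.0], [1.0]])
    return A, B

x0, xf = np.array([0.0, 0.0]), np.array([1.0, 0.0])
x_ref = np.linspace(x0, xf, N + 1)   # initial reference: straight line
u_ref = np.zeros((N, m))

for it in range(10):                 # successive convexification iterations
    x = cp.Variable((N + 1, n))
    u = cp.Variable((N, m))
    nu = cp.Variable((N, n))         # virtual control keeps subproblem feasible
    cost = cp.sum_squares(u) + 1e4 * cp.sum(cp.abs(nu))
    cons = [x[0] == x0, x[N] == xf,
            cp.abs(u) <= 2.0,          # convex control constraint
            cp.abs(x - x_ref) <= 0.5]  # trust region about the reference
    for k in range(N):
        A, B = jacobians(x_ref[k], u_ref[k])
        # Dynamics linearized about the reference, Euler-discretized,
        # with the virtual-control slack nu[k] added.
        cons.append(x[k + 1] == x[k] + dt * (f(x_ref[k], u_ref[k])
                                             + A @ (x[k] - x_ref[k])
                                             + B @ (u[k] - u_ref[k])) + nu[k])
    cp.Problem(cp.Minimize(cost), cons).solve()
    step = np.linalg.norm(x.value - x_ref)
    x_ref, u_ref = x.value, u.value  # re-linearize about the new solution
    if step < 1e-4:                  # crude stopping test
        break
```

    A practical implementation would also grow or shrink the trust region depending on how well the convex model predicts the true cost; that adaptive mechanism is where the algorithm's robustness and convergence guarantees come from.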

    An algorithm for linearly constrained nonlinear programming problems

    Abstract: In this paper an algorithm for solving a linearly constrained nonlinear programming problem is developed. Given a feasible point, a correction vector is computed by solving a least-distance programming problem over a polyhedral cone defined in terms of the gradients of the "almost" binding constraints. Mukai's approximate scheme for computing the step size is generalized to handle the constraints. This scheme provides an estimate of the step size based on a quadratic approximation of the function; the estimate is used in conjunction with an Armijo line search to calculate a new point. It is shown that each accumulation point is a Kuhn-Tucker point of a slight perturbation of the original problem. Furthermore, under suitable second-order optimality conditions, it is shown that eventually only one trial is needed to compute the step size.
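
    The step-size scheme lends itself to a short sketch. The Python code below (function names and constants are illustrative, not taken from the paper) seeds a backtracking Armijo search with the minimizer of a one-dimensional quadratic model of the objective along the correction direction, mirroring the idea that near a solution the first trial step should be accepted.

```python
# Sketch of a quadratic-model step-size estimate combined with an Armijo
# backtracking search. All names and constants are illustrative.
import numpy as np

def quadratic_estimate(f, x, d, g, t_probe=1.0):
    """Fit phi(t) ~ phi(0) + phi'(0) t + c t^2 along direction d from one
    probe point, and return the minimizer of that quadratic model."""
    phi0 = f(x)
    dphi0 = g(x) @ d                       # directional derivative phi'(0)
    phi_probe = f(x + t_probe * d)
    c = (phi_probe - phi0 - dphi0 * t_probe) / t_probe**2
    if c <= 0:                             # model not convex; fall back
        return t_probe
    return max(-dphi0 / (2.0 * c), 1e-12)  # vertex of the quadratic

def armijo_step(f, x, d, g, alpha=1e-4, beta=0.5, max_trials=30):
    """Backtracking Armijo search seeded with the quadratic estimate."""
    t = quadratic_estimate(f, x, d, g)
    phi0, dphi0 = f(x), g(x) @ d
    for _ in range(max_trials):
        if f(x + t * d) <= phi0 + alpha * t * dphi0:  # Armijo condition
            return t                       # ideally accepted on the first trial
        t *= beta
    return t

# Toy usage: minimize a quadratic along the steepest-descent direction.
f = lambda x: 0.5 * x @ x
g = lambda x: x
x = np.array([2.0, -1.0])
d = -g(x)
t = armijo_step(f, x, d, g)
print(t, x + t * d)
```

    Under second-order conditions of the kind mentioned in the abstract, the quadratic model eventually becomes accurate enough along the correction direction that the Armijo test passes on the first trial.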

    Approximation and Convergence in Nonlinear Optimization

    We show that the theory of e-convergence, originally developed to study approximation techniques, is also useful in the analysis of the convergence properties of algorithmic procedures for nonlinear optimization problems.
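
    For background, and assuming that "e-convergence" here abbreviates epi-convergence in the sense of variational analysis (an assumption, since the abstract does not expand the term), the notion can be stated as follows:

```latex
% Standard definition of epi-convergence of f_n to f (stated as background;
% the paper's precise notion may differ in detail).
\[
f_n \xrightarrow{\,e\,} f \quad\Longleftrightarrow\quad
\begin{cases}
\displaystyle\liminf_{n\to\infty} f_n(x_n) \ge f(x) & \text{for every sequence } x_n \to x,\\[6pt]
\displaystyle\limsup_{n\to\infty} f_n(x_n) \le f(x) & \text{for some sequence } x_n \to x.
\end{cases}
\]
```

    Its usefulness for algorithmic analysis comes from the standard consequence that, when the f_n epi-converge to f, every cluster point of a sequence of minimizers of the f_n is a minimizer of f.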