
    Solving Optimal Control Problems for Delayed Control-Affine Systems with Quadratic Cost by Numerical Continuation

    In this paper we introduce a new method for solving fixed-delay optimal control problems that exploits numerical homotopy procedures. Solving this kind of problem via indirect methods is complex and computationally demanding because the implementation faces two difficulties: the extremal equations are of mixed type, and the shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, the delay is introduced by numerical homotopy methods. Convergence results, which ensure the effectiveness of the whole procedure, are provided, and the numerical efficiency is illustrated on an example.
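    A minimal sketch of the continuation idea, under stated assumptions: `solve_shooting` is a hypothetical placeholder (not from the paper) that solves the shooting equations of the delayed problem for a fixed delay from a given warm start; the homotopy then traces the solution from the non-delayed problem up to the target delay.

```python
import numpy as np

def solve_shooting(delay, guess):
    """Hypothetical placeholder: solve the shooting equations of the
    optimal control problem with the given fixed delay, warm-started
    at `guess`, and return the converged shooting unknowns
    (e.g. the initial costate)."""
    raise NotImplementedError

def continuation_in_delay(target_delay, z_no_delay, steps=20):
    """Homotopy in the delay: start from the solution of the
    non-delayed problem and increase the delay in small increments,
    warm-starting each shooting solve with the previous solution."""
    z = z_no_delay
    for tau in np.linspace(0.0, target_delay, steps + 1)[1:]:
        z = solve_shooting(tau, z)
    return z
```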

    Exact Characterization of the Convex Hulls of Reachable Sets

    We study the convex hulls of reachable sets of nonlinear systems with bounded disturbances. Reachable sets play a critical role in control, but they remain notoriously challenging to compute, and existing over-approximation tools tend to be conservative or computationally expensive. In this work, we exactly characterize the convex hulls of reachable sets as the convex hulls of solutions of an ordinary differential equation starting from all possible initial values of the disturbances. This finite-dimensional characterization unlocks a tight estimation algorithm to over-approximate reachable sets that is significantly faster and more accurate than existing methods. We present applications to neural feedback loop analysis and robust model predictive control.
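    A rough numerical illustration of the idea, not the paper's exact algorithm (which characterizes the hull through an ODE parameterized by the disturbance's initial value): sample extreme disturbance realizations, integrate the dynamics, and take the convex hull of the resulting endpoints. The dynamics `f` and the constant-in-time disturbances are placeholders chosen only to make the example self-contained.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial import ConvexHull

def f(t, x, w):
    """Toy nonlinear dynamics x' = f(x, w) with a bounded disturbance w."""
    return np.array([x[1] + w[0], -np.sin(x[0]) + w[1]])

# Sample disturbance directions on the boundary of the unit ball
# (held constant in time here, purely for illustration).
endpoints = []
for a in np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False):
    w = np.array([np.cos(a), np.sin(a)])
    sol = solve_ivp(f, (0.0, 1.0), [0.0, 0.0], args=(w,), rtol=1e-8)
    endpoints.append(sol.y[:, -1])

# Convex hull of the sampled endpoints: an estimate of conv(reachable set) at t = 1.
hull = ConvexHull(np.array(endpoints))
print(hull.volume)
```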

    First-Order Pontryagin Maximum Principle for Risk-Averse Stochastic Optimal Control Problems

    In this paper, we derive a set of first-order Pontryagin optimality conditions for a risk-averse stochastic optimal control problem subject to final-time inequality constraints and whose cost is a general finite coherent risk measure. Unlike previous contributions in the literature, our analysis holds for classical stochastic differential equations driven by standard Brownian motions. Moreover, it has the advantage of neither involving second-order adjoint equations nor leading to the so-called weak version of the Pontryagin Maximum Principle (PMP), in which the maximization condition with respect to the control variable is replaced by the stationarity of the Hamiltonian.
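    For orientation only, here is the classical deterministic template behind the distinction the abstract draws (the paper's risk-averse conditions are more involved): with Hamiltonian H(x, p, u), the state/adjoint equations together with the strong maximization condition of the PMP and its weak, stationarity-based counterpart read

```latex
\begin{aligned}
&\dot{x}^{*}(t) = \partial_{p} H\bigl(x^{*}(t), p(t), u^{*}(t)\bigr),
\qquad
\dot{p}(t) = -\,\partial_{x} H\bigl(x^{*}(t), p(t), u^{*}(t)\bigr),\\[2pt]
&\text{strong (maximization):}\quad u^{*}(t) \in \operatorname*{arg\,max}_{u \in U} H\bigl(x^{*}(t), p(t), u\bigr),\\[2pt]
&\text{weak (stationarity):}\quad \partial_{u} H\bigl(x^{*}(t), p(t), u^{*}(t)\bigr) = 0.
\end{aligned}
```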

    A Gradient Descent-Ascent Method for Continuous-Time Risk-Averse Optimal Control

    In this paper, we consider continuous-time stochastic optimal control problems where the cost is evaluated through a coherent risk measure. We provide an explicit gradient descent-ascent algorithm that applies to problems subject to non-linear stochastic differential equations. More specifically, we leverage duality properties of coherent risk measures to relax the problem via a smooth min-max reformulation which induces artificial strong concavity in the max subproblem. We then formulate necessary conditions of optimality for this relaxed problem, which we leverage to prove convergence of the gradient descent-ascent algorithm to candidate solutions of the original problem. Finally, we showcase the efficiency of our algorithm through numerical simulations involving trajectory tracking problems and highlight the benefit of favoring risk measures over the classical expectation.
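    A generic gradient descent-ascent iteration of the kind described above, with placeholder gradient oracles (`grad_u` for the descent step in the control variable, `grad_eta` for the ascent step in the dual/risk variable); in the paper the gradients come from the smooth min-max reformulation of the coherent risk measure, which is not reproduced here.

```python
import numpy as np

def gradient_descent_ascent(grad_u, grad_eta, u0, eta0,
                            step_u=1e-2, step_eta=1e-2, iters=1000):
    """Alternating first-order updates for min_u max_eta L(u, eta):
    descend in the control u, ascend in the (concavified) dual variable eta."""
    u, eta = np.asarray(u0, float), float(eta0)
    for _ in range(iters):
        u = u - step_u * grad_u(u, eta)          # descent step on the control
        eta = eta + step_eta * grad_eta(u, eta)  # ascent step on the dual/risk variable
    return u, eta

# Toy usage on L(u, eta) = 0.5*||u||^2 + eta*(sum(u) - 1) - 0.5*eta**2
u, eta = gradient_descent_ascent(
    grad_u=lambda u, e: u + e,                 # dL/du
    grad_eta=lambda u, e: u.sum() - 1.0 - e,   # dL/deta
    u0=np.zeros(3), eta0=0.0)
print(u, eta)
```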

    Risk-Averse Trajectory Optimization via Sample Average Approximation

    Trajectory optimization under uncertainty underpins a wide range of applications in robotics. However, existing methods are limited in their ability to reason about sources of epistemic and aleatoric uncertainty, space and time correlations, nonlinear dynamics, and non-convex constraints. In this work, we first introduce a continuous-time planning formulation with an average-value-at-risk constraint over the entire planning horizon. Then, we propose a sample-based approximation that unlocks an efficient, general-purpose, and time-consistent algorithm for risk-averse trajectory optimization. We prove that the method is asymptotically optimal and derive finite-sample error bounds. Simulations demonstrate the high speed and reliability of the approach on problems with stochasticity in nonlinear dynamics, obstacle fields, interactions, and terrain parameters.
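    For context, the standard sample-average approximation of the average value-at-risk (Rockafellar–Uryasev form) is the kind of construction the abstract refers to; here α ∈ (0, 1) denotes the tail probability and z_1, …, z_N are i.i.d. samples of the cost Z:

```latex
\mathrm{AVaR}_{\alpha}(Z)
  \;=\; \min_{t \in \mathbb{R}} \Bigl\{ t + \tfrac{1}{\alpha}\, \mathbb{E}\bigl[(Z - t)_{+}\bigr] \Bigr\}
  \;\approx\; \min_{t \in \mathbb{R}} \Bigl\{ t + \tfrac{1}{\alpha N} \sum_{i=1}^{N} (z_i - t)_{+} \Bigr\}.
```

    The inner minimization is convex and piecewise linear in t, which is what makes a sample-based risk constraint tractable inside a trajectory optimizer.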

    Sequential Convex Programming For Non-Linear Stochastic Optimal Control

    This work introduces a sequential convex programming framework to solve general non-linear, finite-dimensional stochastic optimal control problems, where uncertainties are modeled by a multidimensional Wiener process. We provide sufficient conditions for the convergence of the method. Moreover, we prove that when convergence is achieved, sequential convex programming finds a candidate locally-optimal solution for the original problem in the sense of the stochastic Pontryagin Maximum Principle. We then leverage these properties to design a practical numerical method for solving non-linear stochastic optimal control problems based on a deterministic transcription of stochastic sequential convex programming.
    Comment: Free-final-time problems with stochastic controls are now discussed in a separate section.
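    A bare-bones outer loop for sequential convex programming, written in deterministic notation and with `solve_convex_subproblem` as a hypothetical routine standing in for the convexified problem (linearized dynamics plus a trust region around the current iterate), not the paper's specific transcription:

```python
import numpy as np

def scp(solve_convex_subproblem, x_init, u_init, max_iters=30, tol=1e-6):
    """Generic SCP loop: convexify around the current trajectory, solve the
    convex subproblem, and stop when the iterates stop moving (a common
    practical surrogate for convergence)."""
    x, u = np.asarray(x_init, float), np.asarray(u_init, float)
    for _ in range(max_iters):
        x_new, u_new = solve_convex_subproblem(x, u)  # linearized dynamics + trust region
        step = max(np.max(np.abs(x_new - x)), np.max(np.abs(u_new - u)))
        x, u = x_new, u_new
        if step < tol:
            break
    return x, u
```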

    Analysis of Theoretical and Numerical Properties of Sequential Convex Programming for Continuous-Time Optimal Control

    Sequential Convex Programming (SCP) has recently gained significant popularity as an effective method for solving optimal control problems and has been successfully applied in several different domains. However, the theoretical analysis of SCP has received comparatively limited attention, and it is often restricted to discrete-time formulations. In this paper, we present a unifying theoretical analysis of a fairly general class of SCP procedures for continuous-time optimal control problems. In addition to the derivation of convergence guarantees in a continuous-time setting, our analysis reveals two new numerical and practical insights. First, we show how one can more easily account for manifold-type constraints, which are a defining feature of optimal control of mechanical systems. Second, we show how our theoretical analysis can be leveraged to accelerate SCP-based optimal control methods by infusing techniques from indirect optimal control.
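    As one concrete, and standard rather than paper-specific, way manifold-type constraints enter an SCP iteration: a state constraint g(x(t)) = 0 defining a smooth manifold is replaced, around the current reference trajectory \bar{x}, by its first-order expansion

```latex
g\bigl(\bar{x}(t)\bigr) + \nabla g\bigl(\bar{x}(t)\bigr)\,\bigl(x(t) - \bar{x}(t)\bigr) = 0,
\qquad t \in [0, T],
```

    which is affine (hence convex) in x and is re-centered at every iteration.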