
    Galerkin approximations for the optimal control of nonlinear delay differential equations

    Optimal control problems of nonlinear delay differential equations (DDEs) are considered, for which we propose a general Galerkin approximation scheme built from Koornwinder polynomials. Error estimates for the resulting Galerkin-Koornwinder approximations to the optimal control and the value function are derived for a broad class of cost functionals and nonlinear DDEs. The approach is illustrated on a delayed logistic equation set not far from its Hopf bifurcation point in parameter space. In this case, we show that low-dimensional controls for a standard quadratic cost functional can be efficiently computed from Galerkin-Koornwinder approximations to reduce, at a nearly optimal cost, the oscillation amplitude displayed by the DDE's solution. Optimal controls computed from Pontryagin's maximum principle (PMP) and the Hamilton-Jacobi-Bellman (HJB) equation associated with the corresponding ODE systems are shown to be in good agreement. Finally, it is argued that the value function computed from the reduced HJB equation provides a good approximation of that obtained from the full HJB equation. Comment: 29 pages. This is a sequel to the arXiv preprint arXiv:1704.0042
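The delayed logistic setting can be reproduced in miniature. The sketch below is my own illustration, not taken from the paper: it assumes the Hutchinson form x'(t) = r x(t)(1 - x(t - tau)) and parameter values with r*tau slightly above the Hopf threshold pi/2, integrating the uncontrolled DDE by forward Euler with a history buffer to exhibit the oscillation amplitude the controls are meant to reduce.

```python
import numpy as np

def delayed_logistic(r=1.8, tau=1.0, x0=0.5, T=60.0, dt=0.01):
    """Forward-Euler integration of the Hutchinson (delayed logistic)
    equation x'(t) = r*x(t)*(1 - x(t - tau)) with constant history x0.
    Parameter values are illustrative: r*tau = 1.8 > pi/2, i.e. just
    past the Hopf bifurcation, so a limit cycle develops."""
    n_hist = int(round(tau / dt))      # delay measured in Euler steps
    n = int(round(T / dt))
    x = np.empty(n_hist + n + 1)
    x[:n_hist + 1] = x0                # history on [-tau, 0]
    for i in range(n_hist + 1, n_hist + n + 1):
        # explicit Euler step; x[i-1-n_hist] is the delayed state
        x[i] = x[i - 1] + dt * r * x[i - 1] * (1.0 - x[i - 1 - n_hist])
    return x[n_hist:]                  # solution on [0, T]

x = delayed_logistic()
tail = x[-2000:]                       # last 20 time units
amplitude = tail.max() - tail.min()    # sustained oscillation past Hopf
```

Past the bifurcation the solution settles onto a periodic orbit around the equilibrium x = 1 rather than decaying, which is the oscillation a low-dimensional control would damp.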

    Initialization of the Shooting Method via the Hamilton-Jacobi-Bellman Approach

    The aim of this paper is to investigate, from the numerical point of view, the possibility of coupling the Hamilton-Jacobi-Bellman (HJB) equation and Pontryagin's Minimum Principle (PMP) to solve some control problems. A rough approximation of the value function computed by the HJB method is used to obtain an initial guess for the PMP method. The advantage of our approach over other initialization techniques (such as continuation or direct methods) is that it provides an initial guess close to the global minimum. Numerical tests involving multiple minima, discontinuous control, singular arcs and state constraints are considered. The CPU time for the proposed method is less than four minutes up to dimension four, without code parallelization.
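The coupling idea can be illustrated on a toy scalar problem of my own construction (not one of the paper's test cases): state x' = u, running cost x^2 + u^2 on [0, T]. A coarse backward dynamic-programming sweep gives a rough value function V; its spatial gradient at (t=0, x0) supplies the initial costate for a PMP shooting integration, and the shooting residual |p(T)| with this HJB-informed guess is compared against the naive guess p(0) = 0.

```python
import numpy as np

T, nt = 1.0, 50
dt = T / nt
xs = np.linspace(-2.0, 2.0, 161)    # state grid
us = np.linspace(-3.0, 3.0, 121)    # control grid
x0 = 1.0

# --- rough HJB solve: backward DP with linear interpolation ---
V = np.zeros_like(xs)               # terminal cost V(T, x) = 0
for _ in range(nt):
    cand = np.empty((len(us), len(xs)))
    for k, u in enumerate(us):
        x_next = np.clip(xs + dt * u, xs[0], xs[-1])
        cand[k] = dt * (xs**2 + u**2) + np.interp(x_next, xs, V)
    V = cand.min(axis=0)            # minimize over the control grid

# costate initial guess p(0) ~ dV/dx at x0 (central difference)
i0 = np.searchsorted(xs, x0)
p0_hjb = (V[i0 + 1] - V[i0 - 1]) / (xs[i0 + 1] - xs[i0 - 1])

# --- PMP shooting: with u* = -p/2 the extremal system is
#     x' = -p/2, p' = -2x, and optimality requires p(T) = 0 ---
def shoot(p0, n=2000):
    x, p, h = x0, p0, T / n
    for _ in range(n):
        x, p = x + h * (-p / 2.0), p + h * (-2.0 * x)
    return p                        # terminal costate residual

res_hjb = abs(shoot(p0_hjb))
res_naive = abs(shoot(0.0))
```

For this linear-quadratic example the exact initial costate is 2*x0*tanh(T), so the quality of the DP-based guess can be checked directly; the HJB-informed residual is far smaller than the naive one.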

    A Model for Optimal Human Navigation with Stochastic Effects

    We present a method for optimal path planning of human walking paths in mountainous terrain, using a control theoretic formulation and a Hamilton-Jacobi-Bellman equation. Previous models for human navigation were entirely deterministic, assuming perfect knowledge of the ambient elevation data and of human walking velocity as a function of the local slope of the terrain. Our model includes a stochastic component which can account for uncertainty in the problem, and thus leads to a Hamilton-Jacobi-Bellman equation with viscosity. We discuss the model in the presence and absence of stochastic effects, and suggest numerical methods for simulating the model. We discuss two different notions of an optimal path when there is uncertainty in the problem. Finally, we compare the optimal paths suggested by the model at different levels of uncertainty, and observe that as the size of the uncertainty tends to zero (and thus the viscosity in the equation tends to zero), the optimal path tends toward the deterministic optimal path.
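A one-dimensional analogue (my own construction, not the paper's terrain model) makes the vanishing-viscosity behavior concrete. For dX = u dt + sigma dW with |u| <= 1, minimizing the expected time to reach x = 1 with a reflecting boundary at 0, the stationary HJB equation (sigma^2/2) V'' - |V'| + 1 = 0 with V(1) = 0, V'(0) = 0 has a closed-form solution, and as sigma tends to 0 it converges to the deterministic minimum time 1 - x:

```python
import numpy as np

def value_viscous(x, sigma):
    """Expected hitting time of x = 1 for dX = u dt + sigma dW, |u| <= 1,
    reflecting at 0; solves (sigma^2/2) V'' - |V'| + 1 = 0, V(1) = 0,
    V'(0) = 0.  (Toy 1-D example, not the paper's model.)"""
    s = sigma**2 / 2.0
    return (1.0 - x) + s * (np.exp(-1.0 / s) - np.exp(-x / s))

xs = np.linspace(0.0, 1.0, 101)
# sup-distance to the deterministic value 1 - x, for shrinking noise
errs = [float(np.max(np.abs(value_viscous(xs, sg) - (1.0 - xs))))
        for sg in (0.5, 0.2, 0.05)]
```

The correction term is O(sigma^2) away from a boundary layer near x = 0, so the error column shrinks monotonically with the noise level, mirroring the paper's observation that the stochastic optimal path tends to the deterministic one.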

    Use of approximations of Hamilton-Jacobi-Bellman inequality for solving periodic optimization problems

    We show that necessary and sufficient conditions of optimality in periodic optimization problems can be stated in terms of a solution of the corresponding HJB inequality, the latter being equivalent to a max-min type variational problem considered on the space of continuously differentiable functions. We approximate the latter with a max-min problem on a finite-dimensional subspace of the space of continuously differentiable functions and show that a solution of this problem (which exists under natural controllability conditions) can be used to construct near-optimal controls. We illustrate the construction with a numerical example. Comment: 29 pages, 2 figures
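The max-min construction can be seen on a minimal periodic example of my own (not the paper's): dynamics x' = 1 on the circle with running cost g(x) = sin(x) has optimal long-run average cost lambda* = 0, the mean of g. Restricting the test functions to the one-dimensional subspace psi_c(x) = c*cos(x), the inner minimum lambda(c) = min_x [g(x) + psi_c'(x) f(x)] is maximized at c = 1, recovering lambda* exactly:

```python
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 4001)   # grid on the circle

def lam(c):
    """Inner minimum over x of g(x) + psi_c'(x)*f(x), with g(x) = sin(x),
    f(x) = 1, and psi_c(x) = c*cos(x), so psi_c'(x)*f(x) = -c*sin(x)."""
    return float(np.min((1.0 - c) * np.sin(xs)))

cs = np.linspace(-2.0, 3.0, 501)           # coefficient search grid
vals = np.array([lam(c) for c in cs])
c_star = float(cs[np.argmax(vals)])        # maximizing coefficient
lam_star = float(vals.max())               # approximate lambda*
```

Here lambda(c) = -|1 - c|, so the outer maximization has a unique solution c = 1 with value 0; with a richer basis the same grid-search pattern gives increasingly tight lower bounds on the optimal average cost.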

    Some numerical methods for solving stochastic impulse control in natural gas storage facilities

    The valuation of gas storage facilities is characterized as a stochastic impulse control problem with finite horizon, resulting in Hamilton-Jacobi-Bellman (HJB) equations for the value function. In this context, the two categories of solution schemes for optimal switching are discussed in a stochastic control framework. We review numerical methods that include approaches related to partial differential equations (PDEs), Markov chain approximation, nonparametric regression, the quantization method and some practitioners' methods. This paper considers the optimal switching problem arising in the valuation of gas storage contracts for leasing the storage facilities, and investigates recent developments as well as the advantages and disadvantages of each scheme based on the dynamic programming principle (DPP).
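A toy version of the switching problem (illustrative parameters of my own, not from the paper) can be solved directly by the backward dynamic programming the review is organized around: the price follows a two-state Markov chain, inventory is discrete, and at each stage the operator injects, withdraws, or holds one unit.

```python
import numpy as np

prices = np.array([1.0, 2.0])         # low / high price states (toy values)
P = np.array([[0.8, 0.2],             # price transition matrix
              [0.3, 0.7]])
cap, horizon = 3, 20                  # inventory levels 0..cap, stages

# V[i, s]: value with inventory i and price state s; terminal value 0
V = np.zeros((cap + 1, 2))
for _ in range(horizon):
    EV = V @ P.T                      # EV[i, s] = E[V(i, s') | s]
    Vnew = np.full((cap + 1, 2), -np.inf)
    for i in range(cap + 1):
        for s in range(2):
            for a in (-1, 0, 1):      # withdraw / hold / inject one unit
                j = i + a
                if 0 <= j <= cap:
                    # withdrawing earns the spot price, injecting pays it
                    Vnew[i, s] = max(Vnew[i, s], -prices[s] * a + EV[j, s])
    V = Vnew
```

Even this tiny model shows the qualitative features the reviewed schemes must capture: the value is monotone in inventory, the marginal value of a stored unit never exceeds the highest price, and an empty facility still has positive option value from buying low and selling high.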

    Error estimates for a tree structure algorithm solving finite horizon control problems

    In the Dynamic Programming approach to optimal control problems a crucial role is played by the value function, which is characterized as the unique viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. It is well known that this approach suffers from the "curse of dimensionality", and this limitation has reduced its practical use in real-world applications. Here we analyze a dynamic programming algorithm based on a tree structure. The tree is built from the time-discrete dynamics, thereby avoiding the fixed space grid that is the bottleneck for high-dimensional problems; this also removes the projection onto the grid in the approximation of the value function. We present some error estimates for a first-order approximation based on the tree-structure algorithm. Moreover, we analyze a pruning technique for the tree to reduce the complexity and minimize the computational effort. Finally, we present some numerical tests.
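The tree construction and pruning can be sketched on a scalar example of my own (not one of the paper's tests): dynamics x' = u with u in {-1, 0, 1}, running cost x^2, x0 = 1, horizon 1. A forward pass grows the tree level by level from the time-discrete dynamics, merging nodes whose states coincide up to a tolerance (the pruning step); a backward Bellman sweep over the tree then returns the value at the root, with no spatial grid and no projection.

```python
dt, nsteps, x0 = 0.05, 20, 1.0
controls = (-1.0, 0.0, 1.0)
eps = 1e-6                               # pruning tolerance

def key(x):
    """Bin states to multiples of eps so nearby nodes merge."""
    return round(x / eps)

# forward pass: build tree levels, pruning duplicate states
levels = [{key(x0): x0}]
for _ in range(nsteps):
    nxt = {}
    for x in levels[-1].values():
        for u in controls:
            y = x + dt * u               # time-discrete dynamics
            nxt[key(y)] = y
    levels.append(nxt)

# backward pass: V = 0 at the leaves, Bellman recursion up the tree
V = {k: 0.0 for k in levels[-1]}
for lvl in range(nsteps - 1, -1, -1):
    V = {k: min(dt * x * x + V[key(x + dt * u)] for u in controls)
         for k, x in levels[lvl].items()}

V0 = V[key(x0)]                          # value at the root
n_leaves = len(levels[-1])               # 41 states instead of 3**20 paths
```

Without merging, the final level would hold 3^20 paths; pruning collapses it to the 41 distinct reachable states, and the computed root value matches the exact discrete optimum (drive x to 0 at maximal rate, then hold), which here equals 0.35875.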