
    A sparse Markov chain approximation of LQ-type stochastic control problems

    We propose a novel Galerkin discretization scheme for stochastic optimal control problems on an indefinite time horizon. The control problems are linear-quadratic in the controls, but possibly nonlinear in the state variables, and the discretization is based on the fact that problems of this kind admit a dual formulation in terms of linear boundary value problems. We show that the discretized linear problem is dual to a Markov decision problem, prove an L² error bound for the general scheme, and discuss the sparse discretization using a basis of so-called committor functions as a special case; the latter is particularly suited when the dynamics are metastable, e.g., when controlling biomolecular systems. We illustrate the method with several numerical examples, one being the optimal control of alanine dipeptide to its helical conformation.
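
    To make the dual linear formulation concrete, here is a minimal sketch, not the paper's Galerkin scheme: for LQ-type control of a one-dimensional diffusion whose quadratic control cost is matched to the noise intensity, the logarithmic transformation ψ = exp(-V) turns the nonlinear HJB equation into a linear boundary value problem (L - f)ψ = 0, which can be discretized and solved directly. The double-well potential, running cost f, and boundary cost g below are illustrative assumptions.

        import numpy as np

        beta = 1.0                            # inverse temperature (assumed)
        n = 200
        x = np.linspace(-2.0, 2.0, n)
        h = x[1] - x[0]

        U_prime = lambda s: 4 * s**3 - 4 * s  # drift of a double-well potential (assumed)
        f = 0.5 * np.ones(n)                  # running cost (assumed constant)
        g = np.where(x > 0, 0.0, 1.0)         # boundary cost (assumed)

        # Central-difference discretization of the generator
        # L = -U'(x) d/dx + (1/beta) d^2/dx^2 at the interior nodes.
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            diff = 1.0 / (beta * h**2)
            drift = -U_prime(x[i]) / (2 * h)
            A[i, i - 1] = diff - drift
            A[i, i + 1] = diff + drift
            A[i, i] = -2.0 * diff - f[i]      # rows encode (L - f) psi = 0
        A[0, 0] = A[-1, -1] = 1.0             # Dirichlet rows: psi = exp(-g)

        rhs = np.zeros(n)
        rhs[0], rhs[-1] = np.exp(-g[0]), np.exp(-g[-1])

        psi = np.linalg.solve(A, rhs)
        V = -np.log(psi)                      # value function of the control problem

    On this grid the (positive) off-diagonal entries of the discretized generator can be read as jump rates of a Markov chain on the nodes, which is the structure behind the Markov decision problem duality the abstract mentions.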

    A Radial Basis Collocation Method for Hamilton-Jacobi-Bellman Equations

    In this paper we propose a semi-meshless discretization method for approximating viscosity solutions of a first order Hamilton–Jacobi–Bellman (HJB) equation governing a class of nonlinear optimal feedback control problems. In this method, the spatial discretization is based on a collocation scheme using global radial basis functions (RBFs), and the time variable is discretized by a standard two-level time-stepping scheme with a splitting parameter θ. A stability analysis is performed, showing that the method is stable in time even for the explicit scheme with θ = 0. Since the time discretization is consistent, the method is also convergent in time. Numerical results, obtained to verify the usefulness of the method, demonstrate that it gives accurate approximations to both the control and state variables.
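
    As a hedged illustration of the two ingredients, global RBF collocation in space plus a θ-scheme in time, the following sketch advances a one-dimensional model HJB equation v_t + |v_x| = 0 with the explicit choice θ = 0; the Gaussian basis, shape parameter, and initial data are assumptions, not the paper's setup.

        import numpy as np

        n, eps, dt = 25, 8.0, 1e-3
        x = np.linspace(-1.0, 1.0, n)
        r = x[:, None] - x[None, :]

        A = np.exp(-(eps * r) ** 2)           # Gaussian RBF interpolation matrix
        B = -2.0 * eps**2 * r * A             # x-derivative of each basis function
        D = B @ np.linalg.inv(A)              # differentiation matrix: v_x ~ D v

        v = np.exp(-4.0 * x**2)               # smooth initial condition (assumed)
        vb = v[0]
        for _ in range(200):                  # two-level scheme with theta = 0
            v_x = D @ v                       # collocated spatial derivative
            v = v - dt * np.abs(v_x)          # v^{n+1} = v^n - dt * H(x, v_x^n)
            v[0] = v[-1] = vb                 # hold boundary values fixed (assumed)

    In practice the accuracy and stability of such a sketch also depend on the node distribution and the shape parameter, and the interpolation matrix becomes ill-conditioned as the basis flattens; the paper's stability analysis concerns its own scheme, not this toy example.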

    Higher-Order Methods for Determining Optimal Controls and Their Sensitivities

    The solution of optimal control problems through the Hamilton-Jacobi-Bellman (HJB) equation offers guaranteed satisfaction of both the necessary and sufficient conditions for optimality. However, finding an exact solution to the HJB equation is a near-impossible task for many optimal control problems. This thesis presents an approximation method for solving finite-horizon optimal control problems involving nonlinear dynamical systems. The method uses finite-order approximations of the partial derivatives of the cost-to-go function and successive higher-order differentiations of the HJB equation. Natural byproducts of the proposed method provide sensitivities of the controls to changes in the initial states, which can be used to approximate the solutions of neighboring optimal control problems. For highly nonlinear problems, the method is modified to calculate control sensitivities about a nominal trajectory. In this framework, the method is shown to provide accurate control sensitivities at much lower orders of approximation. Several numerical examples are presented to illustrate both applications of the approximation method.
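
    A scalar LQR problem gives a minimal, hedged picture of the idea (it is not the thesis's higher-order scheme): with cost-to-go V(x, t) = p(t)x², the HJB equation reduces to a Riccati ODE for p(t), and the feedback gain -b·p(t)/r is exactly the sensitivity du*/dx of the optimal control to the state. All problem data below are assumed for illustration.

        import numpy as np

        a, b = 1.0, 1.0                 # dynamics dx/dt = a*x + b*u (assumed)
        q, r_w, qT = 1.0, 0.1, 1.0      # running and terminal cost weights (assumed)
        T, nsteps = 1.0, 1000
        dt = T / nsteps

        # Integrate the Riccati equation -p' = 2*a*p + q - b**2 * p**2 / r_w
        # backward in time from the terminal condition p(T) = qT.
        p = qT
        gains = np.empty(nsteps + 1)
        gains[-1] = -b * p / r_w
        for k in range(nsteps - 1, -1, -1):
            p += dt * (2.0 * a * p + q - b**2 * p**2 / r_w)  # explicit Euler, backward
            gains[k] = -b * p / r_w     # feedback gain = du*/dx at time t_k

        # The optimal control is u*(x, t_k) = gains[k] * x, so gains[k] is the
        # sensitivity of the control to a change in the state at time t_k.

    For nonlinear dynamics the cost-to-go is no longer quadratic, which is where the higher-order approximations of its partial derivatives described in the abstract come in.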