
    Solving the Hamilton-Jacobi-Bellman Equation for a Stochastic System with State Constraints

    We present a method for solving the Hamilton-Jacobi-Bellman (HJB) equation for a stochastic system with state constraints. A variable transformation is introduced that turns the HJB equation into a combination of a linear eigenvalue problem, a set of partial differential equations (PDEs), and a point-wise equation. For a fixed solution to the eigenvalue problem, the PDEs are linear and the point-wise equation is quadratic, indicating that the problem can be solved efficiently using an iterative scheme. As an example, we numerically solve for the optimal control of a Linear Quadratic Gaussian (LQG) system with state constraints. A reasonably accurate solution is obtained even with a very small number of collocation points (three in each dimension), which suggests that the method could be used on high-order systems, mitigating the curse of dimensionality.
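
    The abstract describes an alternating structure: hold one sub-problem fixed, solve linear equations for the value function, then perform a pointwise quadratic update of the control, and iterate. The sketch below illustrates that pattern with a generic policy-iteration scheme for a 1-D discounted stochastic LQR in which the state constraint is handled by a soft penalty. It is not the paper's transformation or eigenvalue formulation; the dynamics, cost weights, penalty treatment, and finite-difference discretisation are all assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's method): policy iteration for a 1-D
# discounted stochastic LQR with a soft state constraint, discretised by
# finite differences. All model parameters below are assumed for the demo.
import numpy as np

# dynamics: dx = (a*x + b*u) dt + sigma dW,  running cost q*x^2 + r*u^2
a, b, sigma = 0.5, 1.0, 0.3
q, r, rho = 1.0, 0.1, 0.5          # state/control weights, discount rate
x_max = 1.0                        # soft state constraint |x| <= x_max
penalty = 1e3                      # quadratic penalty outside the constraint

N = 201
x = np.linspace(-1.5, 1.5, N)
h = x[1] - x[0]

def stage_cost(xg, ug):
    c = q * xg**2 + r * ug**2
    c += penalty * np.maximum(np.abs(xg) - x_max, 0.0)**2  # constraint penalty
    return c

def solve_value(u):
    """Solve the linear PDE  rho*V = cost + f*V' + 0.5*sigma^2*V''  for fixed u."""
    f = a * x + b * u
    A = np.zeros((N, N))
    rhs = stage_cost(x, u)
    for i in range(1, N - 1):
        # central second derivative, upwind first derivative
        A[i, i] = rho + sigma**2 / h**2
        A[i, i - 1] = -0.5 * sigma**2 / h**2
        A[i, i + 1] = -0.5 * sigma**2 / h**2
        if f[i] >= 0:
            A[i, i]     += f[i] / h
            A[i, i + 1] += -f[i] / h
        else:
            A[i, i]     += -f[i] / h
            A[i, i - 1] += f[i] / h
    # crude Neumann boundaries: V' = 0 at both ends of the grid
    A[0, 0], A[0, 1] = 1.0, -1.0
    A[-1, -1], A[-1, -2] = 1.0, -1.0
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

# policy iteration: linear solve for V, then a pointwise quadratic update of u
u = np.zeros(N)
for it in range(50):
    V = solve_value(u)
    dV = np.gradient(V, h)
    u_new = -b * dV / (2.0 * r)    # minimiser of the quadratic term in u
    if np.max(np.abs(u_new - u)) < 1e-6:
        break
    u = u_new

print("iterations:", it + 1, " V(0) =", V[N // 2])
```

    The loop mirrors the efficiency argument in the abstract: each iteration only requires one linear solve plus a closed-form pointwise minimisation, rather than solving the full nonlinear HJB equation at once.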

    Stochastic optimal control of state constrained systems

    No full text available here. Repository record: 96369.pdf (publisher's version, closed access).