
    Consistency of Generalized Finite Difference Schemes for the Stochastic HJB Equation

    We analyse a class of numerical schemes for solving the HJB equation of stochastic control problems that generalizes the usual finite difference method. The latter is known to be monotone, and hence valid, only if the scaled covariance matrix is diagonally dominant. We generalize this result: given the set of neighbouring points allowed to enter the scheme, we show how to compute the class of covariance matrices consistent with this set of points. We perform this computation for several cases in dimensions 2 to 4.
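
    The diagonal dominance condition above is easy to check numerically. Below is a minimal sketch (the function name and interface are ours, not the paper's): the usual finite difference scheme is monotone precisely when each diagonal entry of the scaled covariance matrix dominates the sum of the absolute off-diagonal entries in its row.

    import numpy as np

    def is_diagonally_dominant(a):
        # Classical monotonicity condition for the standard FD scheme:
        # a[i][i] >= sum over j != i of |a[i][j]| for every row i.
        a = np.asarray(a, dtype=float)
        off_diag = np.abs(a).sum(axis=1) - np.abs(np.diag(a))
        return bool(np.all(np.diag(a) >= off_diag))

    # A mild correlation keeps the scheme monotone; a strong one does not.
    print(is_diagonally_dominant([[1.0, 0.4], [0.4, 1.0]]))  # True
    print(is_diagonally_dominant([[1.0, 1.5], [1.5, 4.0]]))  # False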

    Inverse stochastic optimal controls

    We study an inverse problem of stochastic optimal control for general diffusions, with a performance index that includes a quadratic penalty term on the control process. Under mild conditions on the drift, the volatility, and the state cost functions, and under the assumption that the optimal control lies in the interior of the control set, we show that our inverse problem is well-posed using a stochastic maximum principle. With this well-posedness, we reduce the inverse problem to a root-finding problem for the expectation of a random variable involving the value function, which has a unique solution. Based on this result, we propose a numerical method for the inverse problem that replaces the expectation above with the arithmetic mean of observed optimal control processes and the corresponding state processes. Recent progress in the numerical analysis of Hamilton-Jacobi-Bellman equations makes the proposed method implementable in multi-dimensional cases. In particular, with the help of the kernel-based collocation method for Hamilton-Jacobi-Bellman equations, our method for the inverse problem still works even when an explicit form of the value function is unavailable. Several numerical experiments show that the numerical method recovers the unknown weight parameter with high accuracy.
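
    To make the reduction concrete, here is a hedged sketch of the root-finding step, assuming a scalar weight parameter theta and a placeholder model_mean_control(theta) returning the model expectation of the optimal control (in the paper this would come from solving the HJB equation, e.g. by kernel-based collocation); the bracket [lo, hi] and all names are illustrative, not the authors' interface.

    import numpy as np
    from scipy.optimize import brentq

    def estimate_weight(observed_controls, model_mean_control, lo, hi):
        # Replace the expectation in the root-finding problem by the
        # arithmetic mean of observed optimal controls, then solve
        # model_mean_control(theta) = empirical mean for theta.
        empirical_mean = np.mean(observed_controls)
        f = lambda theta: model_mean_control(theta) - empirical_mean
        return brentq(f, lo, hi)  # unique root by well-posedness

    # Toy check with a hypothetical model E[u*] = 1/theta, true theta = 2.
    rng = np.random.default_rng(0)
    samples = 0.5 + 0.01 * rng.standard_normal(1000)
    print(estimate_weight(samples, lambda th: 1.0 / th, 0.1, 10.0))  # ~2.0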

    The non-locality of Markov chain approximations to two-dimensional diffusions

    In this short paper, we consider discrete-time Markov chains on lattices as approximations to continuous-time diffusion processes. The approximations can be interpreted as finite difference schemes for the generator of the process. We derive conditions on the diffusion coefficients that permit transition probabilities to match the first and second local moments. We derive a novel formula expressing how the matching becomes more difficult for larger (absolute) correlations and strongly anisotropic processes, such that instantaneous moves to more distant neighbours on the lattice have to be allowed. Roughly speaking, for non-zero correlations, the distance covered in one timestep is proportional to the ratio of volatilities in the two directions. We discuss the implications for Markov decision processes and the convergence analysis of approximations to Hamilton-Jacobi-Bellman equations in the Barles-Souganidis framework. Comment: Corrected two errata from the previous and journal versions: the definition of R in (5) and the summations in (7).
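
    A concrete way to see the obstruction is the classical nine-point construction for a two-dimensional diffusion with zero drift (a sketch in our own notation, not the paper's scheme): matching both variances and the covariance forces some transition probabilities to become negative once |rho| exceeds min(sigma1/sigma2, sigma2/sigma1), which is exactly when moves to more distant neighbours become necessary.

    def nine_point_probabilities(sig1, sig2, rho, dt, h):
        # Transition probabilities on a square lattice with spacing h that
        # match the local covariance (sig1^2, sig2^2, rho*sig1*sig2) * dt.
        # A negative entry signals that nearest neighbours do not suffice.
        s = dt / h**2
        cross = abs(rho) * sig1 * sig2
        p = {
            (1, 1):   0.5 * s * max(rho, 0.0) * sig1 * sig2,
            (-1, -1): 0.5 * s * max(rho, 0.0) * sig1 * sig2,
            (1, -1):  0.5 * s * max(-rho, 0.0) * sig1 * sig2,
            (-1, 1):  0.5 * s * max(-rho, 0.0) * sig1 * sig2,
            (1, 0):   0.5 * s * (sig1**2 - cross),
            (-1, 0):  0.5 * s * (sig1**2 - cross),
            (0, 1):   0.5 * s * (sig2**2 - cross),
            (0, -1):  0.5 * s * (sig2**2 - cross),
        }
        p[(0, 0)] = 1.0 - sum(p.values())  # requires dt small enough
        return p, all(v >= 0.0 for v in p.values())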

    Consistency of a simple multidimensional scheme for Hamilton–Jacobi–Bellman equations

    This Note presents an approximation scheme for second-order Hamilton-Jacobi-Bellman equations arising in stochastic optimal control. The scheme is based on a Markov chain approximation method and is easy to implement in any dimension. The consistency of the scheme is proved, which guarantees its convergence. To cite this article: R. Munos, H. Zidani, C. R. Acad. Sci. Paris, Ser. I 340 (2005).
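
    Once a consistent approximating chain is in hand, the control problem reduces to dynamic programming on that chain. The following is a generic value iteration sketch under stated assumptions (finitely many controls, discounted running cost); it illustrates the Markov chain approximation idea rather than reproducing the Note's specific scheme.

    import numpy as np

    def value_iteration(P, cost, dt, r=0.0, tol=1e-8, max_iter=10_000):
        # P[a]: transition matrix of the approximating chain under control a.
        # cost[a]: running cost vector under control a.
        # Iterates V <- min over a of (cost[a]*dt + exp(-r*dt) * P[a] @ V).
        V = np.zeros(P[0].shape[0])
        disc = np.exp(-r * dt)
        for _ in range(max_iter):
            Q = np.stack([cost[a] * dt + disc * (P[a] @ V) for a in range(len(P))])
            V_new = Q.min(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        return V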