62 research outputs found

    Robustness of Stochastic Optimal Control to Approximate Diffusion Models under Several Cost Evaluation Criteria

    In control theory, a nominal model is typically assumed, based on which an optimal control is designed and then applied to the actual (true) system. This gives rise to the problem of performance loss due to the mismatch between the true model and the assumed model. The robustness problem in this context is to show that the error due to this mismatch decreases to zero as the assumed model approaches the true model. We study this problem when the state dynamics of the system are governed by controlled diffusion processes. In particular, we discuss continuity and robustness properties of finite-horizon and infinite-horizon α-discounted/ergodic optimal control problems for a general class of non-degenerate controlled diffusion processes, as well as of optimal control up to an exit time. Under a general set of assumptions and a convergence criterion on the models, we first establish that the optimal value of the approximate model converges to the optimal value of the true model. We then establish that the error incurred by applying a control policy, designed for an incorrectly estimated model, to the true model decreases to zero as the incorrect model approaches the true model. Compared to related results in the discrete-time setup, the continuous-time theory lets us exploit the strong regularity properties of solutions to the optimality (HJB) equations, via the theory of uniformly elliptic PDEs, to arrive at strong continuity and robustness properties. Comment: 33 pages
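    A minimal statement of the setting sketched in this abstract, in standard notation (the symbols b, σ, c, and α are generic textbook choices, not taken from the paper):

```latex
% State dynamics and alpha-discounted cost:
%   dX_t = b(X_t, U_t)\,dt + \sigma(X_t)\,dW_t
\[
  J_\alpha(x, U) = \mathbb{E}_x\!\left[\int_0^\infty e^{-\alpha t}\, c(X_t, U_t)\,dt\right],
  \qquad V_\alpha(x) = \inf_U J_\alpha(x, U).
\]
% The value function solves the HJB equation, with a = \sigma\sigma^\top:
\[
  \min_{u}\Big[\, b(x,u)\cdot\nabla V_\alpha(x)
  + \tfrac{1}{2}\,\mathrm{tr}\big(a(x)\nabla^2 V_\alpha(x)\big) + c(x,u) \Big]
  = \alpha\, V_\alpha(x).
\]
% Robustness statement: if \pi_n is optimal for an approximate model
% (b_n, \sigma_n, c_n) converging to the true model, then applying \pi_n
% to the true system yields J_\alpha(x, \pi_n) \to V_\alpha(x).
```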

    Yet again on iteration improvement for averaged expected cost control for 1D ergodic diffusions

    The paper is a full version of the short presentation in \cite{amv17}. Ergodic control for a one-dimensional controlled diffusion is tackled; both the drift and the diffusion coefficient may depend on the strategy, which is assumed Markovian. The ergodic HJB equation is established, existence and uniqueness of its solution are proved, and convergence of the reward improvement algorithm is shown. Comment: 28 pages, 30 references
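    The flavor of such an improvement (policy) iteration can be sketched on a finite-state approximation. The sketch below uses a Kushner-type Markov chain discretization of a 1D controlled diffusion with control-dependent drift and diffusion coefficient; the dynamics and cost are invented for illustration and are not the paper's example, and the paper's continuous-space analysis is of course much finer than this grid scheme.

```python
import numpy as np

# Hypothetical 1D ergodic control problem on [0, 1] (illustrative choice):
#   dX = b(X, u) dt + sigma(u) dW,  running cost c(x, u)
# Both drift and diffusion depend on the Markovian control, as in the paper.
n, h = 51, 1.0 / 50
xs = np.linspace(0.0, 1.0, n)
controls = np.linspace(-1.0, 1.0, 5)

def b(x, u):      # controlled drift
    return u - (x - 0.5)

def sigma(u):     # controlled diffusion coefficient (non-degenerate)
    return 0.5 + 0.2 * abs(u)

def c(x, u):      # running cost
    return (x - 0.5) ** 2 + 0.1 * u ** 2

def local_step(i, x, u):
    """Kushner discretization: up/down probabilities and interpolation time."""
    s2, bi = sigma(u) ** 2, b(x, u)
    Q = s2 + h * abs(bi)
    pu = (s2 / 2 + h * max(bi, 0.0)) / Q
    pd = (s2 / 2 + h * max(-bi, 0.0)) / Q
    return pu, pd, h ** 2 / Q

def evaluate(u_vec):
    """Policy evaluation: solve the Poisson equation
    h_i = (c_i - rho) dt_i + sum_j P_ij h_j  with  h_0 = 0,
    for the bias vector and the average cost rho of a fixed Markov policy."""
    P, dt = np.zeros((n, n)), np.zeros(n)
    cost = np.zeros(n)
    for i, (x, u) in enumerate(zip(xs, u_vec)):
        pu, pd, dti = local_step(i, x, u)
        P[i, min(i + 1, n - 1)] += pu   # reflect at the right boundary
        P[i, max(i - 1, 0)] += pd       # reflect at the left boundary
        dt[i], cost[i] = dti, c(x, u)
    A = np.eye(n) - P
    A[:, 0] = dt                        # reuse column 0 for rho (h_0 pinned to 0)
    sol = np.linalg.solve(A, cost * dt)
    rho, hbias = sol[0], np.concatenate([[0.0], sol[1:]])
    return rho, hbias

def improve(u_vec, rho, hbias):
    """Policy improvement: pointwise minimization over the control set."""
    new = u_vec.copy()
    for i, x in enumerate(xs):
        vals = []
        for u in controls:
            pu, pd, dti = local_step(i, x, u)
            vals.append((c(x, u) - rho) * dti
                        + pu * hbias[min(i + 1, n - 1)]
                        + pd * hbias[max(i - 1, 0)])
        new[i] = controls[int(np.argmin(vals))]
    return new

policy = np.zeros(n)                    # start from u = 0 everywhere
history = []
for _ in range(20):
    rho, hbias = evaluate(policy)
    history.append(rho)
    policy = improve(policy, rho, hbias)
print(history[0], history[-1])
```

    On a unichain approximation like this one, the average cost is non-increasing across iterations, mirroring the convergence of the reward improvement algorithm proved in the paper.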

    Convex operator-theoretic methods in stochastic control

    This paper is about operator-theoretic methods for solving nonlinear stochastic optimal control problems to global optimality. These methods leverage the convex duality between optimally controlled diffusion processes and Hamilton-Jacobi-Bellman (HJB) equations for nonlinear systems in an ergodic Hilbert-Sobolev space. In detail, a generalized Bakry-Émery condition is introduced under which the global exponential stabilizability of a large class of nonlinear systems can be established. It is shown that this condition is sufficient to ensure the existence of solutions of the ergodic HJB equation for stochastic optimal control problems on infinite time horizons. Moreover, a novel dynamic programming recursion for bounded linear operators is introduced, which can be used to numerically solve HJB equations by a Galerkin projection.
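    The convex duality the abstract refers to can be illustrated by its standard finite-dimensional analogue, pairing the ergodic HJB equation with a linear program over stationary occupation measures (generic notation, not the paper's operator-theoretic formulation):

```latex
% Ergodic HJB: find a pair (V, \rho) with
\[
  \min_{u}\big[\, L^{u} V(x) + c(x,u) \,\big] = \rho,
  \qquad
  L^{u} = b(x,u)\cdot\nabla + \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}\nabla^{2}\big).
\]
% Convex (linear-programming) dual over stationary occupation measures
% \mu on the state-control space:
\[
  \rho^{*} = \inf_{\mu} \int c \, d\mu
  \quad \text{s.t.} \quad \int L^{u} f(x)\, \mu(dx, du) = 0
  \;\; \forall f \in C_c^{2}.
\]
```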