
    Stochastic HJB Equations and Regular Singular Points

    In this paper we show that some HJB equations arising from both finite and infinite horizon stochastic optimal control problems have a regular singular point at the origin. This makes them amenable to solution by power series techniques. This extends the work of Al'brekht, who showed that the HJB equations of an infinite horizon deterministic optimal control problem can have a regular singular point at the origin; Al'brekht solved these HJB equations by power series, degree by degree. In particular, we show that the infinite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a new type of algebraic Riccati equation, which we call the Stochastic Algebraic Riccati Equation (SARE). If the SARE can be solved, then one has a complete solution to this infinite horizon stochastic optimal control problem. We also show that a finite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a Stochastic Differential Riccati Equation (SDRE) that is well known. If these problems are the linear-quadratic-bilinear part of a nonlinear finite horizon stochastic optimal control problem, then we show how the higher degree terms of the solutions can be computed degree by degree. To our knowledge this computation is new.

    Control of Time-Varying Epidemic-Like Stochastic Processes and Their Mean-Field Limits

    The optimal control of epidemic-like stochastic processes is important both historically and for emerging applications today, where it can be especially important to include time-varying parameters that impact viral epidemic-like propagation. We connect the control of such stochastic processes with time-varying behavior to the stochastic shortest path problem and obtain solutions for various cost functions. Then, under a mean-field scaling, this general class of stochastic processes is shown to converge to a corresponding dynamical system. We analogously establish that the optimal control of this class of processes converges to the optimal control of the limiting dynamical system. Consequently, we study the optimal control of the dynamical system, where the comparison of the two controlled systems reveals various mathematical properties of interest.
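    The mean-field convergence described above can be illustrated on a toy example. The sketch below assumes a simple well-mixed SIS epidemic (not necessarily the model in the paper): for population size N, infections occur at rate βI(N-I)/N and recoveries at rate γI, and as N grows the infected fraction tracks the mean-field ODE di/dt = βi(1-i) - γi.

```python
import numpy as np

def sis_gillespie(N, beta, gamma, i0, T, rng):
    """Exact (Gillespie) simulation of a well-mixed SIS epidemic:
    infections at rate beta*I*(N-I)/N, recoveries at rate gamma*I.
    Returns the infected fraction at time T."""
    I, t = int(i0 * N), 0.0
    while t < T and 0 < I < N:
        r_inf = beta * I * (N - I) / N
        r_rec = gamma * I
        total = r_inf + r_rec
        t += rng.exponential(1.0 / total)
        if t >= T:
            break
        I += 1 if rng.random() < r_inf / total else -1
    return I / N

def sis_mean_field(beta, gamma, i0, T, dt=1e-3):
    """Euler integration of the mean-field limit di/dt = beta*i*(1-i) - gamma*i."""
    i = i0
    for _ in range(int(T / dt)):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i
```

For β = 2, γ = 1 the mean-field equation has the endemic equilibrium i* = 1 - γ/β = 0.5, and for large N the stochastic trajectory stays within O(1/√N) of the ODE solution over a fixed horizon.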

    Singularly perturbed forward-backward stochastic differential equations: application to the optimal control of bilinear systems

    We study linear-quadratic stochastic optimal control problems with bilinear state dependence for which the underlying stochastic differential equation (SDE) consists of slow and fast degrees of freedom. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced order effective dynamics in the time scale limit (using classical homogenization results), the associated optimal expected cost converges in the time scale limit to an effective optimal cost. This entails that we can well approximate the stochastic optimal control for the whole system by the reduced order stochastic optimal control, which is clearly easier to solve because of its lower dimensionality. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and show its applicability by a suitable numerical example.
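    The least squares Monte Carlo idea mentioned above can be sketched on a toy backward SDE. The example below is an assumption-laden illustration, not the paper's algorithm: it takes a scalar BSDE dY = -f(Y) dt + Z dW with terminal condition Y_T = g(X_T) driven by a Brownian forward state X = W, and at each backward time step approximates the conditional expectation E[Y_{k+1} | X_k] by least-squares regression on a quadratic polynomial basis.

```python
import numpy as np

def lsmc_bsde(g, f, T, n_steps, n_paths, rng):
    """Least-squares Monte Carlo for the (assumed) scalar BSDE
        dY_t = -f(Y_t) dt + Z_t dW_t,   Y_T = g(X_T),   X_t = W_t.
    Conditional expectations are approximated by regression of Y on a
    quadratic polynomial basis in X, stepping backward in time."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    Y = g(X[:, -1])                             # terminal condition
    for k in range(n_steps - 1, 0, -1):
        basis = np.vander(X[:, k], 3)           # columns [x^2, x, 1]
        coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
        Y = basis @ coef                        # regressed E[Y_{k+1} | X_k]
        Y = Y + f(Y) * dt                       # explicit driver step
    # at t = 0 the state is deterministic, so the projection is the mean
    return float(np.mean(Y) + f(np.mean(Y)) * dt)
```

With zero driver and g(x) = x², the exact value is Y_0 = E[W_T²] = T, and the regression basis captures the true conditional expectation E[W_T² | W_t] = W_t² + (T - t) exactly, so the only error is Monte Carlo noise.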

    Infinite Horizon and Ergodic Optimal Quadratic Control for an Affine Equation with Stochastic Coefficients

    We study quadratic optimal stochastic control problems for a state equation with control-dependent noise, perturbed by an affine term and with stochastic coefficients. Both the infinite horizon case and the ergodic case are treated. To this purpose we introduce a Backward Stochastic Riccati Equation and a dual backward stochastic equation, both considered on the whole time line. Under suitable stabilizability conditions we prove existence of a solution to these two equations, defined as the limit of suitable finite horizon approximating problems. This allows us to perform the synthesis of the optimal control.
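    The finite-horizon approximation strategy mentioned above has a simple deterministic analogue: integrate the differential Riccati equation backward from a zero terminal condition and let the horizon grow, so that the value at time 0 converges to the stabilizing solution of the algebraic Riccati equation. The sketch below shows this limiting behavior for a standard (deterministic, constant-coefficient) LQ problem as an assumed stand-in for the backward stochastic setting of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def riccati_horizon_limit(A, B, Q, R, T, dt=1e-3):
    """Integrate the differential Riccati equation
        -dP/dt = A'P + PA + Q - P B R^{-1} B' P,   P(T) = 0,
    backward in time with explicit Euler.  As the horizon T grows,
    P(0) approaches the stabilizing algebraic Riccati solution."""
    Rinv = np.linalg.inv(R)
    P = np.zeros_like(A, dtype=float)
    for _ in range(int(T / dt)):
        dP = A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P
        P = P + dt * dP
    return P
```

The convergence is exponential in T at a rate set by the closed-loop eigenvalues, which is what makes finite-horizon approximations a natural route to infinite-horizon and ergodic problems.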

    Stochastic maximum principle for optimal control of SPDEs

    In this note, we give the stochastic maximum principle for optimal control of stochastic PDEs in the general case, i.e. when the control domain need not be convex and the diffusion coefficient may contain a control variable.
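    Schematically, a maximum principle of this generality (in the spirit of Peng's result for SDEs; the exact SPDE formulation may differ) involves a first-order adjoint pair (p, q), a second-order adjoint P, and a variational inequality in which the second-order term compensates for the control entering the diffusion:

```latex
% (\bar x,\bar u): optimal pair; (p,q): first-order adjoints; P: second-order adjoint.
% Sign conventions for the Hamiltonian H vary between references.
H(t,\bar x_t,u,p_t,q_t) - H(t,\bar x_t,\bar u_t,p_t,q_t)
  + \tfrac{1}{2}\bigl\langle P_t\bigl(\sigma(t,\bar x_t,u)-\sigma(t,\bar x_t,\bar u_t)\bigr),\,
      \sigma(t,\bar x_t,u)-\sigma(t,\bar x_t,\bar u_t)\bigr\rangle \le 0,
  \qquad \forall u \in U,\ \text{a.e. } t,\ \mathbb{P}\text{-a.s.}
```

    When the diffusion does not depend on the control, the quadratic term vanishes and the condition reduces to the classical Hamiltonian maximum condition.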