
    Singularly perturbed forward-backward stochastic differential equations: application to the optimal control of bilinear systems

    We study linear-quadratic stochastic optimal control problems with bilinear state dependence for which the underlying stochastic differential equation (SDE) consists of slow and fast degrees of freedom. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order effective dynamics in the time scale limit (using classical homogenization results), the associated optimal expected cost converges in the time scale limit to an effective optimal cost. This entails that the stochastic optimal control for the whole system can be well approximated by the reduced-order stochastic optimal control, which is easier to solve because of its lower dimensionality. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and demonstrate its applicability with a suitable numerical example.
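
    As a rough illustration of the FBSDE reformulation mentioned above (a generic sketch, not the specific system derived in the paper), the value function V of such a control problem is encoded in a pair of equations: a forward SDE for the (uncontrolled) state and a backward SDE whose initial value gives the optimal cost,

        dX_s = b(X_s) ds + \sigma(X_s) dW_s,                  X_0 = x,
        dY_s = -f(s, X_s, Y_s, Z_s) ds + Z_s dW_s,            Y_T = g(X_T),

    with Y_s = V(s, X_s) and Z_s = \sigma(X_s)^T \nabla_x V(s, X_s), so that Y_0 equals the optimal expected cost. The concrete driver f depends on the cost functional and on how the control enters the bilinear dynamics.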

    Variational approach to rare event simulation using least-squares regression

    We propose an adaptive importance sampling scheme for the simulation of rare events when the underlying dynamics is given by a diffusion. The scheme is based on a Gibbs variational principle that is used to determine the optimal (i.e. zero-variance) change of measure and exploits the fact that the latter can be rephrased as a stochastic optimal control problem. The control problem can be solved by a stochastic approximation algorithm, using the Feynman-Kac representation of the associated dynamic programming equations, and we discuss numerical aspects for high-dimensional problems along with simple toy examples. Comment: 28 pages, 7 figures.
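
    The Gibbs variational principle referred to above can be stated compactly (a standard formulation; W denotes the path functional, e.g. an exit cost, whose exponential moment is to be estimated, and the paper's notation may differ):

        -log E_P[ exp(-W) ] = min_Q { E_Q[W] + KL(Q || P) },

    where Q ranges over probability measures absolutely continuous with respect to the path measure P. The minimizer dQ*/dP = exp(-W) / E_P[exp(-W)] is the zero-variance change of measure; restricting Q to measures induced by an added drift (via Girsanov's theorem) turns the minimization into the stochastic optimal control problem mentioned in the abstract.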

    Singularly Perturbed Forward-Backward Stochastic Differential Equations: Application to the Optimal Control of Bilinear Systems

    We study linear-quadratic stochastic optimal control problems with bilinear state dependence where the underlying stochastic differential equation (SDE) has multiscale features. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order dynamics in the scale separation limit (using classical homogenization results), the associated optimal expected cost converges to an effective optimal cost in the scale separation limit. This entails that we can approximate the stochastic optimal control for the whole system by a reduced-order stochastic optimal control, which is easier to compute because of the lower dimensionality of the problem. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and demonstrate its applicability with a suitable numerical example.
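
    A minimal numerical sketch of the least-squares Monte Carlo idea used to solve such FBSDEs (the one-dimensional coefficients b, f, g and all parameters below are toy placeholders chosen purely for illustration; the bilinear multiscale problem of the paper is different): conditional expectations in the backward time-stepping are replaced by regressions onto a polynomial basis.

        import numpy as np

        # Least-squares Monte Carlo for a decoupled FBSDE (1d toy example):
        #   dX = b(X) dt + sigma dW,   dY = -f(X, Y, Z) dt + Z dW,   Y_T = g(X_T).
        # Coefficients below are hypothetical placeholders, not the paper's bilinear multiscale model.
        def b(x):
            return -x                     # toy drift

        def f(x, y, z):
            return x**2 - 0.5 * z**2      # toy LQ-type driver

        def g(x):
            return x**2                   # toy terminal cost

        sigma, T, N, M = 1.0, 1.0, 50, 10_000   # volatility, horizon, time steps, sample paths
        dt = T / N
        rng = np.random.default_rng(0)

        # Forward Euler-Maruyama simulation of the state process X.
        X = np.zeros((N + 1, M))
        X[0] = 1.0
        dW = rng.normal(0.0, np.sqrt(dt), size=(N, M))
        for n in range(N):
            X[n + 1] = X[n] + b(X[n]) * dt + sigma * dW[n]

        def basis(x):
            # Polynomial regression basis; problem-adapted functions are used in practice.
            return np.vstack([np.ones_like(x), x, x**2, x**3]).T

        # Backward recursion: conditional expectations are replaced by least-squares projections.
        Y = g(X[N])
        for n in reversed(range(N)):
            Phi = basis(X[n])
            Z = Phi @ np.linalg.lstsq(Phi, Y * dW[n] / dt, rcond=None)[0]          # Z_n ~ E[Y_{n+1} dW_n | X_n] / dt
            Y = Phi @ np.linalg.lstsq(Phi, Y + f(X[n], Y, Z) * dt, rcond=None)[0]  # Y_n ~ E[Y_{n+1} + f dt | X_n]

        print("estimated Y_0 (optimal expected cost at X_0 = 1):", Y.mean())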

    Model reduction and uncertainty quantification of multiscale diffusions with parameter uncertainties using nonlinear expectations

    In this paper we study model reduction of linear and bilinear quadratic stochastic control problems with parameter uncertainties. Specifically, we consider slow-fast systems with unknown diffusion coefficient and study the convergence of the slow process in the limit of infinite scale separation. The aim of our work is twofold: first, to propose a general framework for averaging and homogenisation of multiscale systems with parametric uncertainties in the drift or in the diffusion coefficient; second, to use this framework to quantify the uncertainty in the reduced system by deriving a limit equation that represents a worst-case scenario for any given (possibly path-dependent) quantity of interest. We do so by reformulating the slow-fast system as an optimal control problem in which the unknown parameter plays the role of a control variable that can take values in a closed bounded set. For systems with unknown diffusion coefficient, the underlying stochastic control problem admits an interpretation in terms of a stochastic differential equation driven by a G-Brownian motion. We prove convergence of the slow process with respect to the nonlinear expectation on the probability space induced by the G-Brownian motion. The idea here is to formulate the nonlinear dynamic programming equation of the underlying control problem as a forward-backward stochastic differential equation in the G-Brownian motion framework (in brief: G-FBSDE), for which convergence can be proved by standard means. We illustrate the theoretical findings with two simple numerical examples that exploit the connection between fully nonlinear dynamic programming equations and second-order BSDEs (2BSDEs): a linear quadratic Gaussian regulator problem and a bilinear multiplicative triad that is a standard benchmark system in turbulence and climate modelling. Comment: 22 pages, 4 figures.
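
    The worst-case formulation can be sketched generically as follows (our own shorthand, not the paper's notation): for a quantity of interest \varphi, the nonlinear (G-)expectation is the upper expectation

        \hat{E}[ \varphi(X) ] = sup_{P in \mathcal{P}} E_P[ \varphi(X) ],

    where \mathcal{P} collects the laws obtained by letting the unknown diffusion coefficient range over the given closed bounded set. Roughly speaking, the convergence result transfers this worst-case value from the slow process of the multiscale system to the reduced limit equation driven by a G-Brownian motion.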