350 research outputs found

    A finite-dimensional approximation for partial differential equations on Wasserstein space

    This paper presents a finite-dimensional approximation for a class of partial differential equations on the space of probability measures. These equations are satisfied in the sense of viscosity solutions. The main result states the convergence of the viscosity solutions of the finite-dimensional PDE to the viscosity solutions of the PDE on Wasserstein space, provided that uniqueness holds for the latter. The proof relies heavily on an adaptation of the Barles & Souganidis monotone scheme to our context, as well as on a key precompactness result for semimartingale measures. We illustrate this result with the example of the Hamilton-Jacobi-Bellman and Bellman-Isaacs equations arising in stochastic control and differential games, and propose an extension to the case of path-dependent PDEs.
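    The Barles & Souganidis argument invoked above follows a standard pattern; a generic statement of the criterion (notation ours, not the paper's) reads:

```latex
% Barles--Souganidis convergence criterion (generic statement; notation ours).
% A scheme S_h(x, u_h(x), u_h) = 0 approximating F(x, u, Du, D^2u) = 0 requires:
\begin{aligned}
&\text{monotonicity:} && S_h(x,t,u) \le S_h(x,t,v) \quad \text{whenever } u \ge v,\\
&\text{stability:}    && \sup_h \|u_h\|_{\infty} < \infty,\\
&\text{consistency:}  && S_h\big(x,\varphi(x)+\xi,\varphi+\xi\big)
   \xrightarrow[h,\,\xi\to 0]{} F\big(x,\varphi(x),D\varphi(x),D^{2}\varphi(x)\big)
   \quad \text{for smooth } \varphi.
\end{aligned}
```

    If, in addition, a comparison principle holds for the limit equation F = 0, the approximations converge locally uniformly to its unique viscosity solution; the paper adapts this machinery to equations on Wasserstein space.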

    Stochastic Differential Games and Viscosity Solutions of Hamilton-Jacobi-Bellman-Isaacs Equations

    In this paper we study zero-sum two-player stochastic differential games with the help of the theory of Backward Stochastic Differential Equations (BSDEs). On the one hand, we generalize the results of the pioneering work of Fleming and Souganidis by considering cost functionals defined by controlled BSDEs and by allowing the admissible control processes to depend on events occurring before the beginning of the game (which implies that the cost functionals become random variables). On the other hand, the application of BSDE methods, in particular the notion of stochastic "backward semigroups" introduced by Peng, allows us to prove a dynamic programming principle for the upper and the lower value functions of the game in a straightforward way, without passing through additional approximations. The upper and the lower value functions are proved to be the unique viscosity solutions of the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations, respectively. For this, Peng's BSDE method is translated from the framework of stochastic control theory into that of stochastic differential games.
    Comment: The results were presented by Rainer Buckdahn at the "12th International Symposium on Dynamic Games and Applications" in Sophia-Antipolis (France) in June 2006; they were also reported by Juan Li at the 2nd Workshop on "Stochastic Equations and Related Topics" in Jena (Germany) in July 2006 and at a seminar at ETH Zurich in November 200
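    For orientation, the upper and lower HJBI equations referred to above have the following generic form (notation and sign conventions ours, not quoted from the paper); the two Hamiltonians differ only in the order of the inf and the sup:

```latex
% Generic upper/lower Hamilton--Jacobi--Bellman--Isaacs equations with terminal
% condition W^{\pm}(T,x) = \Phi(x); here \mathcal{L}^{u,v} denotes the generator
% of the controlled diffusion and f the driver of the controlled BSDE.
\begin{aligned}
\partial_t W^{+} + \inf_{u\in U}\sup_{v\in V}
  \big\{ \mathcal{L}^{u,v} W^{+} + f\big(t,x,W^{+},\sigma^{\top} D W^{+},u,v\big) \big\} &= 0,\\
\partial_t W^{-} + \sup_{v\in V}\inf_{u\in U}
  \big\{ \mathcal{L}^{u,v} W^{-} + f\big(t,x,W^{-},\sigma^{\top} D W^{-},u,v\big) \big\} &= 0.
\end{aligned}
```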

    Stochastic differential games for fully coupled FBSDEs with jumps

    This paper is concerned with stochastic differential games (SDGs) defined through fully coupled forward-backward stochastic differential equations (FBSDEs) which are governed by Brownian motion and a Poisson random measure. For these SDGs, the upper and the lower value functions are defined by the controlled fully coupled FBSDEs with jumps. Using a new transformation introduced in [6], we prove that the upper and the lower value functions are deterministic. Then, after establishing the dynamic programming principle for the upper and the lower value functions of these SDGs, we prove that the upper and the lower value functions are the viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations, respectively. Furthermore, for a special case (when $\sigma,\ h$ do not depend on $y,\ z,\ k$), under Isaacs' condition, we obtain the existence of the value of the game.
    Comment: 33 pages
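    The Isaacs condition mentioned at the end is, in generic form (notation ours), the requirement that the upper and the lower Hamiltonians coincide:

```latex
% Isaacs (minimax) condition: for all arguments (t, x, r, p, A),
\inf_{u\in U}\sup_{v\in V} H(t,x,r,p,A,u,v)
  \;=\;
\sup_{v\in V}\inf_{u\in U} H(t,x,r,p,A,u,v).
```

    When it holds, the upper and the lower HJBI equations coincide, and uniqueness of viscosity solutions forces the upper and lower value functions to agree, i.e. the game has a value.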

    Lyapunov stabilizability of controlled diffusions via a superoptimality principle for viscosity solutions

    We prove optimality principles for semicontinuous bounded viscosity solutions of Hamilton-Jacobi-Bellman equations. In particular, we provide a representation formula for viscosity supersolutions as value functions of suitable obstacle control problems. This result is applied to extend the Lyapunov direct method for stability to controlled Itô stochastic differential equations. We define the appropriate concept of Lyapunov function to study stochastic open-loop stabilizability in probability and local and global asymptotic stabilizability (or asymptotic controllability). Finally, we illustrate the theory with some examples.
    Comment: 22 pages
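    As a sketch of the objects involved (generic form for a controlled Itô SDE $dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t$; notation ours, not the paper's), the Lyapunov condition is phrased through the controlled generator:

```latex
% Controlled generator applied to a candidate Lyapunov function V:
\mathcal{L}^{u} V(x) = b(x,u)\cdot DV(x)
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x,u)\sigma(x,u)^{\top} D^{2}V(x)\big);
% stabilizability in probability then requires, roughly, a proper positive
% definite V satisfying
\inf_{u\in U}\, \mathcal{L}^{u} V(x) \le 0 \qquad \text{for } x \ne 0,
```

    with the inequality interpreted in the viscosity sense when V is merely semicontinuous, which is where the superoptimality principle enters.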

    Uniqueness Results for Second Order Bellman-Isaacs Equations under Quadratic Growth Assumptions and Applications

    In this paper, we prove a comparison result between semicontinuous viscosity sub- and supersolutions, growing at most quadratically, of second-order degenerate parabolic Hamilton-Jacobi-Bellman and Isaacs equations. As an application, we characterize the value function of a finite-horizon stochastic control problem with unbounded controls as the unique viscosity solution of the corresponding dynamic programming equation.
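    The comparison result has the standard shape (stated generically in our notation; the paper's contribution is allowing the quadratic growth):

```latex
% If u is an upper semicontinuous subsolution and v a lower semicontinuous
% supersolution of the parabolic equation on (0,T) \times \mathbb{R}^{n},
% both of at most quadratic growth, and u(T,\cdot) \le v(T,\cdot), then
u(t,x) \le v(t,x) \qquad \text{for all } (t,x) \in (0,T] \times \mathbb{R}^{n}.
```

    Applied with the terminal data of the control problem, this yields uniqueness of the viscosity solution and hence the stated characterization of the value function.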