
    Stochastic Differential Games and Viscosity Solutions of Hamilton-Jacobi-Bellman-Isaacs Equations

    In this paper we study zero-sum two-player stochastic differential games with the help of the theory of Backward Stochastic Differential Equations (BSDEs). On the one hand, we generalize the results of the pioneering work of Fleming and Souganidis by considering cost functionals defined by controlled BSDEs and by allowing the admissible control processes to depend on events occurring before the beginning of the game (which implies that the cost functionals become random variables). On the other hand, the application of BSDE methods, in particular the notion of stochastic "backward semigroups" introduced by Peng, allows us to prove a dynamic programming principle for the upper and the lower value functions of the game in a straightforward way, without recourse to additional approximations. The upper and the lower value functions are proved to be the unique viscosity solutions of the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations, respectively. For this, Peng's BSDE method is translated from the framework of stochastic control theory into that of stochastic differential games.
    Comment: The results were presented by Rainer Buckdahn at the "12th International Symposium on Dynamic Games and Applications" in Sophia-Antipolis (France) in June 2006; they were also reported by Juan Li at the 2nd Workshop on "Stochastic Equations and Related Topics" in Jena (Germany) in July 2006 and at a seminar at the ETH of Zurich in November 200
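    The lower Isaacs equation mentioned in the abstract can be sketched in a standard schematic form (the coefficients b and σ of the controlled diffusion, the BSDE generator f and the terminal cost Φ are assumed notation here, and sign and ordering conventions vary in the literature):

```latex
\begin{aligned}
&\partial_t W(t,x) + H^{-}\bigl(t,x,W,DW,D^2W\bigr) = 0
  \quad \text{on } [0,T)\times\mathbb{R}^n, \qquad W(T,x)=\Phi(x),\\[2pt]
&H^{-}(t,x,y,p,A) = \sup_{u\in U}\,\inf_{v\in V}
  \Bigl\{\tfrac12\operatorname{tr}\bigl(\sigma\sigma^{\top}(t,x,u,v)\,A\bigr)
  + b(t,x,u,v)\cdot p
  + f\bigl(t,x,y,p^{\top}\sigma(t,x,u,v),u,v\bigr)\Bigr\}.
\end{aligned}
```

    The upper equation is obtained by interchanging the sup and inf in the Hamiltonian; the two equations coincide precisely when the Isaacs condition holds.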

    Stochastic Verification Theorem of Forward-Backward Controlled Systems for Viscosity Solutions

    In this paper, we investigate the controlled system described by forward-backward stochastic differential equations, with the control contained in the drift, the diffusion and the generator of the BSDE. A new verification theorem is derived within the framework of viscosity solutions, without involving any derivatives of the value functions. It is worth pointing out that this theorem has wider applicability than the restrictive classical verification theorems. As a related problem, optimal stochastic feedback controls for the forward-backward system are discussed as well.
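    As a toy illustration of a forward-backward system with the control entering drift, diffusion and generator, one can estimate the backward component at time zero by Monte Carlo over Euler-Maruyama paths. All coefficients below are hypothetical placeholders, not the ones from the paper, and the generator is taken linear in y (and free of z) so that the BSDE reduces to a discounted expectation:

```python
import math
import random

random.seed(0)

# Illustrative placeholder coefficients (hypothetical, not from the paper):
# the scalar control u enters the forward drift b, the diffusion sigma and,
# through a discount rate r(u), the BSDE generator f(y) = -r(u) * y.
def b(x, u):      return u * x
def sigma(x, u):  return 0.2 + 0.1 * u
def r(u):         return 0.05 + 0.01 * u
def Phi(x):       return max(x - 1.0, 0.0)   # terminal cost

def cost(u, T=1.0, N=50, M=5000, x0=1.0):
    """Monte-Carlo estimate of Y_0 for a constant control u.

    For a generator linear in y and independent of z, the backward equation
    solves in closed form, so Y_0 = E[exp(-r(u) T) * Phi(X_T)] over
    Euler-Maruyama paths of the forward process X."""
    dt = T / N
    sq = math.sqrt(dt)
    total = 0.0
    for _ in range(M):
        x = x0
        for _ in range(N):
            x += b(x, u) * dt + sigma(x, u) * random.gauss(0.0, sq)
        total += math.exp(-r(u) * T) * Phi(x)
    return total / M

# Comparing two candidate constant controls, in the spirit of selecting a
# feedback control through the values of the cost functional.
y0, y1 = cost(0.0), cost(0.2)
```

    The point of the verification approach is that such comparisons of cost functionals can be made without differentiating the value function; the simulation above only illustrates how a control influences all three coefficients at once.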

    Regularity properties for general HJB equations. A BSDE method

    In this work we investigate regularity properties of a large class of Hamilton-Jacobi-Bellman (HJB) equations with or without obstacles, which can be interpreted stochastically in terms of a stochastic control system whose nonlinear cost functional is defined with the help of a backward stochastic differential equation (BSDE) or a reflected BSDE (RBSDE). More precisely, we prove that, firstly, the unique viscosity solution V(t,x) of such an HJB equation over the time interval [0,T], with or without an obstacle, and with terminal condition at time T, is jointly Lipschitz in (t,x), for t running over any compact subinterval of [0,T). Secondly, for the case that V solves an HJB equation without an obstacle or with an upper obstacle, it is shown under appropriate assumptions that V(t,x) is jointly semiconcave in (t,x). These results extend earlier ones by Buckdahn, Cannarsa and Quincampoix [1]. Our approach embeds their idea of time change into a BSDE analysis. We also provide an elementary counter-example which shows that, in general, when V solves an HJB equation with a lower obstacle, the semiconcavity does not hold.
    Comment: 30 pages
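    The reflected BSDE underlying the obstacle case can be sketched in its standard lower-obstacle formulation (the obstacle h and the triple (Y, Z, K) are assumed notation here, not fixed by the abstract): the solution consists of (Y, Z) together with an increasing process K satisfying

```latex
\begin{aligned}
&Y_t = \Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dW_s,\\
&Y_t \ge h(t, X_t)\ \ \text{for all } t\in[0,T],
\qquad \int_0^T \bigl(Y_t - h(t,X_t)\bigr)\,dK_t = 0.
\end{aligned}
```

    The minimality (Skorokhod) condition in the second line says that K pushes Y upward only when Y touches the obstacle, which is what makes the RBSDE the stochastic counterpart of the HJB equation with an obstacle.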

    Value in mixed strategies for zero-sum stochastic differential games without Isaacs condition

    In the present work, we consider 2-person zero-sum stochastic differential games with a nonlinear pay-off functional which is defined through a backward stochastic differential equation. Our main objective is to study, for such a game, the problem of the existence of a value without the Isaacs condition. Not surprisingly, this requires a suitable concept of mixed strategies which, to the authors' best knowledge, was not known in the context of stochastic differential games. For this, we consider nonanticipative strategies with a delay defined through a partition π of the time interval [0,T]. The underlying stochastic controls for both players are randomized along π by a hazard which is independent of the governing Brownian motion, and, knowing the information available at the left time point t_{j-1} of the subintervals generated by π, the controls of Players 1 and 2 are conditionally independent over [t_{j-1}, t_j). It is shown that the associated lower and upper value functions W^π and U^π converge uniformly on compacts to a function V, the so-called value in mixed strategies, as the mesh of π tends to zero. This function V is characterized as the unique viscosity solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation.
    Comment: Published at http://dx.doi.org/10.1214/13-AOP849 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
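    The role of mixing when the Isaacs condition fails can be seen already in a one-shot matrix game; the following sketch (the payoff matrix and the grid search are purely illustrative, not taken from the paper) shows that the pure-strategy lower and upper values of matching pennies differ, while randomized strategies close the gap:

```python
# Matching pennies: a zero-sum matrix game where sup inf < inf sup over pure
# actions (an Isaacs-type condition fails), yet mixing restores a value.
A = [[1.0, -1.0],
     [-1.0, 1.0]]

def payoff(p, q):
    """Expected payoff to Player 1 when the players mix with p and q."""
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

# Pure-strategy lower and upper values: they differ, so no pure value exists.
lower_pure = max(min(row) for row in A)                              # sup_i inf_j
upper_pure = min(max(A[i][j] for i in range(2)) for j in range(2))   # inf_j sup_i

# Crude grid search over mixed strategies (p, 1-p) and (q, 1-q): both the
# lower and the upper mixed value land at 0, attained at p = q = 1/2.
grid = [k / 100 for k in range(101)]
lower_mixed = max(min(payoff((p, 1 - p), (q, 1 - q)) for q in grid) for p in grid)
upper_mixed = min(max(payoff((p, 1 - p), (q, 1 - q)) for p in grid) for q in grid)
```

    In the differential game above the analogue of this randomization is carried out locally, on each subinterval of the partition π, which is why the mixed value only emerges in the limit as the mesh of π tends to zero.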

    Integral-Partial Differential Equations of Isaacs' Type Related to Stochastic Differential Games with Jumps

    In this paper we study zero-sum two-player stochastic differential games with jumps with the help of the theory of Backward Stochastic Differential Equations (BSDEs). We generalize the results of Fleming and Souganidis [10] and those of Biswas [3] by considering a controlled stochastic system driven by a d-dimensional Brownian motion and a Poisson random measure, and by associating nonlinear cost functionals defined by controlled BSDEs. Moreover, unlike both papers cited above, we allow the admissible control processes of both players to depend on all events occurring before the beginning of the game. This quite natural extension allows the players to take such earlier events into account, and it even makes it easier to derive the dynamic programming principle. The price to pay is that the cost functionals become random variables, and so the upper and the lower value functions of the game are a priori random fields. The use of a new method allows us to prove that, in fact, the upper and the lower value functions are deterministic. On the other hand, the application of BSDE methods [18] allows us to prove a dynamic programming principle for the upper and the lower value functions in a very straightforward way, as well as the fact that they are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs type, respectively. Finally, the existence of the value of the game is obtained in this more general setting if the Isaacs condition holds.
    Comment: 30 pages
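    In the jump setting, the Hamiltonians of the Isaacs equations acquire a nonlocal integral term; schematically (with β the jump coefficient of the controlled system and λ the intensity measure of the Poisson random measure, both of which are assumed notation here):

```latex
\mathcal{B}^{u,v}W(t,x)
  = \int_E \Bigl[\, W\bigl(t,\, x + \beta(x,e,u,v)\bigr) - W(t,x)
  - DW(t,x)\cdot\beta(x,e,u,v) \,\Bigr]\,\lambda(de),
```

    and this second-order nonlocal operator enters the upper and lower Hamiltonians alongside the usual diffusion part, which is why the resulting equations are integral-partial differential equations rather than PDEs.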