
Stochastic Target Games and Dynamic Programming via Regularized Viscosity Solutions

Abstract

We study a class of stochastic target games in which one player aims to find a strategy such that the state process reaches a given target almost surely, no matter which action is chosen by the opponent. Our main result is a geometric dynamic programming principle that allows us to characterize the value function as the viscosity solution of a nonlinear partial differential equation. Because abstract measurable selection arguments cannot be used in this context, the main obstacle is the construction of measurable almost-optimal strategies. We propose a novel approach in which smooth supersolutions are used to define almost-optimal strategies of Markovian type, in the spirit of verification arguments for classical solutions of Hamilton--Jacobi--Bellman equations. The smooth supersolutions are constructed by an extension of Krylov's method of shaken coefficients. We apply our results to a problem of option pricing under model uncertainty with different interest rates for borrowing and lending.

Comment: To appear in MO
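As a rough sketch of the setting (the notation below is ours and is only an assumption about the formulation, not taken from the paper): write $X^{u[\nu],\nu}_{t,x}$ for the controlled state process started at $(t,x)$, where $u$ is a non-anticipating strategy of the first player and $\nu$ an adversarial control, and let $G$ denote the target set. The natural object is the reachability set
\[
  \Lambda(t) := \bigl\{ x : \exists\, u \ \text{such that}\ X^{u[\nu],\nu}_{t,x}(T) \in G \ \text{a.s. for every } \nu \bigr\},
\]
and a geometric dynamic programming principle of the kind described above would take the form, for stopping times $\theta$ with values in $[t,T]$,
\[
  \Lambda(t) = \bigl\{ x : \exists\, u \ \text{such that}\ X^{u[\nu],\nu}_{t,x}(\theta) \in \Lambda(\theta) \ \text{a.s. for every } \nu \bigr\},
\]
the step whose proof requires the measurable almost-optimal strategies discussed in the abstract.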
