
    Optimal stopping under adverse nonlinear expectation and related games

    We study the existence of optimal actions in a zero-sum game $\inf_{\tau}\sup_P E^P[X_{\tau}]$ between a stopper and a controller choosing a probability measure. This includes the optimal stopping problem $\inf_{\tau}\mathcal{E}(X_{\tau})$ for a class of sublinear expectations $\mathcal{E}(\cdot)$ such as the $G$-expectation. We show that the game has a value. Moreover, exploiting the theory of sublinear expectations, we define a nonlinear Snell envelope $Y$ and prove that the first hitting time $\inf\{t: Y_t = X_t\}$ is an optimal stopping time. The existence of a saddle point is shown under a compactness condition. Finally, the results are applied to the subhedging of American options under volatility uncertainty. Comment: Published at http://dx.doi.org/10.1214/14-AAP1054 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
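
    The backward recursion behind the nonlinear Snell envelope can be sketched in discrete time: $Y_t = \min(X_t, \sup_P E^P[Y_{t+1}])$, where the stopper minimizes and the controller picks the transition law from an ambiguity set at each step. A minimal sketch on a random walk follows; the reward $X_t(s)$ and the two candidate up-move probabilities are illustrative assumptions, not taken from the paper.

```python
# Discrete-time sketch of the nonlinear Snell envelope
# Y_t = min(X_t, sup_P E^P[Y_{t+1}]) for a stopper playing against a
# controller who chooses the up-move probability from {P_LO, P_HI}.
T = 5
P_LO, P_HI = 0.4, 0.6                  # hypothetical ambiguity set

def payoff(t, s):                      # hypothetical reward X_t(s)
    return max(4 - abs(s), 0)

# random-walk states at time t: s = -t, -t+2, ..., t
Y = {(T, s): payoff(T, s) for s in range(-T, T + 1, 2)}
for t in range(T - 1, -1, -1):
    for s in range(-t, t + 1, 2):
        up, dn = Y[(t + 1, s + 1)], Y[(t + 1, s - 1)]
        # sublinear expectation: the controller maximizes over P
        cont = max(p * up + (1 - p) * dn for p in (P_LO, P_HI))
        Y[(t, s)] = min(payoff(t, s), cont)

# the first hitting time of {Y_t = X_t} is an optimal stopping time;
# here Y_0(0) < X_0(0), so it is optimal to continue at the start
print(Y[(0, 0)], payoff(0, 0))
```

    In this toy example the envelope at the origin is strictly below the immediate reward, so the optimal rule is to wait until the first time $Y_t = X_t$ along the realized path.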

    Second order reflected backward stochastic differential equations

    In this article, we build upon the work of Soner, Touzi and Zhang [Probab. Theory Related Fields 153 (2012) 149-190] to define a notion of a second order backward stochastic differential equation reflected on a lower càdlàg obstacle. We prove existence and uniqueness of the solution under a Lipschitz-type assumption on the generator, and we investigate some links between our reflected 2BSDEs and nonclassical optimal stopping problems. Finally, we show that reflected 2BSDEs provide a super-hedging price for American options in a market with volatility uncertainty. Comment: Published at http://dx.doi.org/10.1214/12-AAP906 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: text overlap with arXiv:1003.6053 by other authors.
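
    In discrete time, the super-hedging role of the reflected equation reduces to the recursion "price = max(exercise value, worst-case continuation value over the volatility interval)". A minimal binomial sketch, with all parameters (two candidate volatilities, zero rate, symmetric steps) as illustrative assumptions:

```python
import math

# Discrete analogue of the American-option super-hedging price under
# volatility uncertainty: Y_t = max(payoff, sup over vols of E[Y_{t+1}]).
# Binomial steps with two candidate volatilities; all numbers are assumptions.
T, S0, K = 4, 100.0, 100.0
VOLS = (0.1, 0.3)                      # endpoints of the volatility interval
payoff = lambda s: max(K - s, 0.0)     # American put

def price(t, s):
    if t == T:
        return payoff(s)
    # worst case (sup over volatility) of the one-step expectation, zero rate
    cont = max(0.5 * (price(t + 1, s * math.exp(v)) +
                      price(t + 1, s * math.exp(-v))) for v in VOLS)
    return max(payoff(s), cont)

print(price(0, S0))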

    Nonzero-sum stochastic differential games between an impulse controller and a stopper

    We study a two-player nonzero-sum stochastic differential game, where one player controls the state variable via additive impulses, while the other player can stop the game at any time. The main goal of this work is to characterize Nash equilibria through a verification theorem, which identifies a new system of quasivariational inequalities whose solution gives the equilibrium payoffs together with the corresponding strategies. Moreover, we apply the verification theorem to a game with a one-dimensional state variable, evolving as a scaled Brownian motion, with linear payoffs and costs for both players. Two types of Nash equilibrium are fully characterized: semi-explicit expressions for the equilibrium strategies and associated payoffs are provided. Both equilibria are of threshold type: in one equilibrium the players' interventions are not simultaneous, while in the other the first player induces her competitor to stop the game. Finally, we provide some numerical results describing the qualitative properties of both types of equilibrium.
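
    The threshold structure is easy to picture by simulation: the impulse controller resets the state to a target level whenever it reaches an upper threshold, while the stopper ends the game when it falls to a lower one. The sketch below uses illustrative thresholds and an Euler step for the Brownian dynamics; none of these values are the paper's semi-explicit equilibrium quantities.

```python
import random

# Simulation of a hypothetical threshold-type profile: the controller
# impulses the state down to TARGET at B_IMPULSE; the stopper quits at A_STOP.
random.seed(0)
A_STOP, B_IMPULSE, TARGET = -1.0, 1.0, 0.0
SIGMA, DT = 1.0, 1e-3                  # scaled Brownian motion, Euler step

x, t, impulses = 0.0, 0.0, 0
while x > A_STOP:                      # stopper's threshold rule
    x += SIGMA * DT ** 0.5 * random.gauss(0.0, 1.0)
    t += DT
    if x >= B_IMPULSE:                 # controller's threshold rule
        x = TARGET
        impulses += 1

print(f"stopped at t={t:.3f} after {impulses} impulses, x={x:.3f}")
```

    Counting the impulses and the stopping time along many such paths is one way to reproduce qualitative pictures of the two equilibrium types numerically.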

    Mixed generalized Dynkin game and stochastic control in a Markovian framework

    We introduce a mixed generalized Dynkin game/stochastic control problem with $\mathcal{E}^f$-expectation in a Markovian framework. We study both the case when the terminal reward function is only Borelian and the case when it is continuous. We first establish a weak dynamic programming principle by using some refined results recently provided in \cite{DQS} and some properties of doubly reflected BSDEs with jumps (DRBSDEs). We then show a stronger dynamic programming principle in the continuous case, which cannot be derived from the weak one. In particular, we have to prove that the value function of the problem is continuous with respect to time $t$, which requires some technical tools of stochastic analysis and some new results on DRBSDEs. We finally study the links between our mixed problem and generalized Hamilton-Jacobi-Bellman variational inequalities in both cases.

    Drift Control with Discretionary Stopping for a Diffusion Process

    We consider stochastic control with discretionary stopping for the drift of a diffusion process over an infinite time horizon. The objective is to choose a control process and a stopping time to minimize the expectation of a convex terminal cost in the presence of a fixed operating cost and a control-dependent running cost per unit of elapsed time. Under appropriate conditions on the coefficients of the controlled diffusion, an optimal pair of control and stopping rules is shown to exist. Moreover, under the same assumptions, it is shown that the optimal control is a constant which can be computed fairly explicitly, and that it is optimal to stop the first time an appropriate interval is visited. We consider also a constrained version of the above problem, in which an upper bound on the expectation of available stopping times is imposed; we show that this constrained problem can be reduced to an unconstrained problem with an appropriate change of parameters and, as a result, solved by similar arguments. Comment: 22 pages.
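
    Because the optimal control is a constant and the optimal stopping rule is the first visit to an interval, the structure can be explored by brute force: simulate the diffusion under a few candidate constant drifts, stop on entering a candidate interval, and compare expected costs. The sketch below assumes a quadratic terminal cost, a quadratic running cost, and specific numerical values, all of which are illustrative, not the paper's.

```python
import math
import random

# Monte Carlo comparison of constant drifts with stopping on first entry
# into [-L, L]; cost = convex terminal cost + (operating + running cost) * time.
random.seed(1)
DT, C_OP, T_MAX = 1e-2, 0.1, 20.0      # Euler step, operating cost, time cap

g = lambda x: x * x                    # convex terminal cost (assumed form)
h = lambda u: 0.5 * u * u              # control-dependent running cost

def expected_cost(u, L, n_paths=200):
    total = 0.0
    for _ in range(n_paths):
        x, t = 1.0, 0.0                # start outside the stopping interval
        while abs(x) > L and t < T_MAX:
            x += u * DT + math.sqrt(DT) * random.gauss(0.0, 1.0)
            t += DT
        total += g(x) + (C_OP + h(u)) * t
    return total / n_paths

# compare a few constant drifts pushing the state toward the interval
best_cost, best_u = min((expected_cost(u, 0.5), u) for u in (-2.0, -1.0, -0.5, 0.0))
print(best_u, best_cost)
```

    In the paper's setting the optimal constant and interval come from the analysis rather than a grid search; this sketch only illustrates the shape of the trade-off between stronger drift and higher running cost.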