24 research outputs found

    Optimal stopping in a general framework

    Get PDF
    We study the optimal stopping time problem $v(S)=\operatorname{ess}\sup_{\theta \geq S} E[\phi(\theta)\,|\,\mathcal{F}_S]$, for any stopping time $S$, where the reward is given by a family $(\phi(\theta),\theta\in\mathcal{T}_0)$ of non-negative random variables indexed by stopping times. We solve the problem under weak assumptions of integrability and regularity on the reward family. More precisely, we only suppose $v(0)<+\infty$ and $(\phi(\theta),\theta\in\mathcal{T}_0)$ upper semicontinuous along stopping times in expectation. We show the existence of an optimal stopping time and obtain a characterization of the minimal and the maximal optimal stopping times. We also provide some local properties of the value function family. All the results are written in terms of families of random variables and are proven using only classical results of probability theory.
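    To fix ideas, a standard discrete-time illustration (not taken from the paper): when the reward is an adapted, integrable, non-negative process $(\phi_n)_{0\leq n\leq N}$, the value process is the Snell envelope defined backwards by $U_N=\phi_N$ and $U_n=\max\big(\phi_n,\ E[U_{n+1}\,|\,\mathcal{F}_n]\big)$; it satisfies $U_n=\operatorname{ess}\sup_{\theta\geq n}E[\phi_{\theta}\,|\,\mathcal{F}_n]$, and $\theta^{*}_n=\inf\{k\geq n:\ U_k=\phi_k\}$ is the minimal optimal stopping time. The framework above replaces this process by a family of random variables indexed by stopping times and relaxes the regularity required of it.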

    A Piecewise Deterministic Markov Toy Model for Traffic/Maintenance and Associated Hamilton-Jacobi Integrodifferential Systems on Networks

    Full text link
    We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (inspired by traffic models). We adapt the results in [H. M. Soner. Optimal control with state-space constraint. II. SIAM J. Control Optim., 24(6):1110-1122, 1986] to prove the regularity of the value function and the dynamic programming principle. Extending Krylov's "shaking the coefficients" method to networks, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton-Jacobi integrodifferential system. This ensures that the value function satisfies Perron's characterization of the (unique) candidate viscosity solution. Finally, we prove that the same kind of linearization can be obtained by combining linearization for classical (unconstrained) problems with cost penalization. The latter method works for very general near-viable systems (possibly without further controllability) and for discontinuous costs. Comment: accepted to Applied Mathematics and Optimization (01/10/2015).
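    As a schematic illustration of the dynamic programming principle invoked above (generic notation, assumed rather than taken from the paper): for a discounted infinite-horizon problem with running cost $\ell$, discount rate $\lambda>0$ and controlled dynamics $X^{x,\alpha}$ constrained to the network, the value function $v(x)=\inf_{\alpha}E\big[\int_0^{\infty}e^{-\lambda t}\ell(X_t^{x,\alpha},\alpha_t)\,dt\big]$ is expected to satisfy, for every stopping time $\theta$, $v(x)=\inf_{\alpha}E\big[\int_0^{\theta}e^{-\lambda t}\ell(X_t^{x,\alpha},\alpha_t)\,dt+e^{-\lambda\theta}v(X_{\theta}^{x,\alpha})\big]$. The linearized (dual) formulation mentioned in the abstract rewrites such a value as a linear optimization over a convenient set of probability measures instead of over controls.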

    Optimal multiple stopping time problem

    Full text link
    We study the optimal multiple stopping time problem defined for each stopping time $S$ by $v(S)=\operatorname{ess}\sup_{\tau_1,\ldots,\tau_d\geq S}E[\psi(\tau_1,\ldots,\tau_d)\,|\,\mathcal{F}_S]$. The key point is the construction of a new reward $\phi$ such that the value function $v(S)$ also satisfies $v(S)=\operatorname{ess}\sup_{\theta\geq S}E[\phi(\theta)\,|\,\mathcal{F}_S]$. This new reward $\phi$ is not a right-continuous adapted process as in the classical case, but a family of random variables. For such a reward, we prove a new existence result for optimal stopping times under weaker assumptions than in the classical case. This result is used to prove the existence of optimal multiple stopping times for $v(S)$ by a constructive method. Moreover, under strong regularity assumptions on $\psi$, we show that the new reward $\phi$ can be aggregated by a progressive process. This leads to new applications, particularly in finance (American options with multiple exercise times). Comment: Published at http://dx.doi.org/10.1214/10-AAP727 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
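    To give the flavor of the reduction (a sketch in the case $d=2$, using the notation of the abstract; the general case proceeds by induction on $d$): for each stopping time $\theta$ one may set $u_1(\theta)=\operatorname{ess}\sup_{\tau_1\geq\theta}E[\psi(\tau_1,\theta)\,|\,\mathcal{F}_{\theta}]$ and $u_2(\theta)=\operatorname{ess}\sup_{\tau_2\geq\theta}E[\psi(\theta,\tau_2)\,|\,\mathcal{F}_{\theta}]$, i.e. the value of the problem when one of the two times is frozen at $\theta$, and take as new reward $\phi(\theta)=\max\big(u_1(\theta),u_2(\theta)\big)$. Interpreting $\theta$ as the first exercise time $\tau_1\wedge\tau_2$, one then recovers $v(S)=\operatorname{ess}\sup_{\theta\geq S}E[\phi(\theta)\,|\,\mathcal{F}_S]$; note that $\phi$ is in general only a family of random variables indexed by stopping times, not an adapted process.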

    Optimal double stopping time

    Get PDF
    We consider the optimal double stopping time problem defined for each stopping time $S$ by $v(S)=\operatorname{ess}\sup\{E[\psi(\tau_1,\tau_2)\,|\,\mathcal{F}_S];\ \tau_1,\tau_2\geq S\}$. Following the optimal single stopping time problem, we study the existence of optimal stopping times and give a method to compute them. The key point is the construction of a new reward $\phi$ such that the value function $v(S)$ satisfies $v(S)=\operatorname{ess}\sup\{E[\phi(\tau)\,|\,\mathcal{F}_S];\ \tau\geq S\}$. Finally, we give an example of an American option with double exercise time. Comment: 6 pages.
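    One illustrative specification of such an option (an assumed example, not necessarily the one worked out in the note): for an underlying price process $(X_t)$, strike $K$ and interest rate $r$, a put with two exercise rights can be modelled by the reward $\psi(\tau_1,\tau_2)=e^{-r\tau_1}(K-X_{\tau_1})^{+}+e^{-r\tau_2}(K-X_{\tau_2})^{+}$, to which the double stopping framework above applies under suitable integrability.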

    Dynkin games in a general framework

    Full text link
    We revisit the Dynkin game problem in a general framework, improve classical results and relax some assumptions. The criterion is expressed in terms of families of random variables indexed by stopping times. We construct two non-negative supermartingale families $J$ and $J'$ whose finiteness is equivalent to Mokobodski's condition. Under a weak right-regularity assumption, the game is shown to be fair and $J-J'$ is shown to be the common value function. Existence of saddle points is derived under weak additional assumptions. All the results are written in terms of random variables and are proven by using only classical results of probability theory. Comment: Stochastics, published online: 10 Apr 201
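    For orientation, the standard notions behind "fair" (generic notation, assumed): writing $I(\tau,\sigma)$ for the payoff of the game when the maximizing player stops at $\tau$ and the minimizing player at $\sigma$, the upper and lower value families at a stopping time $S$ are $\overline{V}(S)=\operatorname{ess}\inf_{\sigma\geq S}\operatorname{ess}\sup_{\tau\geq S}E[I(\tau,\sigma)\,|\,\mathcal{F}_S]$ and $\underline{V}(S)=\operatorname{ess}\sup_{\tau\geq S}\operatorname{ess}\inf_{\sigma\geq S}E[I(\tau,\sigma)\,|\,\mathcal{F}_S]$, with $\underline{V}(S)\leq\overline{V}(S)$ always; the game is fair (has a value) at $S$ when the two coincide, and the abstract identifies this common value with $J(S)-J'(S)$.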

    Erratum: Optimal stopping time problem in a general framework

    Get PDF
    The proof of the second point of Proposition B11 given in the Appendix of Kobylanski and Quenez (2012) ([1]) is only valid in the case where the reward process is right-continuous. In this Erratum, we give the proof in the case where the reward is only right-upper-semicontinuous.

    Optimal multiple stopping time problem

    Get PDF
    We study the optimal multiple stopping time problem defined for each stopping time $S$ by $v(S)=\operatorname{ess}\sup_{\tau_1,\ldots,\tau_d\geq S}E[\psi(\tau_1,\ldots,\tau_d)\,|\,\mathcal{F}_S]$. The key point is the construction of a new reward $\phi$ such that the value function $v(S)$ also satisfies $v(S)=\operatorname{ess}\sup_{\theta\geq S}E[\phi(\theta)\,|\,\mathcal{F}_S]$. This new reward $\phi$ is not a right-continuous adapted process as in the classical case, but a family of random variables. For such a reward, we prove a new existence result for optimal stopping times under weaker assumptions than in the classical case. This result is used to prove the existence of optimal multiple stopping times for $v(S)$ by a constructive method. Moreover, under strong regularity assumptions on $\psi$, we show that the new reward $\phi$ can be aggregated by a progressive process. This leads to various applications, in particular in finance to American options with multiple exercise times.
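    As a small numerical companion to the multiple-exercise application (a minimal sketch under assumptions not taken from the paper: additive discounted put payoff, a Cox-Ross-Rubinstein binomial tree, and a one-step refraction period between exercises; the paper's framework covers general, possibly non-additive rewards and continuous time), the discrete-time recursion for $d$ exercise rights reads $V^{(0)}\equiv 0$, $V^{(k)}_N=\psi_N$ and $V^{(k)}_n=\max\big(\psi_n+E[V^{(k-1)}_{n+1}\,|\,\mathcal{F}_n],\ E[V^{(k)}_{n+1}\,|\,\mathcal{F}_n]\big)$ (discount factors omitted for brevity), which the following Python sketch implements:

        import numpy as np

        # Backward induction for an option with d exercise rights on a CRR binomial tree.
        # V[k][j] = value at the current node j with k exercise rights remaining.
        # Exercising one right pays the immediate payoff plus the discounted
        # continuation value with k-1 rights (one-step refraction period).
        def multiple_exercise_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, N=50, d=2):
            dt = T / N
            up = np.exp(sigma * np.sqrt(dt))
            dn = 1.0 / up
            disc = np.exp(-r * dt)
            p = (np.exp(r * dt) - dn) / (up - dn)      # risk-neutral up probability

            j = np.arange(N + 1)
            payoff = np.maximum(K - S0 * up**j * dn**(N - j), 0.0)
            V = [np.zeros(N + 1)] + [payoff.copy() for _ in range(d)]  # values at maturity

            for n in range(N - 1, -1, -1):             # roll back through the tree
                j = np.arange(n + 1)
                payoff = np.maximum(K - S0 * up**j * dn**(n - j), 0.0)
                for k in range(d, 0, -1):              # descending k: V[k-1] still holds time n+1 values
                    cont = disc * (p * V[k][1:n + 2] + (1 - p) * V[k][:n + 1])
                    exer = payoff + disc * (p * V[k - 1][1:n + 2] + (1 - p) * V[k - 1][:n + 1])
                    V[k] = np.maximum(cont, exer)
                V[0] = np.zeros(n + 1)                 # no rights left: value 0
            return V[d][0]

        print(multiple_exercise_put())                 # price with the default d = 2 rights

    With d = 1 this reduces to the usual American put recursion on the tree.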

    Dynkin games in a general framework

    Get PDF
    We revisit the Dynkin game problem in a general framework and relax some assumptions. The payoffs and the criterion are expressed in terms of families of random variables indexed by stopping times. We construct two non-negative supermartingale families $J$ and $J'$ whose finiteness is equivalent to Mokobodski's condition. Under a weak right-regularity assumption on the payoff families, the game is shown to be fair and $J-J'$ is shown to be the common value function. Existence of saddle points is derived under weak additional assumptions. All the results are written in terms of random variables and are proven by using only classical results of probability theory.
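    To recall what existence of a saddle point gives here (standard notions, generic notation): writing $I(\tau,\sigma)$ again for the payoff of the game, a pair of stopping times $(\tau^{*},\sigma^{*})$ with $\tau^{*},\sigma^{*}\geq S$ is a saddle point at $S$ if $E[I(\tau,\sigma^{*})\,|\,\mathcal{F}_S]\leq E[I(\tau^{*},\sigma^{*})\,|\,\mathcal{F}_S]\leq E[I(\tau^{*},\sigma)\,|\,\mathcal{F}_S]$ for all stopping times $\tau,\sigma\geq S$. A saddle point forces the upper and lower values to coincide, so its existence implies in particular that the game is fair, with common value $E[I(\tau^{*},\sigma^{*})\,|\,\mathcal{F}_S]$.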

    Large Deviations Principle by viscosity solutions: The case of diffusions with oblique Lipschitz reflections

    Get PDF
    We establish a Large Deviations Principle for diffusions with Lipschitz continuous oblique reflections on regular domains. The rate functional is given as the value function of a control problem and is proved to be good. The proof is based on a viscosity solution approach. The idea consists in interpreting the probabilities as solutions to some PDEs, making the logarithmic transform, passing to the limit, and then identifying the action functional as the solution of the limiting equation.
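    Schematically (a sketch of the standard Freidlin-Wentzell shape, with assumed generic notation; the oblique reflection requires the appropriate extra reflection term in the controlled dynamics): for small-noise reflected diffusions $dX^{\varepsilon}_t=b(X^{\varepsilon}_t)\,dt+\sqrt{\varepsilon}\,\sigma(X^{\varepsilon}_t)\,dW_t+\text{reflection}$, a rate functional expressed as the value of a control problem takes the form $I(\varphi)=\inf\big\{\tfrac{1}{2}\int_0^{T}|u(t)|^{2}\,dt:\ \dot{\varphi}(t)=b(\varphi(t))+\sigma(\varphi(t))u(t)+\text{reflection term},\ \varphi(0)=x\big\}$, and "good" means that the sublevel sets $\{I\leq c\}$ are compact.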