Optimal stopping in a general framework
We study the optimal stopping time problem $v(S)=\esssup\{E[\phi(\tau) | \F_S],\ \tau \geq S\}$, for any stopping time $S$, where the reward
is given by a family $(\phi(\tau))$ \emph{of non-negative random variables} indexed by stopping times. We solve the problem
under weak assumptions in terms of integrability and regularity of the reward
family. More precisely, we only suppose that $v(0) < +\infty$ and that the reward family is upper semicontinuous along stopping
times in expectation. We show the existence of an optimal stopping time and
obtain a characterization of the minimal and the maximal optimal stopping
times. We also provide some local properties of the value function family. All
the results are written in terms of families of random variables and are proven
by only using classical results of probability theory.
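For orientation, one standard way to formalize the regularity assumption above (our paraphrase in the notation of the abstract, not a quotation from the paper): for every stopping time $\theta$ and every sequence of stopping times $(\theta_n)$ converging to $\theta$,
\begin{equation*}
\limsup_{n \to \infty} E[\phi(\theta_n)] \leq E[\phi(\theta)].
\end{equation*}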
Mixed generalized Dynkin game and stochastic control in a Markovian framework
We introduce a mixed {\em generalized} Dynkin game/stochastic control problem with
${\cal E}^f$-expectation in a Markovian framework. We study both the case when
the terminal reward function is supposed to be Borelian only and the case when it is
continuous. We first establish a weak dynamic programming principle by using
some refined results recently provided in \cite{DQS} and some properties of
doubly reflected BSDEs with jumps (DRBSDEs). We then show a stronger dynamic
programming principle in the continuous case, which cannot be derived from the
weak one. In particular, we have to prove that the value function of the
problem is continuous with respect to time, which requires some technical
tools of stochastic analysis and some new results on DRBSDEs. We finally study
the links between our mixed problem and generalized Hamilton-Jacobi-Bellman
variational inequalities in both cases.
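As a rough orientation (a schematic rendering of ours, not the precise statement of the paper), such generalized Hamilton-Jacobi-Bellman variational inequalities combine a controlled generator with two obstacles $\xi \leq \zeta$:
\begin{equation*}
\min\Big\{ \max\Big\{ -\partial_t u - \sup_{\alpha}\big[ \mathcal{L}^{\alpha} u + f(\alpha, \cdot, u) \big],\ u - \zeta \Big\},\ u - \xi \Big\} = 0,
\end{equation*}
where $\mathcal{L}^{\alpha}$ denotes the integro-differential generator of the controlled state process and $f$ the driver of the BSDE; the exact operator and obstacles are specific to the paper.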
Generalized Dynkin Games and Doubly Reflected BSDEs with Jumps
We introduce a generalized Dynkin game problem with non-linear conditional
expectation ${\cal E}$ induced by a Backward Stochastic Differential Equation
(BSDE) with jumps. Let $\xi, \zeta$ be two RCLL adapted processes with $\xi \leq \zeta$. The criterion is given by \begin{equation*}
{\cal J}_{\tau, \sigma}= {\cal E}_{0, \tau \wedge \sigma }
\left(\xi_{\tau}\textbf{1}_{\{ \tau \leq
\sigma\}}+\zeta_{\sigma}\textbf{1}_{\{\sigma<\tau\}}\right)
\end{equation*} where $\tau$ and $\sigma$ are stopping times valued in
$[0,T]$. Under Mokobodski's condition, we establish the existence of a value
function for this game, i.e. $\inf_{\sigma} \sup_{\tau} {\cal J}_{\tau, \sigma} = \sup_{\tau} \inf_{\sigma} {\cal J}_{\tau, \sigma}$. This value can be
characterized via a doubly reflected BSDE. Using this characterization, we
provide some new results on these equations, such as comparison theorems and a
priori estimates. When $\xi$ and $\zeta$ are left upper semicontinuous along
stopping times, we prove the existence of a saddle point. We also study a
generalized mixed game problem when the players have two actions: continuous
control and stopping. We then address the generalized Dynkin game in a
Markovian framework and its links with parabolic partial integro-differential
variational inequalities with two obstacles.
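As a minimal illustration of the structure of such games (a toy analogue of ours, in discrete time and with a classical linear expectation rather than the nonlinear ${\cal E}$ of the paper), the game value associated with payoffs $\xi \leq \zeta$ is given by the doubly reflected backward recursion
\begin{equation*}
V_T = \xi_T, \qquad V_t = \min\Big( \zeta_t,\ \max\big( \xi_t,\ E[V_{t+1} \,|\, \F_t] \big) \Big), \quad t < T,
\end{equation*}
the stopper who receives $\xi$ maximizing and the stopper who pays $\zeta$ minimizing.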
A Weak Dynamic Programming Principle for Combined Optimal Stopping and Stochastic Control with ${\cal E}^f$-expectations
We study a combined optimal control/stopping problem under a nonlinear
expectation ${\cal E}^f$ induced by a BSDE with jumps, in a Markovian
framework. The terminal reward function is only supposed to be Borelian. The
value function $u$ associated with this problem is generally irregular. We
first establish a {\em sub- (resp. super-) optimality principle of dynamic
programming} involving its {\em upper- (resp. lower-) semicontinuous envelope}
$u^*$ (resp. $u_*$). This result, called {\em weak} dynamic programming
principle (DPP), extends that obtained in \cite{BT} in the case of a classical
expectation to the case of an ${\cal E}^f$-expectation and a Borelian terminal
reward function. Using this {\em weak} DPP, we then prove that $u^*$ (resp.
$u_*$) is a {\em viscosity sub- (resp. super-) solution} of a nonlinear
Hamilton-Jacobi-Bellman variational inequality.
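Here the semicontinuous envelopes are the classical ones (the symbols $u$, $u^*$ and $u_*$ are our notation for the value function and its envelopes): for a locally bounded function $u$,
\begin{equation*}
u^*(x) = \limsup_{y \to x} u(y), \qquad u_*(x) = \liminf_{y \to x} u(y),
\end{equation*}
so that $u^*$ is the smallest upper semicontinuous function above $u$ and $u_*$ the largest lower semicontinuous function below $u$.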
Optimal multiple stopping time problem
We study the optimal multiple stopping time problem defined for each stopping
time $S$ by $v(S)=\esssup\{E[\psi(\tau_1, \ldots, \tau_d) | \F_S],\ \tau_1, \ldots, \tau_d \geq S\}$. The key point is the construction
of a new reward $\phi$ such that the value function $v(S)$ also satisfies
$v(S)=\esssup\{E[\phi(\tau) | \F_S],\ \tau \geq S\}$.
This new reward $\phi$ is not a right-continuous adapted process as in the
classical case, but a family of random variables. For such a reward, we prove a
new existence result for optimal stopping times under weaker assumptions than
in the classical case. This result is used to prove the existence of optimal
multiple stopping times for $v(S)$ by a constructive method. Moreover, under
strong regularity assumptions on $\psi$, we show that the new reward $\phi$ can
be aggregated by a progressive process. This leads to new applications,
particularly in finance (applications to American options with multiple
exercise times).
Comment: Published at http://dx.doi.org/10.1214/10-AAP727 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
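The reduction to a single stopping problem can be previewed on a toy example (ours, not the paper's general construction): in discrete time, with an additive reward and distinct exercise dates, the value $V^{(k)}_t$ with $k$ exercise rights left and immediate payoff $g_t$ satisfies
\begin{equation*}
V^{(0)} \equiv 0, \qquad V^{(k)}_T = g_T, \qquad V^{(k)}_t = \max\Big( g_t + E[V^{(k-1)}_{t+1} \,|\, \F_t],\ E[V^{(k)}_{t+1} \,|\, \F_t] \Big),
\end{equation*}
so that each $V^{(k)}$ solves a single stopping problem whose ``new reward'' $g_t + E[V^{(k-1)}_{t+1} \,|\, \F_t]$ already accounts for the remaining exercise rights.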
Dynkin games in a general framework
We revisit the Dynkin game problem in a general framework, improve classical
results and relax some assumptions. The criterion is expressed in terms of
families of random variables indexed by stopping times. We construct two
nonnegative supermartingale families $J$ and $J'$ whose finiteness is
equivalent to Mokobodski's condition. Under a weak right-regularity
assumption, the game is shown to be fair and the difference $J - J'$ is shown to be the common
value function. Existence of saddle points is derived under some weak
additional assumptions. All the results are written in terms of random
variables and are proven by using only classical results of probability theory.
Comment: Stochastics, published online: 10 Apr 201
Optimal double stopping time
We consider the optimal double stopping time problem defined for each
stopping time $S$ by $v(S)=\esssup\{E[\psi(\tau_1, \tau_2) | \F_S],\ \tau_1,
\tau_2 \geq S \}$. Following the optimal one stopping time problem, we study
the existence of optimal stopping times and give a method to compute them. The
key point is the construction of a {\em new reward} $\phi$ such that the value
function $v(S)$ satisfies $v(S)=\esssup\{E[\phi(\tau) | \F_S],\ \tau \geq S \}$.
Finally, we give an example of an American option with double exercise time.
Comment: 6 pages
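The natural candidate for this new reward (our paraphrase of the construction) is obtained by optimizing over the remaining exercise time once the first one is fixed: for a stopping time $\theta$,
\begin{equation*}
\phi(\theta) = \max\big( u_1(\theta),\ u_2(\theta) \big), \qquad u_1(\theta) = \esssup\{E[\psi(\theta, \tau) | \F_\theta],\ \tau \geq \theta\}, \qquad u_2(\theta) = \esssup\{E[\psi(\tau, \theta) | \F_\theta],\ \tau \geq \theta\}.
\end{equation*}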
Erratum: Optimal stopping time problem in a general framework
The proof of the second point of Proposition B11 given in the Appendix of Kobylanski and Quenez (2012) ([1]) is only valid in the case where the reward process is right-continuous. In this Erratum, we give the proof in the case where the reward is only right upper semicontinuous.
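One common formulation of this regularity (our paraphrase, not a quotation from the Erratum): a reward process $(\phi_t)$ is right upper semicontinuous along stopping times if, for every stopping time $\theta$ and every nonincreasing sequence of stopping times $(\theta_n)$ with $\theta_n \downarrow \theta$,
\begin{equation*}
\limsup_{n \to \infty} \phi_{\theta_n} \leq \phi_{\theta} \quad \text{a.s.}
\end{equation*}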