Optimal stopping in a general framework
We study the optimal stopping time problem
v(S)=\esssup\{E[\phi(\tau) | \F_S], \tau \geq S \}, for any stopping time S,
where the reward is given by a family \emph{of nonnegative random variables}
(\phi(\tau)) indexed by stopping times. We solve the problem under weak
assumptions on the integrability and regularity of the reward family. More
precisely, we only suppose that the family is integrable and upper
semicontinuous along stopping times in expectation. We show the existence of an
optimal stopping time and obtain a characterization of the minimal and the
maximal optimal stopping times. We also provide some local properties of the
value function family. All the results are written in terms of families of
random variables and are proven using only classical results of probability
theory.
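In discrete time, the value family above reduces to the Snell envelope, computed by backward induction. A minimal sketch on a recombining binomial tree; the function names, tree parameters, and the toy American-put payoff are illustrative choices, not from the paper:

```python
# Snell envelope by backward induction: a discrete-time toy version of
# v(S) = esssup { E[phi(tau) | F_S], tau >= S } on a binomial tree.
# All names and the put payoff below are illustrative assumptions.

def snell_envelope(reward, T, p=0.5):
    """reward(t, k): payoff at time t after k up-moves.
    Returns V with V[(t, k)] = value of the stopping problem started there."""
    V = {(T, k): reward(T, k) for k in range(T + 1)}   # horizon: V_T = reward
    for t in range(T - 1, -1, -1):                     # backward induction
        for k in range(t + 1):
            cont = p * V[(t + 1, k + 1)] + (1 - p) * V[(t + 1, k)]
            V[(t, k)] = max(reward(t, k), cont)        # stop vs. continue
    return V

def minimal_optimal_time(V, reward, ks):
    """Minimal optimal stopping time along a path: first t where the envelope
    touches the reward. ks[t] = number of up-moves observed by time t."""
    for t, k in enumerate(ks):
        if V[(t, k)] <= reward(t, k):
            return t
    return len(ks) - 1

# Toy American put: stock S(t,k) = 4 * 2**k * 0.5**(t-k), strike 5.
put = lambda t, k: max(5 - 4 * 2 ** k * 0.5 ** (t - k), 0)
V = snell_envelope(put, T=2)
```

Here V[(0, 0)] = 1.75 strictly exceeds the immediate payoff 1, so it is optimal to continue at time 0; along the all-down path the minimal optimal stopping time is 1.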
A Piecewise Deterministic Markov Toy Model for Traffic/Maintenance and Associated Hamilton-Jacobi Integrodifferential Systems on Networks
We study optimal control problems in infinite horizon when the dynamics
belong to a specific class of piecewise deterministic Markov processes
constrained to star-shaped networks (inspired by traffic models). We adapt the
results in [H. M. Soner. Optimal control with state-space constraint. II. SIAM
J. Control Optim., 24(6):1110.1122, 1986] to prove the regularity of the value
function and the dynamic programming principle. Extending Krylov's "shaking
the coefficients" method to networks, we prove that the value function can be
seen as the solution to a linearized optimization problem set on a
convenient set of probability measures. The approach relies entirely on
viscosity arguments. As a by-product, the dual formulation guarantees that the
value function is the pointwise supremum over regular subsolutions of the
associated Hamilton-Jacobi integrodifferential system. This ensures that the
value function satisfies a Perron-type characterization as the (unique)
candidate viscosity solution. Finally, we prove that the same kind of
linearization can
be obtained by combining linearization for classical (unconstrained) problems
and cost penalization. The latter method works for very general near-viable
systems (possibly without further controllability) and discontinuous costs.
Comment: accepted to Applied Mathematics and Optimization (01/10/2015).
Optimal multiple stopping time problem
We study the optimal multiple stopping time problem defined for each stopping
time S by
v(S)=\esssup_{\tau_1,\cdots,\tau_d \geq S} E[\psi(\tau_1,\cdots,\tau_d) | \F_S].
The key point is the construction of a {\em new reward} \phi such that the
value function v also satisfies
v(S)=\esssup_{\theta \geq S} E[\phi(\theta) | \F_S].
This new reward \phi is not a right-continuous adapted process as in the
classical case, but a family of random variables. For such a reward, we prove a
new existence result for optimal stopping times under weaker assumptions than
in the classical case. This result is used to prove the existence of optimal
multiple stopping times by a constructive method. Moreover, under strong
regularity assumptions on the initial reward, we show that the new reward can
be aggregated by a progressive process. This leads to new applications,
particularly in finance (applications to American options with multiple
exercise times). Comment: Published at http://dx.doi.org/10.1214/10-AAP727 in
the Annals of Applied Probability (http://www.imstat.org/aap/) by the
Institute of Mathematical Statistics (http://www.imstat.org).
Optimal double stopping time
We consider the optimal double stopping time problem defined for each
stopping time by v(S)=\esssup\{E[\psi(\tau_1, \tau_2) | \F_S], \tau_1,
\tau_2 \geq S \}. Building on the optimal single stopping time problem, we study
the existence of optimal stopping times and give a method to compute them. The
key point is the construction of a {\em new reward} such that the value
function satisfies v(S)=\esssup\{E[\phi(\tau) | \F_S], \tau \geq S \}.
Finally, we give an example of an American option with double exercise time. Comment: 6 pages
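In a deterministic toy setting the essential supremum becomes a plain maximum over times, which makes the reduction to a single stopping problem easy to check numerically. Below, phi mirrors the new-reward construction (best value given that one of the two stops occurs at theta), while psi is an arbitrary made-up payoff; the construction, not the numbers, is the point:

```python
# Deterministic toy check of the reduction for double stopping:
#   v = max_{t1,t2 >= 0} psi(t1,t2) = max_{theta >= 0} phi(theta),
# where phi(theta) = max( max_{t2>=theta} psi(theta,t2),
#                         max_{t1>=theta} psi(t1,theta) ).
# psi is an arbitrary illustrative reward, not taken from the paper.

T = 5

def psi(t1, t2):
    return (t1 - 2) ** 2 + 3 * t2 - t1 * t2   # arbitrary toy reward

def phi(theta):
    # best value if the *first* stop occurs at theta (either coordinate)
    stop_first = max(psi(theta, t2) for t2 in range(theta, T + 1))
    stop_second = max(psi(t1, theta) for t1 in range(theta, T + 1))
    return max(stop_first, stop_second)

v_double = max(psi(t1, t2) for t1 in range(T + 1) for t2 in range(T + 1))
v_single = max(phi(theta) for theta in range(T + 1))
assert v_double == v_single   # the two formulations agree for any psi
```

Every pair (t1, t2) is covered by phi at theta = min(t1, t2), and every term of phi is itself a psi value, so the two maxima coincide; this is the deterministic shadow of the stochastic reduction.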
Dynkin games in a general framework
We revisit the Dynkin game problem in a general framework, improve classical
results and relax some assumptions. The payoffs and the criterion are
expressed in terms of families of random variables indexed by stopping times.
We construct two nonnegative supermartingale families whose finiteness is
equivalent to Mokobodzki's condition. Under a weak right-regularity assumption
on the payoff families, the game is shown to be fair, and the common value
function is identified. Existence of saddle points is derived under some weak
additional assumptions. All the results are written in terms of random
variables and are proven using only classical results of probability theory.
Comment: Stochastics, published online 10 Apr 201
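In discrete time with deterministic payoffs, the fairness of the game can be seen directly by backward induction: the value is the continuation value clamped between the two payoff sequences. A minimal sketch, assuming L_t <= U_t (a stand-in for the ordering typically imposed; the payoff numbers are made up):

```python
# Discrete-time, deterministic sketch of a zero-sum Dynkin game:
# the maximizer receives L[t] if she stops first, the minimizer pays U[t]
# if he stops first, with L[t] <= U[t]. Backward induction yields a single
# common value, i.e. the game is fair in this toy setting.
# All payoff values are illustrative, not from the paper.

L = [1, 4, 2, 3, 0]          # lower payoff (maximizer stops)
U = [5, 6, 7, 3, 0]          # upper payoff (minimizer stops), L <= U
T = len(L) - 1

V = [0] * (T + 1)
V[T] = L[T]                   # convention: the game ends at the horizon
for t in range(T - 1, -1, -1):
    # value = continuation value clamped between the two payoffs;
    # since L[t] <= U[t], min/max can be taken in either order
    V[t] = min(U[t], max(L[t], V[t + 1]))
```

At each t, the maximizer stops as soon as V[t] = L[t] and the minimizer as soon as V[t] = U[t], which yields a saddle point in this ordered deterministic case.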
Erratum: Optimal stopping time problem in a general framework
The proof of the second point of Proposition B11 given in the Appendix of Kobylanski and Quenez (2012) [1] is only valid in the case where the reward process is right-continuous. In this Erratum, we give the proof in the case where the reward is only right upper semicontinuous.
The Logarithmic Sobolev Constant of The Lamplighter
We give estimates on the logarithmic Sobolev constant of some finite
lamplighter graphs in terms of the spectral gap of the underlying base. We
also give some examples of applications.
Large Deviations Principle by viscosity solutions: The case of diffusions with oblique Lipschitz reflections
We establish a Large Deviations Principle for diffusions with Lipschitz continuous oblique reflections on regular domains. The rate functional is given as the value function of a control problem and is proved to be good. The proof is based on a viscosity solution approach: the idea consists in interpreting the probabilities as the solutions to some PDEs, making the logarithmic transform, passing to the limit, and then identifying the action functional as the solution of the limiting equation.