Dynamic robust duality in utility maximization
A celebrated financial application of convex duality theory gives an explicit
relation between the following two quantities:
(i) The optimal terminal wealth of the problem
to maximize the expected $U$-utility of the terminal wealth
generated by admissible portfolios in a market
with the risky asset price process modeled as a semimartingale;
(ii) The optimal scenario of the dual problem to minimize
the expected $V$-value of the density $\frac{dQ}{dP}$ over a family of equivalent local
martingale measures $Q$, where $V$ is the convex conjugate function of the
concave utility function $U$.
In this paper we consider markets modeled by It\^o-L\'evy processes. In the
first part we use the maximum principle in stochastic control theory to extend
the above relation to a \emph{dynamic} relation, valid for all times $t \in [0,T]$.
We prove in particular that the optimal adjoint process for the primal problem
coincides with the optimal density process, and that the optimal adjoint
process for the dual problem coincides with the optimal wealth process. In the
terminal time case $t = T$ we recover the classical duality
connection above. We get moreover an explicit relation between the optimal
portfolio and the optimal measure $Q^*$. We also obtain that the
existence of an optimal scenario is equivalent to the replicability of a
related $T$-claim.
In the second part we present robust (model uncertainty) versions of the
optimization problems in (i) and (ii), and we prove a similar dynamic relation
between them. In particular, we show how to get from the solution of one of the
problems to the other. We illustrate the results with explicit examples.
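In standard notation (assumed here, not taken from the abstract: $X_{\varphi}(T)$ denotes the terminal wealth generated by an admissible portfolio $\varphi$, and $\mathcal{M}$ the family of equivalent local martingale measures), the primal problem (i) and the dual problem (ii) can be sketched as:

```latex
\begin{align*}
\text{(i)}\quad & \sup_{\varphi \ \mathrm{admissible}}
    \mathbb{E}\big[\, U\big(X_{\varphi}(T)\big) \,\big], \\
\text{(ii)}\quad & \inf_{Q \in \mathcal{M}}
    \mathbb{E}\Big[\, V\Big(\tfrac{dQ}{dP}\Big) \Big],
\qquad V(y) := \sup_{x > 0}\,\{\, U(x) - x y \,\}.
\end{align*}
```

Formulations in the literature often include an additional scaling parameter $y > 0$ inside the dual, i.e. $V\big(y\,\tfrac{dQ}{dP}\big)$; the sketch above suppresses it for readability.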
A stochastic HJB equation for optimal control of forward-backward SDEs
We study optimal stochastic control problems of general coupled systems of
forward-backward stochastic differential equations with jumps. By means of the
It\^o-Ventzell formula the system is transformed to a controlled backward
stochastic partial differential equation (BSPDE) with jumps. Using a comparison
principle for such BSPDEs we obtain a general stochastic Hamilton-Jacobi-
Bellman (HJB) equation for such control problems. In the classical Markovian
case with optimal control of jump diffusions, the equation reduces to the
classical HJB equation.
The results are applied to study risk minimization in financial markets.
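For orientation, the classical HJB equation referred to in the Markovian case can be sketched, in standard (assumed) notation for a one-dimensional controlled jump diffusion with drift $b$, volatility $\sigma$, jump coefficient $\gamma$, L\'evy measure $\nu$, running profit $f$ and value function $\Phi$:

```latex
\begin{equation*}
\sup_{u}\Big\{ f(x,u) + \partial_t \Phi(t,x)
 + b(x,u)\,\partial_x \Phi(t,x)
 + \tfrac{1}{2}\sigma^{2}(x,u)\,\partial_{xx} \Phi(t,x)
 + \int_{\mathbb{R}} \big[\Phi(t,x+\gamma(x,u,z)) - \Phi(t,x)
 - \gamma(x,u,z)\,\partial_x \Phi(t,x)\big]\,\nu(dz)\Big\} = 0.
\end{equation*}
```

The stochastic HJB equation of the paper generalizes this deterministic PDE to a backward stochastic partial differential equation.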
Singular mean-field control games with applications to optimal harvesting and investment problems
This paper studies singular mean field control problems and singular mean
field stochastic differential games. Both sufficient and necessary conditions
for the optimal controls and for the Nash equilibrium are obtained. Under some
assumptions the optimality conditions for singular mean-field control are
reduced to a reflected Skorohod problem, whose solution is shown to exist and
to be unique. Applications are given to optimal harvesting of stochastic mean-field
systems, optimal irreversible investments under uncertainty and to mean-field
singular investment games. In particular, a simple singular mean-field
investment game is studied where the Nash equilibrium exists but is not unique.
A comparison theorem for backward SPDEs with jumps
In this paper we obtain a comparison theorem for backward stochastic partial
differential equations (BSPDEs) with jumps. We apply it to introduce
space-dependent convex risk measures as a model for risk in large systems of
interacting components.
Generalized Dynkin Games and Doubly Reflected BSDEs with Jumps
We introduce a generalized Dynkin game problem with nonlinear conditional
expectation ${\cal E}$ induced by a Backward Stochastic Differential Equation
(BSDE) with jumps. Let $\xi, \zeta$ be two RCLL adapted processes with $\xi \leq \zeta$. The criterion is given by \begin{equation*}
{\cal J}_{\tau, \sigma}= {\cal E}_{0, \tau \wedge \sigma }
\left(\xi_{\tau}\textbf{1}_{\{ \tau \leq
\sigma\}}+\zeta_{\sigma}\textbf{1}_{\{\sigma<\tau\}}\right)
\end{equation*} where $\tau$ and $\sigma$ are stopping times valued in
$[0,T]$. Under Mokobodski's condition, we establish the existence of a value
function for this game, i.e. $\inf_{\sigma} \sup_{\tau} {\cal J}_{\tau, \sigma} = \sup_{\tau} \inf_{\sigma} {\cal J}_{\tau, \sigma}$. This value can be
characterized via a doubly reflected BSDE. Using this characterization, we
provide some new results on these equations, such as comparison theorems and a
priori estimates. When $\xi$ and $\zeta$ are left upper semicontinuous along
stopping times, we prove the existence of a saddle point. We also study a
generalized mixed game problem when the players have two actions: continuous
control and stopping. We then address the generalized Dynkin game in a
Markovian framework and its links with parabolic partial integro-differential
variational inequalities with two obstacles.
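A saddle point $(\tau^*, \sigma^*)$ of such a game, when it exists, is characterized by the usual pair of inequalities (standard game-theoretic notation, a sketch rather than the paper's statement):

```latex
\begin{equation*}
{\cal J}_{\tau, \sigma^*} \;\le\; {\cal J}_{\tau^*, \sigma^*} \;\le\; {\cal J}_{\tau^*, \sigma}
\qquad \text{for all stopping times } \tau, \sigma \text{ valued in } [0,T],
\end{equation*}
```

i.e. neither player can improve the criterion by unilaterally deviating from the saddle point.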
Mixed generalized Dynkin game and stochastic control in a Markovian framework
We introduce a mixed {\em generalized} Dynkin game/stochastic control problem with
${\cal E}^f$-expectation in a Markovian framework. We study both the case when
the terminal reward function is supposed to be Borelian only and when it is
continuous. We first establish a weak dynamic programming principle by using
some refined results recently provided in \cite{DQS} and some properties of
doubly reflected BSDEs with jumps (DRBSDEs). We then show a stronger dynamic
programming principle in the continuous case, which cannot be derived from the
weak one. In particular, we have to prove that the value function of the
problem is continuous with respect to time $t$, which requires some technical
tools of stochastic analysis and some new results on DRBSDEs. We finally study
the links between our mixed problem and generalized Hamilton Jacobi Bellman
variational inequalities in both cases.
A Weak Dynamic Programming Principle for Combined Optimal Stopping and Stochastic Control with ${\cal E}^f$-expectations
We study a combined optimal control/stopping problem under a nonlinear
expectation ${\cal E}^f$ induced by a BSDE with jumps, in a Markovian
framework. The terminal reward function is only supposed to be Borelian. The
value function $u$ associated with this problem is generally irregular. We
first establish a {\em sub- (resp. super-) optimality principle of dynamic
programming} involving its {\em upper- (resp. lower-) semicontinuous envelope}
$u^*$ (resp. $u_*$). This result, called {\em weak} dynamic programming
principle (DPP), extends that obtained in \cite{BT} in the case of a classical
expectation to the case of an ${\cal E}^f$-expectation and Borelian terminal
reward function. Using this {\em weak} DPP, we then prove that $u^*$ (resp.
$u_*$) is a {\em viscosity sub- (resp. super-) solution} of a nonlinear
Hamilton-Jacobi-Bellman variational inequality.
- …