Stochastic optimal controls with delay
This thesis investigates stochastic optimal control problems with discrete delay and those with both discrete and exponential moving average delays, using the stochastic maximum principle, together with the methods of conjugate duality and dynamic programming.
To obtain the stochastic maximum principle, we first extend the conjugate duality method presented in [2, 44] to study a stochastic convex (primal) problem with discrete delay. An expression for the corresponding dual problem, as well as necessary and sufficient conditions for optimality of both problems, is derived. The novelty of our work is that, after a stochastic optimal control problem with delay is reformulated as a particular convex problem, the optimality conditions for convex problems yield the stochastic maximum principle for the control problem. In particular, if the control problem involves both types of delay and is jump-free, the stochastic maximum principle obtained in this thesis improves those obtained in [29, 30].
Adapting the technique used in [19, Chapter 3] to the stochastic context, we consider a class of stochastic optimal control problems with delay whose value functions are separable, i.e. can be expressed in terms of so-called auxiliary functions. The technique enables us to obtain second-order partial differential equations, satisfied by the auxiliary functions, which we call auxiliary HJB equations, together with the corresponding verification theorem. If both types of delay are involved, our auxiliary HJB equations generalize the HJB equations obtained in [22, 23], and our verification theorem improves the stochastic verification theorem there.
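As a generic illustration (the notation here is ours, not taken from the thesis), a controlled state equation combining the two kinds of delay the abstract mentions, a discrete delay $\delta$ and an exponential moving average of the past trajectory, is commonly written as:

```latex
% Controlled SDE with discrete delay X(t-\delta) and an exponential
% moving average Y(t) of the past path (generic notation):
\begin{aligned}
dX(t) &= b\bigl(t, X(t), X(t-\delta), Y(t), u(t)\bigr)\,dt
       + \sigma\bigl(t, X(t), X(t-\delta), Y(t), u(t)\bigr)\,dB(t),\\
Y(t)  &= \int_{-\delta}^{0} e^{\lambda s}\, X(t+s)\,ds ,
\end{aligned}
```

where $u$ is the control and $B$ a Brownian motion; the specific coefficients and admissibility conditions used in the thesis may differ.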
Cauchy-Lipschitz theory for fractional multi-order dynamics -- State-transition matrices, Duhamel formulas and duality theorems
The aim of the present paper is to contribute to the study of Cauchy problems involving Riemann-Liouville and Caputo fractional derivatives. First, existence-uniqueness results for solutions of non-linear Cauchy problems with vector fractional multi-order are established. A qualitative result about the behavior of local but non-global solutions is also provided. The major aim of this paper is then to introduce notions of fractional state-transition matrices and to derive fractional versions of the classical Duhamel formula. We also prove duality theorems relating left state-transition matrices to right state-transition matrices.
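For context (standard definitions, not specific to this paper), the Caputo derivative of order $0 < \alpha < 1$ and the classical Duhamel formula that the paper generalizes to the fractional setting read:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1:
{}^{C}\!D^{\alpha} x(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\, x'(s)\,ds .

% Classical Duhamel formula for x'(t) = A(t)x(t) + b(t), with
% state-transition matrix \Phi(t,s):
x(t) = \Phi(t,0)\,x_0 + \int_{0}^{t} \Phi(t,s)\, b(s)\,ds .
```

The fractional versions replace $\Phi$ by the fractional state-transition matrices introduced in the paper.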
Dynamic robust duality in utility maximization
A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities:
(i) the optimal terminal wealth of the problem of maximizing the expected $U$-utility of the terminal wealth generated by admissible portfolios in a market with the risky asset price process modeled as a semimartingale;
(ii) the optimal scenario of the dual problem of minimizing the expected $V$-value of $\frac{dQ}{dP}$ over a family of equivalent local martingale measures $Q$, where $V$ is the convex conjugate function of the concave utility function $U$.
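As a brief reminder (standard convex duality, generic notation not taken from the paper), the conjugate $V$ and the static duality relation between problems (i) and (ii) take the form:

```latex
% Convex conjugate of a concave utility U:
V(y) = \sup_{x>0}\,\bigl( U(x) - xy \bigr), \qquad y > 0,

% Classical static duality for initial wealth x:
\sup_{X_T}\, \mathbb{E}\bigl[U(X_T)\bigr]
  = \inf_{y>0}\,\inf_{Q}\,
    \Bigl( \mathbb{E}\Bigl[ V\Bigl( y\,\tfrac{dQ}{dP} \Bigr) \Bigr] + xy \Bigr),
```

where the supremum is over terminal wealths attainable from $x$ by admissible portfolios and the infimum is over the equivalent local martingale measures $Q$.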
In this paper we consider markets modeled by It\^o-L\'evy processes. In the first part we use the maximum principle in stochastic control theory to extend the above relation to a \emph{dynamic} relation, valid for all $t \in [0,T]$. We prove in particular that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process. In the terminal time case $t = T$ we recover the classical duality connection above. We moreover obtain an explicit relation between the optimal portfolio and the optimal measure, and we show that the existence of an optimal scenario is equivalent to the replicability of a related claim.
In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to pass from the solution of one of the problems to that of the other. We illustrate the results with explicit examples.
Infinite Horizon and Ergodic Optimal Quadratic Control for an Affine Equation with Stochastic Coefficients
We study quadratic optimal stochastic control problems with control-dependent noise, for a state equation perturbed by an affine term and with stochastic coefficients. Both the infinite horizon case and the ergodic case are treated. To this purpose we introduce a Backward Stochastic Riccati Equation and a dual backward stochastic equation, both considered on the whole time line. Under some stabilizability conditions we prove existence of a solution for the two previous equations, defined as the limit of suitable finite horizon approximating problems. This allows us to perform the synthesis of the optimal control.
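As a hedged finite-dimensional sketch (generic notation; the paper's setting has stochastic coefficients and an affine perturbation, so its formulas are more involved), for dynamics $dX = (AX + Bu + f)\,dt + (CX + Du)\,dB$ with quadratic cost, the synthesized optimal control is a linear feedback of Riccati type:

```latex
% Linear feedback synthesis via the Riccati solution P
% (finite-dimensional, deterministic-coefficient analogue):
u^{*}(t) = -\bigl(R + D^{\top} P D\bigr)^{-1}
            \bigl(B^{\top} P + D^{\top} P C\bigr)\, X(t)
          \;+\; \text{(affine correction term)},
```

where $P$ solves the associated Riccati equation and the affine correction, coming from the perturbation $f$, is determined by the dual backward equation.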
Injection-suction control for Navier-Stokes equations with slippage
We consider a velocity tracking problem for the Navier-Stokes equations in a 2D bounded domain. The control acts on the boundary through an injection-suction device, and the flow is allowed to slip along the surface wall. We study the well-posedness of the state equations, the linearized state equations and the adjoint equations. In addition, we show the existence of an optimal solution and establish the first-order optimality condition.
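For orientation (a standard abstract form, not the paper's specific statement), a first-order optimality condition for a cost functional $J$ over an admissible control set $U_{\mathrm{ad}}$ is typically the variational inequality:

```latex
% Generic first-order optimality condition at the optimum u^*:
J'(u^{*})(v - u^{*}) \;\ge\; 0
\qquad \text{for all admissible controls } v \in U_{\mathrm{ad}},
```

where $J'(u^{*})$ is evaluated via the adjoint equations whose well-posedness the paper establishes.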