
    Stochastic optimal controls with delay

    Get PDF
    This thesis investigates stochastic optimal control problems with discrete delay and those with both discrete and exponential moving average delays, using the stochastic maximum principle, together with the methods of conjugate duality and dynamic programming. To obtain the stochastic maximum principle, we first extend the conjugate duality method presented in [2, 44] to study a stochastic convex (primal) problem with discrete delay. An expression for the corresponding dual problem is derived, together with necessary and sufficient conditions for optimality of both problems. The novelty of our work is that, after reformulating a stochastic optimal control problem with delay as a particular convex problem, the conditions for optimality of convex problems lead to the stochastic maximum principle for the control problem. In particular, if the control problem involves both types of delay and is jump-free, the stochastic maximum principle obtained in this thesis improves those obtained in [29, 30]. Adapting the technique used in [19, Chapter 3] to the stochastic context, we consider a class of stochastic optimal control problems with delay where the value functions are separable, i.e., they can be expressed in terms of so-called auxiliary functions. The technique enables us to obtain second-order partial differential equations, satisfied by the auxiliary functions, which we shall call auxiliary HJB equations. The corresponding verification theorem is also obtained. If both types of delay are involved, our auxiliary HJB equations generalize the HJB equations obtained in [22, 23] and our verification theorem improves the stochastic verification theorem there.
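To fix notation, a controlled state equation combining a discrete delay and an exponential moving average of the trajectory's past, as commonly studied in this literature, can be sketched as follows; the coefficients $b$, $\sigma$ and the averaging rate $\lambda$ are generic placeholders, not taken from the thesis:

```latex
% Controlled diffusion with discrete delay \delta > 0 and an
% exponential moving average Y(t) of the past (illustrative form):
dX(t) = b\bigl(t, X(t), X(t-\delta), Y(t), u(t)\bigr)\,dt
      + \sigma\bigl(t, X(t), X(t-\delta), Y(t), u(t)\bigr)\,dB(t),
\qquad t \in [0, T],

Y(t) = \int_{-\delta}^{0} e^{\lambda s}\, X(t+s)\,ds, \qquad \lambda > 0.
```

Here $X(t-\delta)$ carries the discrete delay and $Y(t)$ the exponential moving average delay, so "both types of delay" enter the dynamics through separate arguments of $b$ and $\sigma$.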

    Cauchy-Lipschitz theory for fractional multi-order dynamics -- State-transition matrices, Duhamel formulas and duality theorems

    Full text link
    The aim of the present paper is to contribute to the development of the study of Cauchy problems involving Riemann-Liouville and Caputo fractional derivatives. Firstly, existence-uniqueness results for solutions of non-linear Cauchy problems with vector fractional multi-order are addressed. A qualitative result about the behavior of local but non-global solutions is also provided. Finally, the major aim of this paper is to introduce notions of fractional state-transition matrices and to derive fractional versions of the classical Duhamel formula. We also prove duality theorems relating left state-transition matrices to right state-transition matrices.
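For orientation, the Caputo derivative of order $\alpha \in (0,1)$ and the classical integer-order Duhamel formula that the paper generalizes read as follows (standard textbook definitions, not reproduced from the paper):

```latex
% Caputo fractional derivative of order \alpha \in (0,1):
{}^{C}D^{\alpha} x(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\, \dot{x}(s)\,ds.

% Classical Duhamel formula for \dot{x}(t) = A(t)x(t) + f(t),
% where \Phi(t,s) is the state-transition matrix of A:
x(t) = \Phi(t, 0)\, x(0) + \int_{0}^{t} \Phi(t, s)\, f(s)\,ds.
```

The fractional versions replace $\dot{x}$ by a Riemann-Liouville or Caputo derivative and $\Phi$ by a fractional state-transition matrix, which is what motivates the left/right duality studied in the paper.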

    Dynamic robust duality in utility maximization

    Full text link
    A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) the optimal terminal wealth $X^*(T) := X_{\varphi^*}(T)$ of the problem to maximize the expected $U$-utility of the terminal wealth $X_{\varphi}(T)$ generated by admissible portfolios $\varphi(t)$, $0 \leq t \leq T$, in a market with the risky asset price process modeled as a semimartingale; (ii) the optimal scenario $\frac{dQ^*}{dP}$ of the dual problem to minimize the expected $V$-value of $\frac{dQ}{dP}$ over a family of equivalent local martingale measures $Q$, where $V$ is the convex conjugate function of the concave function $U$. In this paper we consider markets modeled by Itô-Lévy processes. In the first part we use the maximum principle in stochastic control theory to extend the above relation to a dynamic relation, valid for all $t \in [0,T]$. We prove in particular that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, $0 \leq t \leq T$. In the terminal time case $t = T$ we recover the classical duality connection above. Moreover, we obtain an explicit relation between the optimal portfolio $\varphi^*$ and the optimal measure $Q^*$. We also show that the existence of an optimal scenario is equivalent to the replicability of a related $T$-claim. In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to pass from the solution of one of the problems to that of the other. We illustrate the results with explicit examples.
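As background, the convex conjugate $V$ of the utility $U$ referred to in (ii), and the classical terminal-time first-order relation linking the primal and dual optimizers, can be stated as follows (standard duality facts, written schematically rather than quoted from the paper):

```latex
% Convex conjugate of the concave utility U:
V(y) = \sup_{x > 0} \bigl( U(x) - xy \bigr), \qquad y > 0.

% Classical terminal-time duality relation (schematic):
U'\bigl(X^*(T)\bigr) = y\, \frac{dQ^*}{dP}
\quad \text{for some constant } y > 0,
\qquad \text{equivalently } X^*(T) = (U')^{-1}\!\Bigl( y\, \frac{dQ^*}{dP} \Bigr).
```

The paper's contribution is to turn this static, terminal-time relation into a dynamic one holding for every $t \in [0,T]$ via the adjoint processes of the stochastic maximum principle.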

    Infinite Horizon and Ergodic Optimal Quadratic Control for an Affine Equation with Stochastic Coefficients

    Full text link
    We study quadratic optimal stochastic control problems for a state equation with control-dependent noise, perturbed by an affine term and with stochastic coefficients. Both the infinite horizon case and the ergodic case are treated. To this purpose we introduce a Backward Stochastic Riccati Equation and a dual backward stochastic equation, both considered on the whole time line. Under suitable stabilizability conditions, we prove existence of a solution to the two previous equations, defined as the limit of suitable finite horizon approximating problems. This allows us to perform the synthesis of the optimal control.
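For a finite-horizon linear-quadratic problem with dynamics $dX = (AX + Bu + f)\,dt + (CX + Du)\,dW$ and cost $\mathbb{E}\int_0^T (\langle SX, X\rangle + |u|^2)\,dt$, a Backward Stochastic Riccati Equation typically takes the following schematic form (generic notation, assumed here for illustration; the infinite-horizon and ergodic versions in the paper arise as limits of such equations):

```latex
% Backward Stochastic Riccati Equation (finite horizon, schematic);
% the unknown is the pair (P, Q), adapted to the driving Brownian filtration:
-dP(t) = \Bigl( A^{\top} P + P A + C^{\top} P C + C^{\top} Q + Q C + S
  - (P B + C^{\top} P D + Q D)\,(I + D^{\top} P D)^{-1}\,
    (B^{\top} P + D^{\top} P C + D^{\top} Q) \Bigr)\,dt
  - Q(t)\,dW(t), \qquad P(T) = 0.
```

The second unknown $Q$ is what distinguishes the stochastic-coefficient case from the deterministic Riccati ODE, and the affine perturbation $f$ is handled by the dual backward stochastic equation mentioned in the abstract.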

    Injection-suction control for Navier-Stokes equations with slippage

    Get PDF
    We consider a velocity tracking problem for the Navier-Stokes equations in a 2D bounded domain. The control acts on the boundary through an injection-suction device, and the flow is allowed to slip along the surface wall. We study the well-posedness of the state equations, the linearized state equations and the adjoint equations. In addition, we show the existence of an optimal solution and establish the first-order optimality condition.
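For context, boundary conditions of the kind described here, injection-suction through the wall combined with Navier slip, are typically written as follows (a standard formulation with generic symbols $g$, $\beta$, not quoted from the paper):

```latex
% On the wall \Gamma with outward unit normal n and tangent \tau:
y \cdot n = g
\quad \text{(injection-suction: the control prescribes the normal flux),}

\beta\, (y \cdot \tau)
  + \bigl( (\nabla y + \nabla y^{\top})\, n \bigr) \cdot \tau = 0
\quad \text{(Navier slip with friction coefficient } \beta \ge 0\text{).}
```

The tangential condition replaces the usual no-slip condition $y = 0$, which is what "the flow is allowed to slip against the surface wall" refers to.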