
    Conjugate duality in stochastic controls with delay

    This paper uses the method of conjugate duality to investigate a class of stochastic optimal control problems whose state systems are described by stochastic differential equations with delay. We first analyse a stochastic convex problem with delay and derive the expression for the corresponding dual problem, which yields the relationship between the optimal solutions of the two problems. Then, by linking stochastic optimal control problems with delay to a particular type of stochastic convex problem, the result for the latter leads to sufficient maximum principles for the former.

    Discretisation of continuous-time stochastic optimal control problems with delay

    In the present work, we study discretisation schemes for continuous-time stochastic optimal control problems with time delay. The dynamics of the control problems to be approximated are described by controlled stochastic delay (or functional) differential equations. The value functions associated with such control problems are defined on an infinite-dimensional function space. The discretisation schemes studied are obtained by replacing the original control problem with a sequence of approximating discrete-time Markovian control problems with finite or finite-dimensional state space. Such a scheme is convergent if the value functions associated with the approximating control problems converge to the value function of the original problem. Following a general method for the discretisation of continuous-time control problems, sufficient conditions for the convergence of discretisation schemes for a class of stochastic optimal control problems with delay are derived. The general method itself is cast in a formal framework. A semi-discretisation scheme for a second class of stochastic optimal control problems with delay is proposed. Under standard assumptions, convergence of the scheme as well as uniform upper bounds on the discretisation error are obtained. The question of how to numerically solve the resulting discrete-time finite-dimensional control problems is also addressed.
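    The basic idea behind such discretisations — replacing the infinite-dimensional segment state by the finite vector of grid values on the delay window — can be sketched as follows. This is an illustrative plain Euler-Maruyama scheme with a hypothetical drift, diffusion coefficient, and feedback control, not the scheme analysed in the paper.

    ```python
    import numpy as np

    # Illustrative sketch: time discretisation of a controlled stochastic
    # delay differential equation
    #   dX(t) = b(X(t), X(t - tau), u(t)) dt + sigma dW(t),
    # where the segment state X_t = {X(t+s) : s in [-tau, 0]} is replaced
    # by its values on a uniform grid of step h (m = tau/h points).
    # The drift b, diffusion sigma and feedback u are hypothetical examples.

    def simulate_sdde(x0, u, tau=1.0, T=5.0, h=0.01, sigma=0.3, seed=0):
        """Euler-Maruyama path of the delayed SDE, constant initial segment x0.

        u : feedback control u(x_now, x_delayed) -> float
        Returns the state values on the grid 0, h, 2h, ..., T.
        """
        rng = np.random.default_rng(seed)
        m = int(round(tau / h))          # grid points in one delay window
        n = int(round(T / h))
        x = np.empty(n + m + 1)
        x[: m + 1] = x0                  # initial segment on [-tau, 0]
        for k in range(m, n + m):
            x_now, x_del = x[k], x[k - m]
            drift = -x_now + 0.5 * x_del + u(x_now, x_del)  # example drift b
            x[k + 1] = x_now + drift * h + sigma * np.sqrt(h) * rng.standard_normal()
        return x[m:]                     # path on [0, T]

    path = simulate_sdde(x0=1.0, u=lambda x, xd: -0.5 * x)
    print(path.shape)  # (501,) for T=5, h=0.01
    ```

    Note the design choice: by carrying the last m grid values, each Euler step is Markovian in the (m+1)-dimensional vector (x[k-m], ..., x[k]), which is precisely the finite-dimensional approximating state the abstract refers to.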

    Discretisation of Stochastic Control Problems for Continuous Time Dynamics with Delay

    As a main step in the numerical solution of control problems in continuous time, the controlled process is approximated by sequences of controlled Markov chains, thus discretising time and space. A new feature in this context is to allow for delay in the dynamics. The existence of an optimal strategy with respect to the cost functional can be guaranteed in the class of relaxed controls. Weak convergence of the approximating extended Markov chains to the original process, together with convergence of the associated optimal strategies, is established.

    Stochastic Optimal Control with Delay in the Control: solution through partial smoothing

    Stochastic optimal control problems governed by delay equations with delay in the control are usually more difficult to study than those where the delay appears only in the state. This is particularly true for the associated Hamilton-Jacobi-Bellman (HJB) equation. Indeed, even in the simplified setting (introduced first by Vinter and Kwong for the deterministic case), the HJB equation is an infinite-dimensional second-order semilinear partial differential equation (PDE) that does not satisfy the so-called "structure condition", which essentially means that "the noise enters the system with the control". The absence of this condition, together with the lack of smoothing properties that is a common feature of problems with delay, prevents the use of the known techniques (based on backward stochastic differential equations (BSDEs) or on the smoothing properties of the linear part) to prove the existence of regular solutions of this HJB equation, and so no results in this direction have been proved so far. In this paper we provide an existence result for regular solutions of such HJB equations and use it to solve the corresponding control problem completely, finding optimal feedback controls also in the more difficult case of pointwise delay. The main tool is a partial smoothing property that we prove for the transition semigroup associated to the uncontrolled problem. Such results hold for a specific class of equations and data which arises naturally in many applied problems.

    On the maximum principle for optimal control problems of stochastic Volterra integral equations with delay

    In this paper, we prove both necessary and sufficient maximum principles for infinite-horizon discounted control problems of stochastic Volterra integral equations with finite delay and a convex control domain. The corresponding adjoint equation is a novel class of infinite-horizon anticipated backward stochastic Volterra integral equations. Our results can be applied to discounted control problems of stochastic delay differential equations and fractional stochastic delay differential equations. As an example, we consider a stochastic linear-quadratic regulator problem for a delayed fractional system. Based on the maximum principle, we prove the existence and uniqueness of the optimal control for this concrete example and obtain a new type of explicit Gaussian state-feedback representation formula for the optimal control.
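    For orientation, a discounted linear-quadratic control problem with finite delay of the kind this abstract alludes to can be written, in a generic scalar form (an assumed illustration, not the paper's exact fractional system), as

    ```latex
    % Generic discounted LQ problem with finite delay \delta > 0
    % (illustrative form only; coefficients a_1, a_2, b, \sigma, Q, R assumed):
    \begin{aligned}
      dX(t) &= \bigl(a_1 X(t) + a_2 X(t-\delta) + b\,u(t)\bigr)\,dt
               + \sigma\,dW(t), \qquad X(s) = \varphi(s),\ s \in [-\delta, 0],\\
      J(u)  &= \mathbb{E}\int_0^{\infty} e^{-\rho t}
               \bigl(Q\,X(t)^2 + R\,u(t)^2\bigr)\,dt,
               \qquad \rho > 0,\ Q \ge 0,\ R > 0,
    \end{aligned}
    ```

    where the maximum principle characterises the minimiser of J through an adjoint backward equation; in the Volterra setting of the paper, the state equation is replaced by an integral equation and the adjoint by an anticipated backward stochastic Volterra integral equation.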