    First and second order necessary conditions for stochastic optimal control problems

    In this work we consider a stochastic optimal control problem with either convex control constraints or finitely many equality and inequality constraints on the final state. Using the variational approach, we obtain first and second order expansions of the state and cost function around a local minimum. This allows us to prove general first order necessary conditions and, under a geometrical assumption on the constraint set, to establish second order necessary conditions as well. We conclude by giving second order optimality conditions for problems with constraints on expectations of the final state.
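The variational approach mentioned in this abstract can be illustrated numerically. The sketch below uses assumed toy dynamics that do not come from the paper: a scalar controlled SDE dX_t = u_t dt + sigma dW_t with cost J(u) = E[X_T^2]. It checks by Monte Carlo that a control perturbation u -> u + eps*v changes the cost to first order in eps, which is the kind of expansion the abstract refers to.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): controlled SDE
#   dX_t = u_t dt + sigma dW_t,  X_0 = x0,
# with cost J(u) = E[X_T^2].  We check that
#   J(u + eps*v) - J(u) = eps * dJ(v) + O(eps^2),
# i.e. the finite difference is (approximately) linear in eps.

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 100, 20000
dt = T / n_steps
sigma, x0 = 0.5, 1.0

u = np.zeros(n_steps)          # baseline control
v = np.ones(n_steps)           # perturbation direction

def cost(control, dW):
    """Monte Carlo estimate of J = E[X_T^2] for a given control."""
    X = np.full(n_paths, x0)
    for k in range(n_steps):
        X = X + control[k] * dt + sigma * dW[:, k]
    return np.mean(X ** 2)

# Common random numbers so the comparison isolates the control change.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

J0 = cost(u, dW)
d1 = cost(u + 0.10 * v, dW) - J0   # finite difference at eps = 0.10
d2 = cost(u + 0.05 * v, dW) - J0   # finite difference at eps = 0.05
print(d1 / d2)  # close to 2 when the first order term dominates
```

Halving eps roughly halves the cost difference, consistent with a nonzero first order expansion of the cost around the baseline control.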

    First and Second Order Necessary Optimality Conditions for Controlled Stochastic Evolution Equations with Control and State Constraints

    The purpose of this paper is to establish first and second order necessary optimality conditions for optimal control problems of stochastic evolution equations with control and state constraints. The control acts both in the drift and diffusion terms, and the control region is a nonempty closed subset of a separable Hilbert space. We employ some classical set-valued analysis tools and theories of the transposition solution of vector-valued backward stochastic evolution equations and the relaxed-transposition solution of operator-valued backward stochastic evolution equations to derive these optimality conditions. The correction part of the second order adjoint equation, which does not appear in the first order optimality condition, plays a fundamental role in the second order optimality condition.

    Pointwise Second Order Necessary Conditions for Stochastic Optimal Control with Jump Diffusion

    The stochastic maximum principle is one of the major approaches to stochastic control problems. Much work has been done on this kind of problem; see, for example, Bensoussan [3], Cadenillas and Karatzas [10], Kushner [31], and Peng [41]. Recently, another kind of stochastic maximum principle, pointwise second order necessary conditions for stochastic optimal controls, was established and studied for its applications in financial markets by Zhang and Zhang [58], under the assumption that the control region is convex. In Zhang and Zhang [59], the authors extended these pointwise second order necessary conditions to the general case in which the control region is allowed to be non-convex. Second order necessary conditions for optimal control with recursive utilities were proved by Dong and Meng [13]. In this thesis, we generalize the work of Zhang and Zhang [58] to jump diffusions: we establish second order necessary conditions where the controlled system is described by a stochastic differential system driven by a Poisson random measure and an independent Brownian motion. The control domain is assumed to be convex. A pointwise second order maximum principle for controlled jump diffusions, in terms of the martingale with respect to the time variable, is proved. The proof of the main result is based on a variational approach using the stochastic calculus of jump diffusions and some estimates on the state processes. Our stochastic control problem also provides interesting models for many applications, such as economics and mathematical finance.
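A controlled state process driven by a Brownian motion and an independent Poisson random measure, as in this abstract, can be simulated with a simple Euler scheme. The model below is an assumed illustration, not the system from the thesis: geometric-style dynamics with compound Poisson jumps carrying Gaussian marks and a constant control.

```python
import numpy as np

# Assumed toy jump-diffusion (for illustration only):
#   dX_t = u X_t dt + sigma X_t dW_t + X_{t-} dJ_t,
# where W is a Brownian motion and J is an independent compound
# Poisson process (rate lam, Gaussian jump marks), a simple stand-in
# for an integral against a Poisson random measure.

rng = np.random.default_rng(1)
T, n_steps = 1.0, 252
dt = T / n_steps
sigma, lam, jump_std = 0.2, 3.0, 0.1   # diffusion vol, jump rate, mark std
u = 0.05                                # a constant control value

X = np.empty(n_steps + 1)
X[0] = 1.0
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))              # Brownian increment
    n_jumps = rng.poisson(lam * dt)                # jumps in this step
    dJ = rng.normal(0.0, jump_std, n_jumps).sum()  # summed jump marks
    X[k + 1] = X[k] + u * X[k] * dt + sigma * X[k] * dW + X[k] * dJ

print(X[-1])  # terminal state of one simulated path
```

Running many such paths and averaging a terminal cost over them is the basic ingredient behind the variational estimates on the state process that the abstract mentions.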

    First and second order necessary optimality conditions for controlled stochastic evolution equations with control and state constraints

    Get PDF
    International audienceThe purpose of this paper is to establish first and second order necessary optimality conditions for optimal control problems of stochastic evolution equations with control and state constraints. The control acts both in the drift and diffusion terms and the control region is a nonempty closed subset of a separable Hilbert space. We employ some classical set-valued analysis tools and theories of the transposition solution of vector-valued backward stochastic evolution equations and the relaxed-transposition solution of operator-valued backward stochas-tic evolution equations to derive these optimality conditions. The correction part of the second order adjoint equation, which does not appear in the first order optimality condition, plays a fundamental role in the second order optimality condition

    Stochastic optimal controls with delay

    This thesis investigates stochastic optimal control problems with discrete delay, and with both discrete and exponential moving average delays, using the stochastic maximum principle together with the methods of conjugate duality and dynamic programming. To obtain the stochastic maximum principle, we first extend the conjugate duality method presented in [2, 44] to study a stochastic convex (primal) problem with discrete delay. An expression for the corresponding dual problem, as well as necessary and sufficient conditions for optimality of both problems, are derived. The novelty of our work is that, after reformulating a stochastic optimal control problem with delay as a particular convex problem, the optimality conditions for convex problems lead to the stochastic maximum principle for the control problem. In particular, if the control problem involves both types of delay and is jump-free, the stochastic maximum principle obtained in this thesis improves those obtained in [29, 30]. Adapting the technique used in [19, Chapter 3] to the stochastic context, we consider a class of stochastic optimal control problems with delay whose value functions are separable, i.e., can be expressed in terms of so-called auxiliary functions. The technique enables us to obtain second order partial differential equations satisfied by the auxiliary functions, which we call auxiliary HJB equations. The corresponding verification theorem is also obtained. If both types of delay are involved, our auxiliary HJB equations generalize the HJB equations obtained in [22, 23], and our verification theorem improves the stochastic verification theorem there.
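The discrete-delay dynamics discussed in this abstract can be illustrated with a short simulation. The coefficients, dynamics, and initial segment below are assumptions chosen for illustration, not the model from the thesis: an Euler scheme for dX_t = (a X_t + b X_{t-delta} + u_t) dt + sigma dW_t, where the delayed value is read back from the stored path.

```python
import numpy as np

# Assumed toy delayed SDE (for illustration only):
#   dX_t = (a X_t + b X_{t - delta} + u_t) dt + sigma dW_t,
# with initial segment X_t = 1 for t <= 0.  The discrete delay is
# handled by looking up the state `lag` grid steps in the past.

rng = np.random.default_rng(2)
T, delta, n_steps = 2.0, 0.5, 400
dt = T / n_steps
lag = int(round(delta / dt))       # delay measured in grid steps
a, b, sigma = -1.0, 0.5, 0.3
u = np.zeros(n_steps)              # baseline control

X = np.empty(n_steps + 1)
X[0] = 1.0
for k in range(n_steps):
    # Delayed state: stored path if available, else the initial segment.
    X_delayed = X[k - lag] if k >= lag else 1.0
    drift = a * X[k] + b * X_delayed + u[k]
    X[k + 1] = X[k] + drift * dt + sigma * rng.normal(0.0, np.sqrt(dt))

print(X[-1])  # terminal state of one simulated path
```

The lookup into the stored path is exactly what makes delayed problems infinite-dimensional: the state at time t is the whole segment over [t - delta, t], which is why the thesis works with auxiliary functions rather than a standard HJB equation in the current state alone.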
