
    Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H > 1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for optimality). The system consists of a backward stochastic differential equation driven by both the fractional Brownian motion and the corresponding underlying standard Brownian motion; in addition, the maximum principle involves Malliavin derivatives. Our approach uses conditioning and Malliavin calculus. To arrive at the maximum principle we develop some new results in the stochastic analysis of controlled systems driven by fractional Brownian motions via fractional calculus. The same approach of conditioning and Malliavin calculus is also applied to a classical system driven by standard Brownian motion in which the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way. Comment: 44 pages
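    The driving noise above, fractional Brownian motion with Hurst parameter H > 1/2, can be sampled exactly on a time grid from its covariance function. A minimal sketch (an illustration only, not part of the paper's method), using the Cholesky factorization of Cov(B_s, B_t) = (1/2)(s^{2H} + t^{2H} - |t - s|^{2H}):

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Sample one fractional Brownian motion path on [0, T] at n grid
    points via Cholesky factorization of the fBm covariance matrix."""
    t = np.linspace(T / n, T, n)            # grid excluding t = 0 (B_0 = 0)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)             # cov is positive definite for 0 < H < 1
    rng = np.random.default_rng(seed)
    path = L @ rng.standard_normal(n)       # correlated Gaussian sample
    return np.concatenate(([0.0], path)), np.concatenate(([0.0], t))

path, t = fbm_cholesky(200, H=0.7)
```

    For H = 1/2 this reduces to standard Brownian motion; H > 1/2 gives positively correlated increments, the regime assumed in the abstract.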

    Maximum principle for optimal control of stochastic evolution equations with recursive utilities

    We consider the optimal control problem of stochastic evolution equations in a Hilbert space under a recursive utility, which is described as the solution of a backward stochastic differential equation (BSDE). A very general maximum principle is given for the optimal control, allowing the control domain not to be convex and the generator of the BSDE to vary with the second unknown variable z. The associated second-order adjoint process is characterized as the unique solution of a conditionally expected operator-valued backward stochastic integral equation.
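    A recursive utility of the kind described, the Y-component of a BSDE, can be approximated by a backward Euler recursion. A minimal sketch under strong simplifying assumptions (a linear driver f(t, y, z) = -beta*y that does not depend on z, a one-dimensional Brownian state, and terminal payoff g(x) = x^2), for which the BSDE has the closed form Y_0 = exp(-beta*T) * E[g(W_T)]:

```python
import numpy as np

def bsde_linear_driver(beta=0.5, T=1.0, n_steps=50, n_paths=100_000, seed=1):
    """Backward Euler for the BSDE Y_t = g(W_T) + int f(s, Y_s) ds - int Z_s dW_s
    with f(t, y) = -beta * y; the Z-integral is a martingale and drops out
    of the expectation, so each backward step only applies the driver."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    Y = W[:, -1] ** 2                        # terminal condition g(x) = x^2
    for _ in range(n_steps):                 # backward recursion in time
        Y = Y - beta * Y * dt                # apply the driver f(t, y) = -beta*y
    return Y.mean()                          # Monte Carlo estimate of Y_0

y0 = bsde_linear_driver()
# Closed form for comparison: exp(-0.5) * E[W_1^2] = exp(-0.5) ~ 0.6065
```

    A general driver f(t, y, z) would require estimating Z (a conditional expectation, e.g. by regression), which is precisely where schemes for BSDEs with z-dependent generators become nontrivial.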

    Randomized dynamic programming principle and Feynman-Kac representation for optimal control of McKean-Vlasov dynamics

    We analyze a stochastic optimal control problem where the state process follows McKean-Vlasov dynamics and the diffusion coefficient can be degenerate. We prove that its value function V admits a nonlinear Feynman-Kac representation in terms of a class of forward-backward stochastic differential equations with an autonomous forward process. We exploit this probabilistic representation to rigorously prove the dynamic programming principle (DPP) for V. The Feynman-Kac representation has an important role beyond its intermediate role in obtaining our main result: it would also be useful for developing probabilistic numerical schemes for V. The DPP is important for characterizing the value function as a solution of a nonlinear partial differential equation (the so-called Hamilton-Jacobi-Bellman equation), in this case on the Wasserstein space of measures. We note that the usual way of solving these equations is through the Pontryagin maximum principle, which requires some convexity assumptions. There were earlier attempts using the dynamic programming approach, but these works assumed a priori that the controls were of Markovian feedback type, which allows the problem to be written only in terms of the distribution of the state process (so the control problem becomes deterministic). In this paper, we consider open-loop controls and derive the dynamic programming principle in this most general case. To obtain the Feynman-Kac representation and the randomized dynamic programming principle, we implement the so-called randomization method, which consists in formulating a new McKean-Vlasov control problem, expressed in weak form by taking the supremum over a family of equivalent probability measures. One of the main results of the paper is the proof that this latter control problem has the same value function V as the original control problem. Comment: 41 pages, to appear in Transactions of the American Mathematical Society
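    In the classical (non-McKean-Vlasov) case, a Feynman-Kac representation already yields the probabilistic numerical scheme alluded to above: simulate the forward process and average the terminal payoff. A hedged sketch with an assumed linear SDE dX = b dt + sigma dW and payoff g(x) = x^2, where u(0, x_0) = E[g(X_T)] solves the associated linear backward PDE u_t + b u_x + (1/2) sigma^2 u_xx = 0:

```python
import numpy as np

def feynman_kac(x0=0.0, b=0.1, sigma=0.2, T=1.0,
                n_steps=100, n_paths=100_000, seed=2):
    """Monte Carlo evaluation of u(0, x0) = E[g(X_T)] via
    Euler-Maruyama simulation of the forward SDE."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, x0)
    for _ in range(n_steps):                 # forward Euler-Maruyama step
        X = X + b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(X**2)                     # terminal payoff g(x) = x^2

u0 = feynman_kac()
# Here X_T ~ N(x0 + b*T, sigma^2 * T), so E[X_T^2] = 0.1^2 + 0.04 = 0.05
```

    The McKean-Vlasov setting of the paper is harder precisely because the coefficients also depend on the law of X, so each time step requires an approximation of the empirical distribution of the simulated particles.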

    Maximum Principle for Forward-Backward Doubly Stochastic Control Systems and Applications

    The maximum principle for optimal control problems of fully coupled forward-backward doubly stochastic differential equations (FBDSDEs in short) is obtained in the global form, under the assumptions that the diffusion coefficients do not contain the control variable but the control domain need not be convex. We apply our stochastic maximum principle (SMP in short) to investigate optimal control problems for a class of stochastic partial differential equations (SPDEs in short). As an example of the SMP, we also solve a class of forward-backward doubly stochastic linear-quadratic optimal control problems. In the last section, we use the solution of the FBDSDEs to obtain the explicit form of the optimal control for a linear-quadratic stochastic optimal control problem and the open-loop Nash equilibrium point for a nonzero-sum differential game problem.
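    Linear-quadratic problems of the kind mentioned at the end admit explicit feedback controls via a Riccati equation. A hypothetical scalar example from standard LQ theory (not the paper's doubly stochastic setting): minimize E[int_0^T (q X^2 + r u^2) dt + h X_T^2] subject to dX = (a X + b u) dt + s dW; the optimal control is u*(t, x) = -(b/r) P(t) x, where P solves the backward Riccati ODE P'(t) = -2aP - q + (b^2/r) P^2 with P(T) = h:

```python
def riccati_backward(a=1.0, b=1.0, q=1.0, r=1.0, h=1.0, T=1.0, n=10_000):
    """Integrate the scalar Riccati ODE backward from t = T to t = 0
    with an explicit Euler scheme; returns P(0)."""
    dt = T / n
    P = h                                    # terminal condition P(T) = h
    for _ in range(n):                       # step backward in time
        P = P - dt * (-2 * a * P - q + (b**2 / r) * P**2)
    return P

P0 = riccati_backward()
gain0 = -(1.0 / 1.0) * P0                    # feedback gain -(b/r) P(0) at t = 0
```

    The additive noise s dW shifts the optimal cost but not the feedback gain, which is why s does not appear in the Riccati recursion; P(t) increases monotonically backward in time toward the stationary value 1 + sqrt(2) of this ODE.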