
    Optimal investment models with vintage capital: Dynamic Programming approach

    The Dynamic Programming approach is developed here for a family of optimal investment models with vintage capital. The problem falls into the class of infinite horizon optimal control problems of PDEs with age structure, which have been studied in various papers (see e.g. [11, 12], [30, 32]) either in cases where explicit solutions can be found or by means of Maximum Principle techniques. The problem is rephrased in an infinite dimensional setting; it is proven that the value function is the unique regular solution of the associated stationary Hamilton-Jacobi-Bellman equation, and existence and uniqueness of optimal feedback controls is derived. It is then shown that the optimal path is the solution to the closed loop equation. Similar results were proven for the finite horizon case in [26, 27]. The infinite horizon case is more challenging as a mathematical problem, and indeed more interesting from the point of view of optimal investment models with vintage capital, where what mainly matters is the behavior of optimal trajectories and controls in the long run. The study of the infinite horizon is performed through a nontrivial limiting procedure from the corresponding finite horizon problems.
    Keywords: optimal investment, vintage capital, age-structured systems, optimal control, dynamic programming, Hamilton-Jacobi-Bellman equations, linear convex control, boundary control
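    As a schematic illustration only (the paper works in an infinite dimensional setting, and the operators and discount rate here are generic placeholders, not the paper's notation), a stationary Hamilton-Jacobi-Bellman equation for a discounted infinite horizon problem has the form:

    ```latex
    % Generic stationary HJB equation for a discounted infinite horizon problem;
    % A is the state operator, B the control operator, \rho > 0 the discount rate,
    % f the running objective, and U the set of admissible controls.
    \rho\, V(x) \;=\; \sup_{u \in U}\Big\{ \big\langle A x + B u,\; \nabla V(x) \big\rangle + f(x,u) \Big\}
    ```

    In this scheme, identifying the value function as the unique regular solution of the HJB equation is what allows an optimal feedback to be read off from the maximizer on the right-hand side.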

    Necessary stochastic maximum principle for dissipative systems on infinite time horizon

    We develop a necessary stochastic maximum principle for a finite-dimensional stochastic control problem on an infinite horizon under a polynomial growth and joint monotonicity assumption on the coefficients. The second assumption generalizes the usual one in the sense that it is formulated as a joint condition on the drift and the diffusion term. The main difficulties concern the construction of the first and second order adjoint processes by solving backward equations on an unbounded time interval. The first adjoint process is characterized as a solution to a backward SDE, which is well-posed thanks to a duality argument. The second one can be defined via another duality relation written in terms of the Hamiltonian of the system and the linearized state equation. Some known models verifying the joint monotonicity assumption are discussed as well.
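    As a hedged sketch (the paper's precise assumptions and notation differ), the first order adjoint pair (p, q) in a stochastic maximum principle typically solves a backward SDE of the form:

    ```latex
    % Generic first-order adjoint BSDE; H is the Hamiltonian of the control problem,
    % (p_t, q_t) the adjoint pair, X the state, u the control, W a Brownian motion.
    -\,dp_t \;=\; \nabla_x H\big(t, X_t, u_t, p_t, q_t\big)\, dt \;-\; q_t\, dW_t, \qquad t \ge 0
    ```

    On an unbounded time interval there is no terminal condition to anchor this equation, which is why well-posedness must instead be argued through a duality relation, as the abstract indicates.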

    Jump-Diffusion Risk-Sensitive Asset Management I: Diffusion Factor Model

    This paper considers a portfolio optimization problem in which asset prices are represented by SDEs driven by Brownian motion and a Poisson random measure, with drifts that are functions of an auxiliary diffusion factor process. The criterion, following earlier work by Bielecki, Pliska, Nagai and others, is risk-sensitive optimization (equivalent to maximizing the expected growth rate subject to a constraint on variance). By using a change of measure technique introduced by Kuroda and Nagai, we show that the problem reduces to solving a certain stochastic control problem in the factor process, which has no jumps. The main result of the paper is to show that the risk-sensitive jump diffusion problem can be fully characterized in terms of a parabolic Hamilton-Jacobi-Bellman PDE rather than a PIDE, and that this PDE admits a classical C^{1,2} solution. Comment: 33 pages
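    For orientation, a risk-sensitive criterion of the Bielecki-Pliska type is commonly stated as maximizing the following long-run objective (a generic sketch; the paper's exact normalization may differ):

    ```latex
    % Risk-sensitive long-run growth criterion; V_T^u is the portfolio value under
    % control u, and \theta > 0 is the risk-sensitivity parameter.
    J_\theta(u) \;=\; \liminf_{T \to \infty}\; -\frac{1}{\theta T}\,
    \ln \mathbb{E}\big[ e^{-\theta \ln V_T^{u}} \big]
    ```

    A formal Taylor expansion in small theta shows this objective rewards expected log-growth while penalizing its variance, which is the sense of the parenthetical remark in the abstract.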

    A maximum principle for infinite horizon delay equations

    We prove a maximum principle for optimal control of stochastic delay equations on an infinite horizon. We establish first and second order sufficient stochastic maximum principles as well as necessary conditions for that problem. We illustrate our results by an application to the optimal consumption rate from an economic quantity.
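    Schematically (a generic form, not the paper's delay-specific Hamiltonian or conditions), the necessary part of a stochastic maximum principle asserts that the optimal control maximizes the Hamiltonian along the optimal pair:

    ```latex
    % Hamiltonian maximization condition at the optimal pair (\hat{X}, \hat{u});
    % (p, q) is the associated adjoint process and U the set of admissible controls.
    H\big(t, \hat{X}_t, \hat{u}_t, p_t, q_t\big) \;=\;
    \max_{u \in U} H\big(t, \hat{X}_t, u, p_t, q_t\big),
    \qquad \text{a.e. } t,\ \mathbb{P}\text{-a.s.}
    ```

    For delay equations the Hamiltonian additionally involves the delayed state, and the sufficient versions rest on concavity assumptions, as is standard in this literature.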

    Infinite horizon control and minimax observer design for linear DAEs

    In this paper we construct an infinite horizon minimax state observer for a linear stationary differential-algebraic equation (DAE) with uncertain but bounded input and noisy output. We do not assume regularity or existence of a (unique) solution for every initial state of the DAE. Our approach is based on a generalization of Kalman's duality principle. The latter allows us to transform the minimax state estimation problem into a dual control problem for the adjoint DAE: the state estimate in the original problem becomes the control input for the dual problem, and the cost function of the latter is, in fact, the worst-case estimation error. Using geometric control theory, we construct an optimal control in feedback form and represent it as an output of a stable LTI system. The latter gives the minimax state estimator. In addition, we obtain a solution of the infinite-horizon linear quadratic optimal control problem for DAEs. Comment: This is an extended version of the paper which is to appear in the proceedings of the 52nd IEEE Conference on Decision and Control, Florence, Italy, December 10-13, 201
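    As a hedged illustration of the dual side of this construction (symbols here are generic placeholders, not the paper's notation), an infinite-horizon linear-quadratic cost for a DAE-constrained control problem has the form:

    ```latex
    % Generic infinite-horizon LQ cost over a DAE constraint; E may be singular,
    % Q \succeq 0 and R \succ 0 weight the state z and control v respectively.
    J(v) \;=\; \int_0^{\infty} \Big( z(t)^{\top} Q\, z(t) + v(t)^{\top} R\, v(t) \Big)\, dt,
    \qquad E\,\dot{z}(t) = A\, z(t) + B\, v(t)
    ```

    Under a Kalman-type duality, the worst-case estimation error of the primal observer problem coincides with the optimal cost of such a dual control problem, which is why solving the dual LQ problem yields the minimax estimator.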