    Equilibrium points for Optimal Investment with Vintage Capital

    The paper concerns the study of equilibrium points, namely the stationary solutions of the closed loop equation, for an infinite dimensional, infinite horizon boundary control problem for linear partial differential equations. Sufficient conditions for the existence of equilibrium points in the general case are given and then applied to the economic problem of optimal investment with vintage capital. Explicit computation of equilibria for the economic problem in some relevant examples is also provided. The challenging issue here is showing that a piece of theoretical machinery, such as optimal control in infinite dimension, may be used effectively to compute solutions explicitly and easily, and that the same computation may be repeated straightforwardly in examples yielding the same abstract structure. No stability result is provided: the work contained here should be considered a first step toward studying the behavior of optimal controls and trajectories in the long run.
    Keywords: Linear convex control, Boundary control, Hamilton–Jacobi–Bellman equations, Optimal investment problems, Vintage capital
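
    As a rough orientation (not the paper's exact formulation, and with notation assumed here for illustration): once the boundary control problem is rewritten as an abstract evolution equation in a Hilbert space X, the closed loop equation and its equilibrium points take the schematic form

        \[
        y'(t) = A\,y(t) + B\,\phi\big(y(t)\big), \qquad y(0) = y_0,
        \]
        \[
        \bar y \in X \quad \text{is an equilibrium point if} \quad A\,\bar y + B\,\phi(\bar y) = 0,
        \]

    where A generates a strongly continuous semigroup on X, B is the boundary control operator and \phi is the optimal feedback map derived from the value function.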

    Maximum Principle for Linear-Convex Boundary Control Problems applied to Optimal Investment with Vintage Capital

    The paper concerns the study of the Pontryagin Maximum Principle for an infinite dimensional, infinite horizon boundary control problem for linear partial differential equations. The optimal control model has already been studied, in both finite and infinite horizon, with Dynamic Programming methods in a series of papers by the same author or by Faggian and Gozzi. Necessary and sufficient optimality conditions for open loop controls are established. Moreover, the co-state variable is shown to coincide with the spatial gradient of the value function evaluated along the trajectory of the system, drawing a parallel between the Maximum Principle and Dynamic Programming. The abstract model applies, as recalled in one of the first sections, to optimal investment with vintage capital.
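
    The identification stated in the abstract can be summarized, in schematic form and with notation chosen here for illustration, as

        \[
        p(t) = \nabla V\big(y^*(t)\big), \qquad t \ge 0,
        \]

    where y^* is the optimal trajectory, p the co-state arising from the Maximum Principle and V the value function of the Dynamic Programming approach; this is the parallel between the two methods mentioned above.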

    Mild solutions of semilinear elliptic equations in Hilbert spaces

    This paper extends the theory of regular solutions (C^1 in a suitable sense) for a class of semilinear elliptic equations in Hilbert spaces. The notion of regularity is based on the concept of G-derivative, which is introduced and discussed. A result of existence and uniqueness of solutions is stated and proved under the assumption that the transition semigroup associated with the linear part of the equation has a smoothing property, that is, it maps continuous functions into G-differentiable ones. The validity of this smoothing assumption is fully discussed for the case of the Ornstein-Uhlenbeck transition semigroup and for the case of an invertible diffusion coefficient, covering cases not previously addressed in the literature. It is shown that the results apply to Hamilton-Jacobi-Bellman (HJB) equations associated with infinite horizon optimal stochastic control problems in infinite dimension and that, in particular, they cover examples of optimal boundary control of the heat equation that could not be treated with the approaches developed in the literature so far.
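
    Schematically, and with notation assumed here for illustration, the class of equations considered can be written, together with its mild formulation, as

        \[
        \lambda v(x) - \mathcal{L} v(x) = F\big(x, \nabla^G v(x)\big), \qquad x \in H,
        \]
        \[
        v(x) = \int_0^{\infty} e^{-\lambda t}\, P_t\big[F(\cdot, \nabla^G v)\big](x)\, dt,
        \]

    where H is the Hilbert space, (P_t) is the transition semigroup with generator \mathcal{L} (for instance of Ornstein-Uhlenbeck type), and \nabla^G denotes the G-derivative; the smoothing assumption is that P_t maps continuous functions into G-differentiable ones.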

    Maximum Principle for Boundary Control Problems Arising in Optimal Investment with Vintage Capital

    The paper concerns the study of the Pontryagin Maximum Principle for an infinite dimensional, infinite horizon boundary control problem for linear partial differential equations. The optimal control model has already been studied, in both finite and infinite horizon, with Dynamic Programming methods in a series of papers by the same author et al. [26, 27, 28, 29, 30]. Necessary and sufficient optimality conditions for open loop controls are established. Moreover, the co-state variable is shown to coincide with the spatial gradient of the value function evaluated along the trajectory of the system, drawing a parallel between the Maximum Principle and Dynamic Programming. The abstract model applies, as recalled in one of the first sections, to optimal investment with vintage capital.
    Keywords: Linear convex control, Boundary control, Hamilton–Jacobi–Bellman equations, Optimal investment problems, Vintage capital

    Optimal investment models with vintage capital: Dynamic Programming approach

    The Dynamic Programming approach for a family of optimal investment models with vintage capital is developed here. The problem falls into the class of infinite horizon optimal control problems for PDEs with age structure that have been studied in various papers (see e.g. [11, 12], [30, 32]), either in cases where explicit solutions can be found or using Maximum Principle techniques. The problem is rephrased in an infinite dimensional setting; it is proven that the value function is the unique regular solution of the associated stationary Hamilton-Jacobi-Bellman equation, and existence and uniqueness of optimal feedback controls are derived. It is then shown that the optimal path is the solution to the closed loop equation. Similar results were proven in the case of finite horizon in [26, 27]. The case of infinite horizon is more challenging as a mathematical problem, and indeed more interesting from the point of view of optimal investment models with vintage capital, where what mainly matters is the behavior of optimal trajectories and controls in the long run. The study of the infinite horizon case is performed through a nontrivial limiting procedure from the corresponding finite horizon problems.
    Keywords: Optimal investment, vintage capital, age-structured systems, optimal control, dynamic programming, Hamilton-Jacobi-Bellman equations, linear convex control, boundary control
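
    For context, a standard form of the state equation in vintage capital models of this type (with symbols chosen here for illustration, not necessarily those of the paper) is the age-structured PDE

        \[
        \partial_t y(t,s) + \partial_s y(t,s) = -\mu\, y(t,s) + u_1(t,s), \qquad y(t,0) = u_0(t),
        \]

    where y(t,s) is the stock of capital goods of age s at time t, \mu is a depreciation rate, u_1 is investment in existing vintages and the boundary control u_0 is investment in new capital; rephrasing this as an abstract equation y'(t) = A y(t) + B u(t) in a Hilbert space is what brings the problem into the infinite dimensional Dynamic Programming framework described above.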

    Ergodic BSDEs under weak dissipative assumptions

    In this paper we study ergodic backward stochastic differential equations (EBSDEs), dropping the strong dissipativity assumption needed in previous work. In other words, we do not require uniform exponential decay of the difference of two solutions of the underlying forward equation, which is instead assumed to be non-degenerate. We show existence of solutions by means of coupling estimates for a non-degenerate forward stochastic differential equation with bounded measurable nonlinearity. Moreover, we prove uniqueness of "Markovian" solutions by exploiting the recurrence of the same class of forward equations. Applications are then given to the optimal ergodic control of stochastic partial differential equations and to the associated ergodic Hamilton-Jacobi-Bellman equations.
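
    In schematic form (with notation assumed here, following the usual EBSDE formulation), the equations studied read

        \[
        Y_t = Y_T + \int_t^T \big[\psi(X_s, Z_s) - \lambda\big]\, ds - \int_t^T Z_s\, dW_s, \qquad 0 \le t \le T < \infty,
        \]

    where X is the non-degenerate forward process, W a Wiener process, and the unknown is the triple (Y, Z, \lambda); in the control application the constant \lambda is identified with the optimal ergodic cost.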

    Differentiability of backward stochastic differential equations in Hilbert spaces with monotone generators

    The aim of the present paper is to study the regularity properties of the solution of a backward stochastic differential equation with a monotone generator in infinite dimension. We show some applications to the nonlinear Kolmogorov equation and to stochastic optimal control.
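
    In schematic form (with notation assumed here for illustration), the forward-backward system behind these results is

        \[
        dX_s = b(X_s)\,ds + \sigma(X_s)\,dW_s, \qquad X_t = x,
        \]
        \[
        -dY_s = \psi(s, X_s, Y_s, Z_s)\,ds - Z_s\,dW_s, \qquad Y_T = \phi(X_T),
        \]

    with a generator \psi monotone in the variable Y; regularity of the map x \mapsto (Y^{t,x}, Z^{t,x}) is what allows u(t,x) := Y_t^{t,x} to be connected with a solution of the associated nonlinear Kolmogorov equation and to be used in stochastic optimal control.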