    Mild solutions of semilinear elliptic equations in Hilbert spaces

    This paper extends the theory of regular solutions (C^1 in a suitable sense) for a class of semilinear elliptic equations in Hilbert spaces. The notion of regularity is based on the concept of G-derivative, which is introduced and discussed. A result of existence and uniqueness of solutions is stated and proved under the assumption that the transition semigroup associated to the linear part of the equation has a smoothing property, that is, it maps continuous functions into G-differentiable ones. The validity of this smoothing assumption is fully discussed for the case of the Ornstein-Uhlenbeck transition semigroup and for the case of invertible diffusion coefficient, covering cases not previously addressed in the literature. It is shown that the results apply to Hamilton-Jacobi-Bellman (HJB) equations associated to infinite horizon optimal stochastic control problems in infinite dimension and that, in particular, they cover examples of optimal boundary control of the heat equation that were not treatable with the approaches developed in the literature up to now.
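    As a schematic illustration (the specific operators below are assumptions for the reader's orientation, not taken from the abstract), semilinear elliptic equations of this type on a Hilbert space H can be written, for an unknown v : H → R, as

    ```latex
    \lambda v(x) \;-\; \tfrac{1}{2}\,\mathrm{Tr}\!\bigl[\,Q\,D^2 v(x)\bigr] \;-\; \bigl\langle Ax,\; D v(x) \bigr\rangle
    \;=\; F\!\bigl(x,\; \nabla^{G} v(x)\bigr), \qquad x \in H,
    ```

    where the linear part generates an Ornstein-Uhlenbeck transition semigroup and ∇^G v denotes the G-derivative underlying the regularity notion discussed in the paper.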

    Path-dependent Hamilton-Jacobi equations in infinite dimensions

    We propose notions of minimax and viscosity solutions for a class of fully nonlinear path-dependent PDEs with nonlinear, monotone, and coercive operators on Hilbert space. Our main result is well-posedness (existence, uniqueness, and stability) for minimax solutions. A particular novelty is a suitable combination of minimax and viscosity solution techniques in the proof of the comparison principle. One of the main difficulties, the lack of compactness in infinite-dimensional Hilbert spaces, is circumvented by working with suitable compact subsets of our path space. As an application, our theory makes it possible to employ the dynamic programming approach to study optimal control problems for a fairly general class of (delay) evolution equations in the variational framework. Furthermore, differential games associated to such evolution equations can be investigated following the Krasovskii-Subbotin approach similarly as in finite dimensions. Comment: Final version, 53 pages, to appear in Journal of Functional Analysis.
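    A schematic path-dependent Hamilton-Jacobi equation of the kind alluded to here (the notation is an illustrative assumption, using Dupire-style pathwise derivatives) takes the form

    ```latex
    \partial_t u(t, x_{\cdot\wedge t}) \;+\; \bigl\langle A\,x(t),\; \partial_x u(t, x_{\cdot\wedge t}) \bigr\rangle
    \;+\; H\!\bigl(t,\; x_{\cdot\wedge t},\; \partial_x u(t, x_{\cdot\wedge t})\bigr) \;=\; 0,
    \qquad u(T, x) \;=\; g(x),
    ```

    where x_{·∧t} denotes the path stopped at time t, so the unknown u depends on the whole history of the trajectory rather than on its current value alone.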

    Maximum Principle for Linear-Convex Boundary Control Problems applied to Optimal Investment with Vintage Capital

    The paper concerns the study of the Pontryagin Maximum Principle for an infinite dimensional, infinite horizon boundary control problem for linear partial differential equations. The optimal control model has already been studied, both in finite and infinite horizon, with Dynamic Programming methods in a series of papers by the same author and by Faggian and Gozzi. Necessary and sufficient optimality conditions for open loop controls are established. Moreover, the co-state variable is shown to coincide with the spatial gradient of the value function evaluated along the trajectory of the system, creating a parallel between the Maximum Principle and Dynamic Programming. The abstract model applies, as recalled in one of the first sections, to optimal investment with vintage capital.
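    In symbols (the notation is assumed for illustration), the identification stated in the abstract reads

    ```latex
    p(t) \;=\; \nabla_x V\bigl(t,\; y^{*}(t)\bigr),
    ```

    where p is the co-state of the Maximum Principle, V the value function of the Dynamic Programming approach, and y^* the optimal trajectory; this identity is what links the two methods.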

    Hamilton Jacobi Bellman equations in infinite dimensions with quadratic and superquadratic Hamiltonian

    Full text link
    We consider Hamilton-Jacobi-Bellman equations in an infinite dimensional Hilbert space, with quadratic (respectively superquadratic) Hamiltonian and with continuous (respectively Lipschitz continuous) final conditions. This makes it possible to study stochastic optimal control problems for suitable controlled Ornstein-Uhlenbeck processes with unbounded control processes.
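    As a schematic example of an HJB equation with quadratic Hamiltonian on a Hilbert space H (the specific form is an assumption for illustration, not taken from the abstract):

    ```latex
    \partial_t v(t,x) \;+\; \tfrac{1}{2}\,\mathrm{Tr}\!\bigl[\,Q\,D^2 v(t,x)\bigr]
    \;+\; \bigl\langle Ax,\; D v(t,x) \bigr\rangle
    \;-\; \tfrac{1}{2}\,\bigl| B^{*} D v(t,x) \bigr|^{2} \;=\; 0,
    \qquad v(T,x) \;=\; \phi(x),
    ```

    where the quadratic term arises from minimizing the running cost \tfrac12 |u|^2 + ⟨Bu, Dv⟩ over unbounded controls u, which is why unbounded control processes appear naturally in this setting.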

    Maximum Principle for Boundary Control Problems Arising in Optimal Investment with Vintage Capital

    The paper concerns the study of the Pontryagin Maximum Principle for an infinite dimensional and infinite horizon boundary control problem for linear partial differential equations. The optimal control model has already been studied, both in finite and infinite horizon, with Dynamic Programming methods in a series of papers by the same author et al. [26, 27, 28, 29, 30]. Necessary and sufficient optimality conditions for open loop controls are established. Moreover, the co-state variable is shown to coincide with the spatial gradient of the value function evaluated along the trajectory of the system, creating a parallel between the Maximum Principle and Dynamic Programming. The abstract model applies, as recalled in one of the first sections, to optimal investment with vintage capital.
    Keywords: Linear convex control, Boundary control, Hamilton-Jacobi-Bellman equations, Optimal investment problems, Vintage capital