191 research outputs found
Equilibrium points for Optimal Investment with Vintage Capital
The paper concerns the study of equilibrium points, namely the stationary
solutions to the closed loop equation, of an infinite dimensional and infinite
horizon boundary control problem for linear partial differential equations.
Sufficient conditions for existence of equilibrium points in the general case
are given and later applied to the economic problem of optimal investment with
vintage capital. Explicit computation of equilibria for the economic problem in
some relevant examples is also provided. Indeed, the challenging issue here is
showing that theoretical machinery, such as optimal control in infinite
dimensions, can be used effectively to compute solutions explicitly and easily,
and that the same computation may be repeated straightforwardly in examples
yielding the same abstract structure. No stability result is provided: the
work contained here should be regarded as a first step toward studying the
long-run behavior of optimal controls and trajectories.
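As a hedged illustration (the notation below is assumed for exposition, not taken from the paper): in a linear control problem, the closed loop equation couples the state dynamics with the feedback map derived from the value function V, and an equilibrium point is a stationary solution of that equation:

```latex
\dot{x}(t) = A\,x(t) + B\,\phi\big(x(t)\big),
\qquad
\phi(x) := \operatorname*{arg\,min}_{u}\Big[\langle \nabla V(x),\, B u\rangle + h(u)\Big],
\qquad
A\,\bar{x} + B\,\phi(\bar{x}) = 0 .
```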
Maximum Principle for Linear-Convex Boundary Control Problems applied to Optimal Investment with Vintage Capital
The paper concerns the study of the Pontryagin Maximum Principle for an
infinite dimensional and infinite horizon boundary control problem for linear
partial differential equations. The optimal control model has already been
studied, in both finite and infinite horizon, with Dynamic Programming methods
in a series of papers by the same author and by Faggian and Gozzi. Necessary and
sufficient optimality conditions for open loop controls are established.
Moreover the co-state variable is shown to coincide with the spatial gradient
of the value function evaluated along the trajectory of the system, creating a
parallel between Maximum Principle and Dynamic Programming. The abstract model
applies, as recalled in one of the first sections, to optimal investment with
vintage capital.
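The link between the two approaches can be stated compactly (hedged notation, with x*(t) the optimal trajectory and p(t) the co-state): the co-state variable coincides with the spatial gradient of the value function V evaluated along the optimal trajectory,

```latex
p(t) = \nabla_x V\big(x^{*}(t)\big), \qquad t \ge 0,
```

which is the bridge between the Maximum Principle (stated in terms of p) and Dynamic Programming (stated in terms of V).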
Controlled diffusion processes
This article gives an overview of the developments in controlled diffusion
processes, emphasizing key results regarding existence of optimal controls and
their characterization via dynamic programming for a variety of cost criteria
and structural assumptions. Stochastic maximum principle and control under
partial observations (equivalently, control of nonlinear filters) are also
discussed. Several other related topics are briefly sketched.
Comment: Published at http://dx.doi.org/10.1214/154957805100000131 in
Probability Surveys (http://www.i-journals.org/ps/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
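As a hedged reference point for the dynamic programming characterization surveyed here (symbols assumed for exposition, not taken from the article): for a controlled diffusion dX_t = b(X_t, U_t) dt + σ(X_t) dW_t with running cost c and discount rate α > 0, the value function V formally satisfies the HJB equation

```latex
\inf_{a \in \mathcal{A}} \Big[\, b(x,a)\cdot \nabla V(x)
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla^{2} V(x)\big)
  + c(x,a) \Big] = \alpha\, V(x).
```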
Model order reduction approaches for infinite horizon optimal control problems via the HJB equation
We investigate feedback control for infinite horizon optimal control problems
for partial differential equations. The method is based on the coupling between
Hamilton-Jacobi-Bellman (HJB) equations and model reduction techniques. It is
well-known that HJB equations suffer from the so-called curse of dimensionality and,
therefore, a reduction of the dimension of the system is mandatory. In this
report we focus on the infinite horizon optimal control problem with quadratic
cost functionals. We compare several model reduction methods such as Proper
Orthogonal Decomposition, Balanced Truncation, and a new algebraic Riccati
equation-based approach. Finally, we present numerical examples and discuss
several features of the different methods, analyzing the advantages and
disadvantages of the reduction methods.
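As a minimal sketch of the Riccati-based route to infinite-horizon quadratic-cost feedback (the matrices below are an illustrative heat-equation discretization, not the paper's model; `scipy.linalg.solve_continuous_are` is assumed available):

```python
# Hedged sketch: infinite-horizon LQR feedback via the continuous-time
# algebraic Riccati equation (ARE). The system here is a finite-difference
# discretization of a 1D heat equation, chosen only for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

n = 20                                   # state dimension (assumption)
h = 1.0 / (n + 1)
# Discrete Laplacian with Dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
B = np.ones((n, 1))                      # distributed scalar control (assumption)
Q = np.eye(n)                            # quadratic state cost
R = np.array([[1.0]])                    # quadratic control cost

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 for the stabilizing P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x

# The closed-loop matrix A - B K is Hurwitz: all eigenvalues lie in the
# open left half-plane, so trajectories decay under the feedback law.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.all(closed_loop_eigs.real < 0))  # expected: True
```

For a linear system the ARE bypasses the HJB equation entirely, since the value function is the quadratic form V(x) = xᵀPx; model reduction enters when n is too large for a grid-based HJB solver.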
The Master Equation for Large Population Equilibriums
We use a simple N-player stochastic game with idiosyncratic and common noises
to introduce the concept of Master Equation originally proposed by Lions in his
lectures at the Collège de France. Controlling the limit, as N tends to
infinity, of the explicit solution of the N-player game, we highlight the
stochastic nature of the limit distributions of the states of the players due
to the fact that the random environment does not average out in the limit, and
we recast the Mean Field Game (MFG) paradigm in a set of coupled Stochastic
Partial Differential Equations (SPDEs). The first one is a forward stochastic
Kolmogorov equation giving the evolution of the conditional distributions of
the states of the players given the common noise. The second is a form of
stochastic Hamilton Jacobi Bellman (HJB) equation providing the solution of the
optimization problem when the flow of conditional distributions is given. Being
highly coupled, the system reads as an infinite dimensional Forward Backward
Stochastic Differential Equation (FBSDE). Uniqueness of a solution and its
Markov property lead to the representation of the solution of the backward
equation (i.e. the value function of the stochastic HJB equation) as a
deterministic function of the solution of the forward Kolmogorov equation,
function which is usually called the decoupling field of the FBSDE. The
(infinite dimensional) PDE satisfied by this decoupling field is identified
with the \textit{master equation}. We also show that this equation can be
derived for other large population equilibria, like those given by the
optimal control of McKean-Vlasov stochastic differential equations. The paper
is written more in the style of a review than a technical paper, and we spend
more time and energy motivating and explaining the probabilistic interpretation
of the Master Equation than identifying the most general set of assumptions
under which our claims are true.
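For orientation, in the simpler setting without common noise the SPDE system described above reduces to the classical deterministic MFG system of Lasry and Lions (a standard form, stated here under hedged assumptions since the paper's precise setting differs): a backward HJB equation for the value function u coupled with a forward Kolmogorov equation for the flow of distributions m,

```latex
\begin{aligned}
 -\partial_t u - \nu\,\Delta u + H(x, \nabla u) &= F(x, m_t), && u(T,\cdot) = G(\cdot, m_T),\\
 \partial_t m - \nu\,\Delta m - \operatorname{div}\!\big(m\, \partial_p H(x, \nabla u)\big) &= 0, && m_0 \ \text{given.}
\end{aligned}
```

The master equation collapses this forward-backward coupling into a single equation for the decoupling field U(t, x, m).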