Dynamic consistency for Stochastic Optimal Control problems
For a sequence of dynamic optimization problems, we aim at discussing a
notion of consistency over time. This notion can be informally introduced as
follows. At the very first time step t_0, the decision maker formulates an
optimization problem that yields optimal decision rules for all the forthcoming
time steps t_0, t_1, ..., T; at the next time step t_1, he is able to
formulate a new optimization problem starting at time t_1 that yields a new
sequence of optimal decision rules. This process can be continued until the final
time T is reached. A family of optimization problems formulated in this way
is said to be time consistent if the optimal strategies obtained when solving
the original problem remain optimal for all subsequent problems. The notion of
time consistency, well known in the field of Economics, has recently been
introduced in the context of risk measures, notably by Artzner et al. (2007),
and studied in the Stochastic Programming framework by Shapiro (2009) and for
Markov Decision Processes (MDP) by Ruszczynski (2009). We here link this notion
with the concept of "state variable" in MDP, and show that a significant class
of dynamic optimization problems is dynamically consistent, provided that an
adequate state variable is chosen.
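The time-consistency property described above can be illustrated with a minimal sketch: for a finite-horizon MDP solved by backward induction over a state variable, the decision rules computed at the first time step for later stages coincide with those obtained by re-solving the problem from a later start time. The MDP below (states, actions, transition, and reward functions) is a hypothetical toy example, not taken from the paper.

```python
# Toy finite-horizon MDP (hypothetical example): 3 states, 2 actions,
# horizon T. Solving by backward induction over the state variable
# yields time-consistent decision rules: re-solving the problem from a
# later start time reproduces the same policy for the remaining stages.

T = 4
STATES = range(3)
ACTIONS = range(2)

def step(s, a):
    """Deterministic transition: next state."""
    return (s + a) % 3

def reward(s, a):
    """Arbitrary stage reward."""
    return (s * a) % 2 + 0.5 * s

def solve(start):
    """Backward induction from final time T down to `start`.
    Returns policy[t][s] giving the optimal action at time t in state s."""
    V = {s: 0.0 for s in STATES}  # terminal value function
    policy = {}
    for t in reversed(range(start, T)):
        new_V, pol = {}, {}
        for s in STATES:
            q = {a: reward(s, a) + V[step(s, a)] for a in ACTIONS}
            best = max(q, key=q.get)
            pol[s], new_V[s] = best, q[best]
        policy[t], V = pol, new_V
    return policy

full = solve(0)      # problem formulated at the very first time step
resolved = solve(2)  # new problem formulated at a later time step
# Time consistency: the decision rules for t >= 2 agree.
assert all(full[t] == resolved[t] for t in range(2, T))
```

Because the value function at each stage depends only on the state variable and the remaining horizon, both solves perform identical computations for t >= 2, which is exactly the dynamic-programming mechanism behind the consistency result sketched here.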
Assessing farmer behaviour as affected by policy and technological innovations: bio-economic farm models
Farm Management,