For a sequence of dynamic optimization problems, we discuss a notion of
consistency over time. This notion can be informally introduced as
follows. At the very first time step $t_0$, the decision maker formulates an
optimization problem that yields optimal decision rules for all forthcoming
time steps $t_0, t_1, \dots, T$; at the next time step $t_1$, he formulates a
new optimization problem starting at time $t_1$ that yields a new sequence of
optimal decision rules. This process is repeated until the final time $T$ is
reached. A family of optimization problems formulated in this way is said to be
time consistent if the optimal strategies obtained when solving the original
problem remain optimal for all subsequent problems.
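As a minimal sketch of this family of problems, and with notation that is assumed here rather than taken from the text (state $x_t$, control $u_t$, noise $W_{t+1}$, instantaneous cost $L_t$, final cost $K$, dynamics $f_t$), the problem formulated at time $t_i$ could read
\[
  (\mathcal{P}_{t_i}) \qquad
  \min_{u_{t_i}, \dots, u_{T-1}} \;
  \mathbb{E}\Big[ \sum_{t=t_i}^{T-1} L_t(x_t, u_t, W_{t+1}) + K(x_T) \Big]
  \quad \text{s.t.} \quad x_{t+1} = f_t(x_t, u_t, W_{t+1}) .
\]
Time consistency of the family $(\mathcal{P}_{t_0}), (\mathcal{P}_{t_1}), \dots$ then means that if $(u^*_{t_0}, \dots, u^*_{T-1})$ is optimal for $(\mathcal{P}_{t_0})$, its tail $(u^*_{t_i}, \dots, u^*_{T-1})$ remains optimal for each subsequent problem $(\mathcal{P}_{t_i})$.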
The notion of time consistency, well known in the field of Economics, has
recently been introduced in the context of risk measures, notably by Artzner
et al. (2007), and has been studied in the Stochastic Programming framework by
Shapiro (2009) and for Markov Decision Processes (MDP) by Ruszczynski (2009).
Here, we link this notion with the concept of "state variable" in MDPs, and
show that a significant class of dynamic optimization problems is time
consistent, provided that an adequate state variable is chosen.
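For the classical case of the expected-cost problem sketched above, this link can be illustrated by the dynamic programming recursion on the state variable $x_t$ (a standard sketch in the same assumed notation, not the paper's general result):
\[
  V_T(x) = K(x), \qquad
  V_t(x) = \min_{u} \, \mathbb{E}\big[ L_t(x, u, W_{t+1}) + V_{t+1}\big( f_t(x, u, W_{t+1}) \big) \big] .
\]
Since the value functions $V_t$ are computed backward once and for all, the feedback rule $u^*_t(x)$ attaining the minimum does not depend on the time at which the problem is formulated, so the strategies computed at $t_0$ remain optimal at every later $t_i$: time consistency holds by construction whenever such a recursion is available on an adequate state.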