Tauberian theorem for value functions
For two-person dynamic zero-sum games (both discrete and continuous
settings), we investigate the limit of value functions of finite horizon games
with long-run average cost as the time horizon tends to infinity and the limit
of value functions of $\lambda$-discounted games as the discount $\lambda$ tends to zero.
We prove that the Dynamic Programming Principle for value functions directly
leads to the Tauberian Theorem---that the existence of a uniform limit of the
value functions for one of the families implies that the other one also
uniformly converges to the same limit. No assumptions on strategies are
necessary.
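For orientation, in a typical continuous-time formulation (the notation $g$, $V_T$, $V_\lambda$ below is ours, for illustration, and is not taken from the paper), the two families of value functions are
\[
  V_T(x) \;=\; \operatorname{val}\Bigl[\tfrac{1}{T}\textstyle\int_0^T g(x(t),a(t),b(t))\,dt\Bigr],
  \qquad
  V_\lambda(x) \;=\; \operatorname{val}\Bigl[\lambda\textstyle\int_0^\infty e^{-\lambda t} g(x(t),a(t),b(t))\,dt\Bigr],
\]
where $\operatorname{val}$ denotes the game value and $g$ the running cost; the Tauberian Theorem then asserts that if one family converges uniformly in $x$ (as $T\to\infty$, respectively $\lambda\to 0$), then so does the other, and to the same limit.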
To this end, we consider a mapping that takes each payoff to the
corresponding value function and preserves the sub- and superoptimality
principles (the Dynamic Programming Principle). Using these principles, we
obtain certain inequalities on the asymptotics of sub- and supersolutions,
which lead to the Tauberian Theorem. In particular, we consider the case of
differential games without relying on the existence of a saddle point; a very
simple stochastic game model is also considered.
On asymptotic value for dynamic games with saddle point
The paper is concerned with two-person games with a saddle point. We
investigate the limits of value functions for the long-time-average payoff,
the discounted average payoff, and a payoff given by a probability density.
Most of our assumptions restrict the dynamics of the games. In particular, we
assume that the strategies are closed under concatenation. We also require the
value function to satisfy Bellman's optimality principle, at least in a
weakened, asymptotic sense.
We provide two results. The first is a uniform Tauberian result for games:
if the value functions for the long-time-average payoff converge uniformly,
then the uniform limit exists for probability densities from a sufficiently
broad set; moreover, these limits coincide. The second is a uniform Abelian
result: if a uniform limit for self-similar densities exists, then the uniform
limit for the long-time-average payoff also exists, and they coincide.
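As a minimal illustration (our notation; the paper's exact class of densities may differ): a probability density $\rho$ on $[0,\infty)$ induces the payoff
\[
  \int_0^\infty g(x(t),a(t),b(t))\,\rho(t)\,dt,
\]
the long-time-average payoff corresponding to the densities $\rho_T=\tfrac1T\mathbf{1}_{[0,T]}$; rescaling a fixed density, $\rho_\lambda(t)=\lambda\rho(\lambda t)$, gives one natural self-similar family, which for $\rho(t)=e^{-t}$ recovers the discounted payoffs $\lambda e^{-\lambda t}$.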
Necessity of vanishing shadow price in infinite horizon control problems
This paper investigates necessary optimality conditions for uniformly
overtaking optimal control on an infinite horizon in the free-end case. In the
papers of S.M. Aseev, A.V. Kryazhimskii, V.M. Veliov, and K.O. Besov, a
boundary condition for the equations of the Pontryagin Maximum Principle was
proposed, such that each optimal process corresponds to a unique solution
satisfying it. Following an idea of A. Seierstad, in this paper we prove a
more general, geometric variant of that boundary condition and show that it is
necessary for uniformly overtaking optimal control on an infinite horizon in
the free-end case. We obtain a number of assumptions under which this
condition selects a unique Lagrange multiplier. The results apply to general
non-stationary systems, and the optimal objective value is not necessarily
finite. Some examples are discussed.
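A minimal sketch of the setting (our notation; the paper's assumptions are more delicate): for a control system
\[
  \dot x = f(t,x,u),\qquad x(0)=x_0,\qquad \text{maximize } \int_0^\infty f_0(t,x(t),u(t))\,dt
\]
in the uniformly overtaking sense with free right end, the Pontryagin Maximum Principle yields a Lagrange multiplier $(\lambda,\psi)$ with Hamiltonian $H=\lambda f_0+\langle\psi,f\rangle$ and adjoint equation $-\dot\psi=\partial_x H$; the boundary condition discussed above is, in its simplest form, the vanishing of the shadow price at infinity,
\[
  \lim_{t\to\infty}\psi(t)=0.
\]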