8 research outputs found

    A Tauberian theorem for nonexpansive operators and applications to zero-sum stochastic games

    We prove a Tauberian theorem for nonexpansive operators and apply it to the model of zero-sum stochastic games. Under mild assumptions, we prove that the value of the lambda-discounted game v_{lambda} converges uniformly when lambda goes to 0 if and only if the value of the n-stage game v_n converges uniformly when n goes to infinity. This generalizes the Tauberian theorem of Lehrer and Sorin (1992) to the two-player zero-sum case. We also provide the first example of a stochastic game with public signals on the state and perfect observation of actions, with finite state space, signal sets and action sets, in which, for some initial state k_1 known to both players, (v_{lambda}(k_1)) and (v_n(k_1)) converge to distinct limits.
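
    For reference, a short LaTeX note recalling the quantities this abstract compares; the stage payoff g and the strategy notation sigma, tau below are the usual conventions for zero-sum stochastic games, assumed here rather than quoted from the paper.

    % n-stage value and lambda-discounted value from initial state k
    % (standard definitions, assumed rather than taken from the paper).
    \[
      v_n(k) = \sup_{\sigma}\inf_{\tau}
        \mathbb{E}_{k,\sigma,\tau}\Big[\tfrac{1}{n}\sum_{t=1}^{n} g(k_t,i_t,j_t)\Big],
      \qquad
      v_\lambda(k) = \sup_{\sigma}\inf_{\tau}
        \mathbb{E}_{k,\sigma,\tau}\Big[\sum_{t\ge 1}\lambda(1-\lambda)^{t-1} g(k_t,i_t,j_t)\Big].
    \]
    % Tauberian statement of the abstract: under the paper's assumptions,
    % (v_lambda) converges uniformly as lambda -> 0 if and only if (v_n)
    % converges uniformly as n -> infinity, and in that case the two limits
    % coincide (as in the one-player result of Lehrer and Sorin).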

    Markov games with frequent actions and incomplete information

    We study a two-player, zero-sum stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
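
    To make the "frequent actions" limit concrete, here is a minimal LaTeX sketch under one common normalization, which is an assumption for illustration and not necessarily the paper's: stages occur every h units of time on a horizon normalized to 1, the payoff-relevant Markov chain has generator R, and stage payoffs are weighted by h.

    % Sketch of a frequent-actions value under the assumed normalization:
    % p is the non-informed player's prior belief about the chain's state,
    % and the one-stage transition matrix is approximately I + hR.
    \[
      v_h(p) = \sup_{\sigma}\inf_{\tau}\,
        \mathbb{E}_{p,\sigma,\tau}\Big[\sum_{m=1}^{\lfloor 1/h \rfloor} h\, g(k_m, i_m, j_m)\Big].
    \]
    % The abstract's result, read in this normalization, is that v_h has a
    % limit as h -> 0, characterized through an auxiliary optimization
    % problem and as the solution of a Hamilton-Jacobi equation.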

    The Value of Markov Chain Games with Incomplete Information on Both Sides
