50 research outputs found
A distance for probability spaces, and long-term values in Markov Decision Processes and Repeated Games
Given a finite set $K$, we denote by $X = \Delta(K)$ the set of probabilities
on $K$ and by $Z = \Delta_f(X)$ the set of Borel probabilities on $X$ with finite
support. Studying a Markov Decision Process with partial information on $K$
naturally leads to a Markov Decision Process with full information on $X$. We
introduce a new metric $d_*$ on $Z$ such that the transitions become
1-Lipschitz from $(X, \|\cdot\|_1)$ to $(Z, d_*)$. In the first part of the article,
we define and prove several properties of the metric $d_*$. In particular,
$d_*$ satisfies a Kantorovich-Rubinstein type duality formula and can be
characterized by using disintegrations. In the second part, we characterize the
limit values in several classes of "compact non-expansive" Markov Decision
Processes. In particular, we use the metric $d_*$ to characterize the limit
value in Partial Observation MDPs with finitely many states and in Repeated
Games with an informed controller with finite sets of states and actions.
Moreover, in each case we prove the existence of a generalized notion of
uniform value, where we consider not only the Cesàro mean when the number of
stages is large enough, but any evaluation function $\theta \in \Delta(\mathbb{N}^*)$
when the impatience $I(\theta) = \sum_{t \ge 1} |\theta_{t+1} - \theta_t|$ is small
enough.
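For orientation, and not as the paper's exact statement: the duality formula for $d_*$ is of the same type as the classical Kantorovich-Rubinstein duality for the Wasserstein-1 distance, which for Borel probabilities $\mu, \nu$ on a compact metric space reads
\[
W_1(\mu,\nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int d(x,y)\, \mathrm{d}\pi(x,y) \;=\; \sup_{\mathrm{Lip}(f) \le 1} \left( \int f \,\mathrm{d}\mu - \int f \,\mathrm{d}\nu \right),
\]
where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$; the metric $d_*$ admits an analogous sup-over-test-functions characterization, taken over a different class of functions.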
Recursive games: Uniform value, Tauberian theorem and the Mertens conjecture $\operatorname{Maxmin} = \lim v_n = \lim v_\lambda$
We study two-player zero-sum recursive games with a countable state space and
finite action spaces at each state. When the family of $n$-stage values
$(v_n)_{n \ge 1}$ is totally bounded for the uniform norm, we prove the
existence of the uniform value. Together with a result in Rosenberg and Vieille
(2000), we obtain a uniform Tauberian theorem for recursive games: $(v_n)$
converges uniformly if and only if $(v_\lambda)$ converges uniformly.
We apply our main result to finite recursive games with signals (where
players observe only signals on the state and on past actions). When the
maximizer is more informed than the minimizer, we prove the Mertens conjecture
$\operatorname{Maxmin} = \lim_{n \to \infty} v_n = \lim_{\lambda \to 0} v_\lambda$. Finally, we deduce
the existence of the uniform value in finite recursive games with symmetric
information.
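With the standard normalizations (a sketch, not necessarily the paper's exact notation), $v_n$ denotes the value of the $n$-stage game with Cesàro-averaged payoff and $v_\lambda$ the value of the $\lambda$-discounted game:
\[
v_n = \mathrm{val}\!\left( \frac{1}{n} \sum_{t=1}^{n} g_t \right), \qquad
v_\lambda = \mathrm{val}\!\left( \lambda \sum_{t \ge 1} (1-\lambda)^{t-1} g_t \right),
\]
and the Tauberian theorem states that $(v_n)_{n \ge 1}$ converges uniformly if and only if $(v_\lambda)$ converges uniformly as $\lambda \to 0$, in which case the two limits coincide.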
Existence de la valeur uniforme dans les jeux répétés (Existence of the uniform value in repeated games)
In this dissertation, we consider a general model of two-player zero-sum repeated games and, in particular, the problem of the existence of a uniform value. A repeated game has a uniform value if both players can guarantee the same payoff in all games that begin today and are sufficiently long, independently of the length of the game. In the first chapter, we focus on the one-player case, called Partially Observable Markov Decision Processes, and on repeated games where one player is perfectly informed and controls the transitions. It is known that these games have a uniform value. By introducing a new metric on the probabilities over a simplex of $\mathbb{R}^m$, we show the existence of a stronger notion, where the players guarantee the same payoff on every sufficiently long interval of stages, and not only on those starting today. In the next two chapters, we show the existence of the uniform value in two special models of repeated games: commutative repeated games in the dark, where the players do not observe the state but the state is independent of the order in which the actions are played, and repeated games with a more informed controller, where one player controls the transitions and has more information than the other player. In the last chapter, we study the link between the uniform convergence of the values of the $n$-stage games and the asymptotic behavior of the optimal strategies in these $n$-stage games. For each $n$, we consider the payoff guaranteed during the first $ln$ stages, with $0 < l < 1$, by strategies that are optimal in the $n$-stage game, and we study the asymptotics of this payoff as $n$ goes to infinity.
Asymptotic Properties of Optimal Trajectories in Dynamic Programming
We prove in a dynamic programming framework that uniform convergence of the
finite horizon values implies that asymptotically the average accumulated
payoff is constant on optimal trajectories. We analyze and discuss several
possible extensions to two-person games.
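A schematic version of the statement, under simplifying assumptions rather than the paper's exact hypotheses: in a deterministic dynamic programming problem with state space $Z$, transition correspondence $F$, and stage payoff $g : Z \to [0,1]$, the $n$-stage value from $z_0$ is
\[
v_n(z_0) = \sup_{(z_t)} \frac{1}{n} \sum_{t=1}^{n} g(z_t),
\]
the supremum running over feasible trajectories with $z_t \in F(z_{t-1})$. If $(v_n)$ converges uniformly to some $v_\infty$, then along (almost) optimal trajectories of the $n$-stage problem the running average $\frac{1}{m} \sum_{t=1}^{m} g(z_t)$ stays close to $v_\infty(z_0)$ for all large enough $m \le n$: an optimal trajectory cannot alternate long stretches of high and low payoffs.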
On finite-time ruin probabilities with reinsurance cycles influenced by large claims
Market cycles play an important role in reinsurance. Cycle transitions are not independent of the claim arrival process: a large claim or a high number of claims may accelerate cycle transitions. To take this into account, a semi-Markovian risk model is proposed and analyzed. A refined Erlangization method is developed to compute the finite-time ruin probability of a reinsurance company. As this model requires the claim amounts to be phase-type distributed, we explain how to fit mixtures of Erlang distributions to long-tailed distributions. Numerical applications and comparisons with results obtained from simulation methods are given. The impact of the dependency between claim amounts and phase changes is studied.
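To illustrate the Erlangization idea only (a minimal sketch in a plain compound-Poisson model with exponential claims, not the paper's semi-Markovian cycle model; all parameter values are illustrative): the deterministic horizon $T$ is replaced by an independent Erlang$(k, k/T)$ horizon, whose mean is $T$ and which concentrates around $T$ as $k$ grows, so that ruin before the random horizon approximates ruin before time $T$.

    import numpy as np

    rng = np.random.default_rng(0)

    def ruin_before_erlang_horizon(u, c, lam, claim_mean, T, k, n_paths=20_000):
        """Monte Carlo estimate of P(ruin before an Erlang(k, k/T) horizon)
        for the surplus process u + c*t - (compound Poisson claims)."""
        ruined = 0
        for _ in range(n_paths):
            horizon = rng.gamma(shape=k, scale=T / k)  # Erlang(k, k/T): mean T, variance T^2/k
            t, claims = 0.0, 0.0
            while True:
                t += rng.exponential(1.0 / lam)        # next claim arrival time
                if t > horizon:
                    break                              # horizon reached without ruin
                claims += rng.exponential(claim_mean)  # exponential claim size
                if u + c * t - claims < 0.0:           # ruin can only occur at claim instants
                    ruined += 1
                    break
        return ruined / n_paths

    # Illustrative parameters: initial surplus 10, premium rate 1.2,
    # unit claim intensity and mean (20% safety loading), horizon T = 50.
    print(ruin_before_erlang_horizon(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, T=50.0, k=10))

Increasing k sharpens the Erlang horizon around T; in the analytical (non-simulation) version of the method, the Erlang horizon is what makes the finite-time problem tractable for phase-type claim amounts.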
Weighted Average-convexity and Cooperative Games
We generalize the notion of convexity and average-convexity to the notion of
weighted average-convexity. We show several results on the relation between
weighted average-convexity and cooperative games. First, we prove that if a
game is weighted average-convex, then the corresponding weighted Shapley value
is in the core. Second, we exhibit necessary conditions for a communication
TU-game to preserve the weighted average-convexity. Finally, we provide a
complete characterization when the underlying graph is a priority decreasing
tree.
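For context (standard definitions; the weighted form is written here as the natural analogue and may differ in detail from the paper's): a TU-game $v$ on player set $N$ is convex if
\[
v(S \cup \{i\}) - v(S) \;\le\; v(T \cup \{i\}) - v(T) \qquad \text{for all } S \subseteq T \subseteq N \setminus \{i\},
\]
and average-convex if, for all $S \subseteq T \subseteq N$,
\[
\sum_{i \in S} \bigl[ v(S) - v(S \setminus \{i\}) \bigr] \;\le\; \sum_{i \in S} \bigl[ v(T) - v(T \setminus \{i\}) \bigr].
\]
Weighted average-convexity weights each player's marginal contribution in these sums by a positive weight $w_i$, and the first result above states that the corresponding weighted Shapley value then lies in the core.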
Folk theorems in repeated games with switching costs
We study how switching costs affect the subgame perfect equilibria in repeated games. We
show that (i) the Folk Theorem holds whenever the players are patient enough; (ii) the set of
equilibrium payoffs is obtained by considering the payoffs of a simple one-shot auxiliary game;
and (iii) the switching costs have a negative impact on a player in the undiscounted infinitely
repeated game but can be beneficial to him in a finitely repeated game or in a discounted game.
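For comparison, the classical Folk Theorem without switching costs asserts, under standard conditions, that as the discount factor $\delta \to 1$ the set of subgame perfect equilibrium payoffs approaches the set of feasible and individually rational payoffs
\[
\{ x \in \mathrm{co}\, g(A) \;:\; x_i \ge \underline{v}_i \text{ for all } i \},
\]
where $\mathrm{co}\, g(A)$ is the convex hull of the stage-game payoffs and $\underline{v}_i$ is player $i$'s minmax level; result (ii) above states that with switching costs an analogous description holds, with the payoffs taken from a simple one-shot auxiliary game.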