7 research outputs found
N-Person Stochastic Games: Extensions of the Finite State Space Case and Correlation
In this chapter, we present a framework for n-person stochastic games with an infinite state space. Our main purpose is to present a correlated equilibrium theorem proved by Nowak and Raghavan [42] for discounted stochastic games with a measurable state space, where the correlation of strategies is based on ``public signals'' observed by the players.
Nonzero-sum Stochastic Games
This paper deals with stochastic games. We focus on nonzero-sum games and provide a detailed survey of selected recent results. In Section 1, we consider stochastic Markov games. A correlation of the players' strategies, involving ``public signals'', is described, and a correlated equilibrium theorem proved recently by Nowak and Raghavan for discounted stochastic games with a general state space is presented. We also report an extension of this result to a class of undiscounted stochastic games satisfying a uniform ergodicity condition.
Stopping games are related to stochastic Markov games. In Section 2, we describe a version of Dynkin's game related to the observation of a Markov process with a random mechanism assigning states to the players. Some recent contributions of the second author in this area are reported. The paper also contains a brief overview of the theory of nonzero-sum stochastic games and stopping games, which is far from complete.
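The ``public signals'' correlation device described above can be illustrated with a minimal sketch: before each play, both players observe the same random signal (here a fair coin) and use it to coordinate on one of several equilibria, thereby achieving a convex combination of the equilibrium payoffs. The 2x2 payoffs below are purely illustrative assumptions, not taken from the surveyed papers.

```python
import random

# Illustrative 2x2 coordination game (battle-of-the-sexes payoffs);
# each entry is (row player's payoff, column player's payoff).
payoff = {(0, 0): (3, 1), (1, 1): (1, 3), (0, 1): (0, 0), (1, 0): (0, 0)}

# Two pure Nash equilibria the players can coordinate on.
equilibria = [(0, 0), (1, 1)]

def play_round(rng):
    """Both players observe the same public coin toss and play the
    equilibrium it selects -- neither player can gain by deviating,
    since each realized action profile is itself a Nash equilibrium."""
    signal = rng.random() < 0.5
    return equilibria[0] if signal else equilibria[1]

rng = random.Random(0)
n = 20000
avg = [0.0, 0.0]
for _ in range(n):
    actions = play_round(rng)
    u = payoff[actions]
    avg[0] += u[0] / n
    avg[1] += u[1] / n

# The long-run payoff approaches (2, 2), the midpoint of the two
# equilibrium payoffs (3, 1) and (1, 3).
print(avg)
```

The same idea underlies the correlated equilibrium theorem reported above: a public randomization device lets the players implement convex combinations of equilibria without any private communication.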
On the Optimality of Strategies in Stochastic Dynamic Minimax Decision Models II (original title: Über die Optimalität von Strategien in stochastischen dynamischen Minimax-Entscheidungsmodellen II)
Monotonicity of optimal policies in a zero-sum game: a flow control model
The purpose of this paper is to illustrate how value iteration can be used in a zero-sum game to obtain structural results on the optimal (equilibrium) value and policy. This is done through the following example. We consider the problem of dynamic flow control of arriving customers into a finite buffer. The service rate may depend on the state of the system, may change in time, and is unknown to the controller. The goal of the controller is to design a policy that guarantees the best performance under the worst-case service conditions. The cost is composed of a holding cost, a cost for rejecting customers, and a cost that depends on the quality of service. We consider both the discounted and the expected average cost. The problem is studied in the framework of zero-sum Markov games, where the server, called player 1, is assumed to play against the flow controller, called player 2. Each player is assumed to have information on all previous actions of both players, as well as the current and past states of the system. We show that there exists an optimal policy for both players which is stationary (i.e., does not depend on time). A value iteration algorithm is used to obtain monotonicity properties of the optimal policies. For the case in which only two actions are available to one of the players, we show that this player's optimal policy is of a threshold type, and that optimal policies exist for both players that may need randomization in at most one state.
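The value-iteration scheme described in this abstract can be sketched on a toy instance. In the model below, each time slot the controller accepts or rejects one arriving customer into a buffer of size B, while the server adversarially picks a fast or slow service rate; all costs and rates are made-up assumptions, so this is a minimal sketch of the technique rather than the paper's exact model. Each Bellman update solves a 2x2 matrix game exactly, via a pure saddle point when one exists and the standard mixed-strategy formula otherwise.

```python
def matrix_game_value(M):
    """Value of a 2x2 zero-sum game (rows maximize, columns minimize)."""
    lo = max(min(row) for row in M)                    # maximin over rows
    hi = min(max(M[0][j], M[1][j]) for j in range(2))  # minimax over columns
    if abs(lo - hi) < 1e-12:                           # pure saddle point
        return lo
    a, b = M[0]
    c, d = M[1]
    return (a * d - b * c) / (a + d - b - c)           # mixed-strategy value

# Toy parameters (assumptions, not from the paper).
B = 5            # buffer size
h = 1.0          # per-customer holding cost
r = 5.0          # rejection cost
beta = 0.9       # discount factor
mu = [0.8, 0.3]  # departure probability when the server plays fast (0) / slow (1)

def expected_next(V, x, s):
    """Expected continuation value after a possible departure."""
    if x == 0:
        return V[0]
    return mu[s] * V[x - 1] + (1 - mu[s]) * V[x]

V = [0.0] * (B + 1)
for _ in range(100000):
    V_new = []
    for x in range(B + 1):
        # M[s][a]: server action s (row, maximizer) vs. controller
        # action a (column, minimizer; a=0 accept, a=1 reject).
        M = [[0.0, 0.0] for _ in range(2)]
        for s in (0, 1):
            for a in (0, 1):
                x_arr = min(x + 1, B) if a == 0 else x
                stage = h * x + (r if a == 1 else 0.0)
                M[s][a] = stage + beta * expected_next(V, x_arr, s)
        V_new.append(matrix_game_value(M))
    if max(abs(V_new[i] - V[i]) for i in range(B + 1)) < 1e-10:
        V = V_new
        break
    V = V_new

# Monotonicity of the value in the buffer occupancy, which the paper
# derives structurally via value iteration, shows up numerically here.
print(V)
```

Since the discounted Bellman operator is a contraction, the iteration converges regardless of the starting point; the monotone value function is what makes the controller's threshold-type policy plausible in this toy setting.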
Conditionally Stationary Equilibria in Discounted Dynamic Games
Keywords: Dynamic game, Subgame perfect equilibrium, Penal code, Simple strategy, Stationary strategy