### Abstract

In this lecture, we develop the notion of "adaptive learning" as proposed by Milgrom and Roberts [1]. Although the learning definition they give is of interest in its own right, it primarily derives its power in dominance-solvable games, or in games where there is a straightforward characterization of the set of strategies surviving iterated strict dominance (hereafter ISD).

Throughout the lecture we consider a finite N-player game, where each player i has a finite pure action set Ai; let A = ∏i Ai. We let ai denote a pure action for player i, and si ∈ ∆(Ai) a mixed action for player i. We typically view si as a vector in R^Ai, with si(ai) equal to the probability that player i places on ai. We let Πi(a) denote the payoff to player i when the composite pure action vector is a, and, by an abuse of notation, also let Πi(s) denote the expected payoff to player i when the composite mixed action vector is s. We let BRi(s−i) denote the best response mapping of player i; here s−i is the composite mixed action vector of the players other than i.

We also need some additional notation involving ISD. Given T ⊂ ∏i Ai, we define Ui(T) as follows:

Ui(T) = {ai ∈ Ai : for all si ∈ ∆(Ai), there exists a−i ∈ T−i s.t. Πi(ai, a−i) ≥ Πi(si, a−i)}.

Here T−i = ∏j≠i Tj, where Tj is the projection of T onto Aj. In other words, Ui(T) is the set of pure strategies of player i that are not strictly dominated by any mixed strategy, given that all other players play action vectors in T−i. We let U(T) = ∏i Ui(T), and we use U^k(T) to denote the set of pure strategies remaining after k applications of U to the set T, with U^0 equal to the identity map. It is straightforward to check the following claims (see Lemmas 1 and 2 of [1]).
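The operator U and its iterates U^k can be sketched in code. The sketch below is a simplification of the definition above: it removes an action ai only when some *pure* action bi strictly dominates it against every opponent profile in T−i, whereas the lecture's definition allows domination by mixed strategies, which would require solving a small linear program to check exactly. The payoff representation (a dict from joint pure-action profiles to payoff tuples) is an assumption for illustration.

```python
from itertools import product

def insert_at(a_minus, i, ai):
    """Rebuild a full action profile by inserting player i's action ai
    into the opponents' profile a_minus at position i."""
    a = list(a_minus)
    a.insert(i, ai)
    return tuple(a)

def U_i(i, payoffs, T):
    """One application of U_i: the actions of player i in T[i] that are not
    strictly dominated by another *pure* action in T[i], given that the
    opponents play profiles in T_{-i} (a simplification of mixed dominance)."""
    others = [T[j] for j in range(len(T)) if j != i]
    survivors = []
    for ai in T[i]:
        dominated = False
        for bi in T[i]:
            if bi == ai:
                continue
            # bi strictly dominates ai if it does strictly better
            # against every opponent profile a_{-i} in T_{-i}.
            if all(payoffs[insert_at(a_minus, i, bi)][i]
                   > payoffs[insert_at(a_minus, i, ai)][i]
                   for a_minus in product(*others)):
                dominated = True
                break
        if not dominated:
            survivors.append(ai)
    return survivors

def iterate_U(payoffs, T):
    """Apply U(T) = prod_i U_i(T) until a fixed point is reached:
    the set of pure strategies surviving ISD (under pure dominance)."""
    while True:
        T_next = [U_i(i, payoffs, T) for i in range(len(T))]
        if T_next == T:
            return T
        T = T_next

# Example: a Prisoner's Dilemma, where D strictly dominates C for both
# players, so ISD leaves only the profile (D, D).
pd_payoffs = {
    ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
}
print(iterate_U(pd_payoffs, [['C', 'D'], ['C', 'D']]))  # [['D'], ['D']]
```

In a dominance-solvable game like this one, the iteration terminates at a singleton set; in general it stops at the largest fixed point of U reachable from the full action space, which is exactly the set surviving ISD when mixed dominance is checked as well.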

Year: 2007
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.5197
Provided by: CiteSeerX