AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents
A satisfactory multiagent learning algorithm should, {\em at a minimum},
learn to play optimally against stationary opponents and converge to a Nash
equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has
been proven to have these two properties in 2-player 2-action repeated
games, assuming that the opponent's (mixed) strategy is observable. In this
paper we present AWESOME, the first algorithm that is guaranteed to have these
two properties in {\em all} repeated (finite) games. It requires only that the
other players' actual actions (not their strategies) can be observed at each
step. It also learns to play optimally against opponents that {\em eventually
become} stationary. The basic idea behind AWESOME ({\em Adapt When Everybody is
Stationary, Otherwise Move to Equilibrium}) is to try to adapt to the others'
strategies when they appear stationary, but otherwise to retreat to a
precomputed equilibrium strategy. The techniques used to prove the properties
of AWESOME are fundamentally different from those used for previous algorithms,
and may also help in analyzing other multiagent learning algorithms.
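The abstract's core idea (adapt when the others appear stationary, otherwise retreat to a precomputed equilibrium) can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the class name, the window-based stationarity test, the drift threshold, and the `best_response_fn` callback are all hypothetical simplifications:

```python
import random

class AwesomeSketch:
    """Illustrative sketch (not the published AWESOME algorithm) of the
    idea: best-respond when opponents appear stationary, otherwise play a
    precomputed equilibrium strategy. Thresholds here are hypothetical."""

    def __init__(self, equilibrium_strategy, best_response_fn, n_actions,
                 window=50, threshold=0.1):
        self.equilibrium_strategy = equilibrium_strategy  # precomputed mixed strategy
        self.best_response_fn = best_response_fn  # empirical opponent dist -> action
        self.n_actions = n_actions
        self.window = window        # recent opponent actions used in the test
        self.threshold = threshold  # max drift still counted as "stationary"
        self.history = []           # observed opponent actions (not strategies)

    def observe(self, opponent_action):
        """Record the opponent's actual played action for this step."""
        self.history.append(opponent_action)

    def _empirical(self, actions):
        """Empirical action distribution over a list of observed actions."""
        counts = [0] * self.n_actions
        for a in actions:
            counts[a] += 1
        total = len(actions) or 1
        return [c / total for c in counts]

    def opponent_seems_stationary(self):
        """Compare two recent windows of play; small drift => stationary."""
        if len(self.history) < 2 * self.window:
            return False
        old = self._empirical(self.history[-2 * self.window:-self.window])
        new = self._empirical(self.history[-self.window:])
        drift = sum(abs(o - n) for o, n in zip(old, new))
        return drift <= self.threshold

    def act(self):
        if self.opponent_seems_stationary():
            # Adapt: best-respond to the observed empirical distribution.
            return self.best_response_fn(self._empirical(self.history[-self.window:]))
        # Otherwise retreat to the precomputed equilibrium (mixed) strategy.
        return random.choices(range(self.n_actions),
                              weights=self.equilibrium_strategy)[0]
```

Against an opponent that keeps playing the same action, the stationarity test eventually passes and the agent switches from its equilibrium mixture to a best response, mirroring the "learns a best response against stationary opponents" property in spirit.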
ATTac-2000: An Adaptive Autonomous Bidding Agent
The First Trading Agent Competition (TAC) was held from June 22nd to July
8th, 2000. TAC was designed to create a benchmark problem in the complex domain
of e-marketplaces and to motivate researchers to apply unique approaches to a
common task. This article describes ATTac-2000, the first-place finisher in
TAC. ATTac-2000 uses a principled bidding strategy that includes several
elements of adaptivity. In addition to ATTac-2000's success at the competition,
isolated empirical results are presented that indicate the robustness and
effectiveness of its adaptive strategy.