Fictitious Play Outperforms Counterfactual Regret Minimization
We compare the performance of two popular algorithms, fictitious play and
counterfactual regret minimization, in approximating Nash equilibrium in
multiplayer games. Despite recent success of counterfactual regret minimization
in multiplayer poker and conjectures of its superiority, we show that
fictitious play leads to improved Nash equilibrium approximation over a variety
of game classes and sizes.
Comment: Fixed a bug in the 5-player CFR implementation from the prior version and reran the 5-player experiment.
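The fictitious play procedure the abstract compares against CFR can be sketched in a few lines: each player repeatedly best-responds to the empirical frequency of the opponent's past actions, and the empirical strategies converge to Nash equilibrium in two-player zero-sum games. The sketch below uses rock-paper-scissors as an illustrative game (the paper's benchmarks are multiplayer games, not this one); everything here is a minimal assumption-laden example, not the paper's implementation.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum, so the
# column player's payoff is the negative). Illustrative game only.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def fictitious_play(A, iterations=20000):
    """Each player best-responds to the opponent's empirical action frequencies."""
    n, m = A.shape
    row_counts = np.zeros(n)
    col_counts = np.zeros(m)
    row_counts[0] += 1  # arbitrary initial actions
    col_counts[0] += 1
    for _ in range(iterations):
        # Best response to the opponent's empirical mixed strategy so far
        row_br = np.argmax(A @ (col_counts / col_counts.sum()))
        col_br = np.argmin((row_counts / row_counts.sum()) @ A)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_strategy, col_strategy = fictitious_play(A)
print(row_strategy)  # approaches the uniform Nash equilibrium (1/3, 1/3, 1/3)
```

In rock-paper-scissors the unique Nash equilibrium is uniform play, so the empirical frequencies drift toward (1/3, 1/3, 1/3); the paper's question is how quickly this approximation improves relative to CFR across game classes.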
Economics of Conflict and Terrorism
This book contributes to the literature on conflict and terrorism through a selection of articles that deal with theoretical, methodological and empirical issues related to the topic. The papers study important problems, are original in their approach and innovative in the techniques used. The book will be useful for researchers in the fields of game theory, economics and political science.
Bayesian opponent modeling in adversarial game environments.
This thesis investigates the use of Bayesian analysis of an opponent's behaviour in order to determine the desired goals or strategy used by a given adversary. A terrain analysis approach utilising the A* algorithm is investigated, where a probability distribution between discrete behaviours of an opponent relative to a set of possible goals is generated. The Bayesian analysis of agent behaviour accurately determines the intended goal of an opponent agent, even when the opponent's actions are altered randomly. The environment of Poker is introduced and abstracted for ease of analysis. Bayes' theorem is used to generate an effective opponent model, categorizing behaviour according to its similarity with known styles of opponent. The accuracy of Bayes' rule yields a notable improvement in the performance of an agent once an opponent's style is understood. A hybrid of the Bayesian style predictor and a neuroevolutionary approach is shown to lead to effective dynamic play, in comparison to agents that do not use an opponent model. The use of recurrence in evolved networks is also shown to improve the performance and generalizability of an agent in a multiplayer environment. These strategies are then employed in the full-scale environment of Texas Hold'em, where a betting round-based approach proves useful in determining and counteracting an opponent's play. It is shown that the use of opponent models, with the adaptive benefits of neuroevolution, aids the performance of an agent even when the behaviour of an opponent does not necessarily fit within the strict definitions of opponent 'style'.
Engineering and Physical Sciences Research Council (EPSRC)
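The style-categorization step the abstract describes can be illustrated with a plain Bayes'-rule update over a small set of discrete opponent styles. The style names, action set, and likelihood values below are hypothetical placeholders, not figures from the thesis; this is a sketch of the general technique only.

```python
# Hypothetical opponent styles and action likelihoods (illustrative values).
styles = ["loose-aggressive", "tight-passive"]
prior = {"loose-aggressive": 0.5, "tight-passive": 0.5}

# P(action | style): assumed per-style probabilities of each betting action.
likelihood = {
    "loose-aggressive": {"fold": 0.1, "call": 0.3, "raise": 0.6},
    "tight-passive":    {"fold": 0.5, "call": 0.4, "raise": 0.1},
}

def update(posterior, action):
    """One step of Bayes' rule: P(style | action) is proportional to
    P(action | style) * P(style), renormalized over all styles."""
    unnormalized = {s: posterior[s] * likelihood[s][action] for s in styles}
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

belief = dict(prior)
for observed in ["raise", "raise", "call"]:
    belief = update(belief, observed)
print(belief)  # the 'loose-aggressive' posterior dominates (about 0.96)
```

After two raises and a call, the posterior concentrates on the aggressive style, which is the kind of classification the thesis then feeds into a neuroevolved playing policy.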
BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference
Foundations of Trusted Autonomy
Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies