
    Robust Opponent Modeling in Real-Time Strategy Games using Bayesian Networks

    Opponent modeling is a key challenge in Real-Time Strategy (RTS) games: the environment is adversarial, and the player cannot predict the future actions of her opponent. Additionally, the environment is only partially observable due to the fog of war. In this paper, we propose an opponent model that is robust to the observation noise caused by the fog of war. To cope with the uncertainty in these games, we design a Bayesian network whose parameters are learned from an unlabeled dataset of game logs, so no human expert knowledge is required. We evaluate our model on StarCraft, which is considered a unified test-bed in this domain, and compare it with the model proposed by Synnaeve and Bessiere. Experimental results on recorded games of human players show that the proposed model predicts the opponent's future decisions more effectively. Using this model, it is possible to create an adaptive game intelligence algorithm for RTS games in which the concept of build order (the order of building construction) exists.
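
    A minimal sketch of the general idea (not the paper's actual network structure or StarCraft features; the class, the feature names and the log format below are hypothetical): a naive-Bayes-style model whose conditional probability tables are plain counts over game logs, and which handles the fog of war simply by conditioning only on the evidence that was actually observed.

        # Illustrative sketch, not the paper's model: a naive-Bayes-style
        # opponent model whose parameters are counts over unlabeled game logs.
        # All building names and the log format are hypothetical.
        import math
        from collections import Counter, defaultdict

        class BuildOrderPredictor:
            def __init__(self, smoothing=1.0):
                self.smoothing = smoothing              # Laplace smoothing for unseen evidence
                self.label_counts = Counter()           # counts for P(next_build)
                self.obs_counts = defaultdict(Counter)  # counts for P(observation | next_build)
                self.vocab = set()                      # all observation types ever seen

            def fit(self, game_logs):
                # game_logs: iterable of (observed_buildings, next_build) pairs
                # extracted automatically from replays, i.e. no human labelling.
                for observations, next_build in game_logs:
                    self.label_counts[next_build] += 1
                    for obs in observations:
                        self.obs_counts[next_build][obs] += 1
                        self.vocab.add(obs)

            def predict(self, observations):
                # Fog of war = missing evidence: condition only on what was seen.
                total = sum(self.label_counts.values())
                best, best_score = None, float("-inf")
                for label, count in self.label_counts.items():
                    score = math.log(count / total)
                    denom = sum(self.obs_counts[label].values()) + self.smoothing * len(self.vocab)
                    for obs in observations:
                        score += math.log((self.obs_counts[label][obs] + self.smoothing) / denom)
                    if score > best_score:
                        best, best_score = label, score
                return best

        logs = [(["gateway", "cybernetics_core"], "stargate"),
                (["gateway", "gateway"], "zealot_rush"),
                (["gateway", "cybernetics_core"], "dragoon_push")]
        model = BuildOrderPredictor()
        model.fit(logs)
        print(model.predict(["gateway"]))   # best guess despite very partial scouting

    A full Bayesian network would add explicit dependencies between buildings and time, but the ingredients the abstract describes are the same: parameters estimated by counting over logs, and inference that tolerates missing observations.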

    Learning with Opponent-Learning Awareness

    Multi-agent settings are quickly gaining importance in machine learning. This includes a plethora of recent work on deep multi-agent reinforcement learning, but can also be extended to hierarchical RL, generative adversarial networks and decentralised optimisation. In all these settings the presence of multiple learning agents renders the training problem non-stationary and often leads to unstable training or undesired final results. We present Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in the environment. The LOLA learning rule includes a term that accounts for the impact of one agent's policy on the anticipated parameter update of the other agents. Results show that the encounter of two LOLA agents leads to the emergence of tit-for-tat and therefore cooperation in the iterated prisoners' dilemma (IPD), while independent learning does not. In this domain, LOLA also receives higher payouts than a naive learner and is robust against exploitation by higher-order gradient-based methods. Applied to repeated matching pennies, LOLA agents converge to the Nash equilibrium. In a round-robin tournament we show that LOLA agents successfully shape the learning of a range of multi-agent learning algorithms from the literature, resulting in the highest average returns on the IPD. We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL. The method thus scales to large parameter and input spaces and nonlinear function approximators. We apply LOLA to a grid-world task with an embedded social dilemma using recurrent policies and opponent modelling. By explicitly considering the learning of the other agent, LOLA agents learn to cooperate out of self-interest. The code is at github.com/alshedivat/lola
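
    As a concrete illustration of the update rule, here is an exact-gradient toy, not the policy-gradient estimator or the IPD/matching-pennies setup used in the paper; the bilinear payoffs, step sizes and horizon below are assumptions. Two agents with scalar parameters play a zero-sum game V1(x, y) = x*y, V2(x, y) = -x*y. A naive learner ascends only its own gradient; a LOLA learner adds the term that differentiates through the opponent's anticipated gradient step, which here damps the cycling around the equilibrium.

        # Illustrative sketch only: first-order LOLA updates on a zero-sum
        # bilinear toy game (a stand-in for matching pennies with exact
        # gradients). Payoffs, step sizes and horizon are assumptions.
        alpha, eta = 0.1, 0.5   # own learning rate, assumed opponent learning rate
        steps = 300

        def run(lola):
            x, y = 1.0, 1.0                     # start away from the Nash point (0, 0)
            for _ in range(steps):
                dV1_dx, dV1_dy = y, x           # gradients of V1(x, y) = x*y
                dV2_dx, dV2_dy = -y, -x         # gradients of V2(x, y) = -x*y
                d2V1_dxdy, d2V2_dxdy = 1.0, -1.0
                gx, gy = dV1_dx, dV2_dy         # naive simultaneous gradient ascent
                if lola:
                    # LOLA correction: each agent accounts for the opponent's
                    # anticipated update via grad wrt the opponent's parameter
                    # times the mixed second derivative of the opponent's value.
                    gx += eta * dV1_dy * d2V2_dxdy
                    gy += eta * dV2_dx * d2V1_dxdy
                x, y = x + alpha * gx, y + alpha * gy
            return x, y

        print("naive :", run(lola=False))   # spirals away from the equilibrium
        print("LOLA  :", run(lola=True))    # converges toward the Nash point (0, 0)

    The same structure, with the exact gradients replaced by policy-gradient estimates, is what the abstract refers to as the model-free extension of the update rule.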

    Poker as a Skill Game: Rational vs Irrational Behaviors

    In many countries poker is one of the most popular card games. Although each variant of poker has its own rules, all involve the use of money to make the challenge meaningful. Nowadays, in the collective consciousness, some variants of poker are regarded as games of skill, others as gambling. A poker table can be viewed as a psychology lab where human behavior can be observed and quantified. This work provides a preliminary analysis of the role of rationality in poker games, using a stylized version of Texas Hold'em. In particular, we compare the performance of two different kinds of players, i.e., rational vs irrational players, during a poker tournament. Results show that these behaviors (i.e., rationality and irrationality) affect both the outcomes of challenges and the way poker should be classified. Comment: 15 pages, 5 figures
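
    A toy illustration of this kind of comparison (not the stylized Texas Hold'em model of the paper; the hand strengths, betting rule and payoffs below are made up): a "rational" player bets only when its hand is strong, an "irrational" player bets at random, and over many hands the rational policy accumulates chips, which is the sort of skill signal the study examines.

        # Illustrative sketch only: Monte Carlo comparison of a rational
        # (threshold-betting) player vs an irrational (random-betting) player.
        # Hand strengths are uniform draws, not real poker hands.
        import random

        def play_hand(rng):
            rational_hand = rng.random()
            irrational_hand = rng.random()
            rational_bets = rational_hand > 0.5     # bet only with a strong hand
            irrational_bets = rng.random() > 0.5    # bet regardless of the hand
            if rational_bets and not irrational_bets:
                return 1                            # opponent folds, rational takes the pot
            if irrational_bets and not rational_bets:
                return -1                           # rational folds
            if not (rational_bets or irrational_bets):
                return 0                            # both check, nothing happens
            return 1 if rational_hand > irrational_hand else -1  # showdown

        rng = random.Random(0)
        net = sum(play_hand(rng) for _ in range(100_000))
        print(f"rational player's net result over 100,000 hands: {net:+d}")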