121,594 research outputs found
Intelligent Agents for the Game of Go
Monte-Carlo Tree Search (MCTS) is a very efficient recent technique for games and planning, particularly in the high-dimensional case, when the number of time steps is moderate and when there is no natural evaluation function. Surprisingly, MCTS makes very little use of learning. In this paper, we present four techniques (ontologies, Bernstein races, Contextual Monte-Carlo and poolRave) for learning agents in Monte-Carlo Tree Search, and experiment with them in difficult games, in particular the game of Go.
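For readers unfamiliar with the baseline these learning techniques extend, the core MCTS loop (selection, expansion, simulation, backpropagation) can be sketched as plain UCT on a toy game. This is an illustrative reconstruction, not the paper's code, and the game (take 1 or 2 stones, taking the last stone wins) is an assumption chosen for brevity:

```python
import math
import random

class Node:
    """One game state in the search tree."""
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

def uct_select(node, c=1.4):
    # UCB1: exploit the mean reward, explore rarely-visited children.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, to_move, root_player):
    """Play random moves to the end; 1.0 if the root player wins."""
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return 1.0 if to_move == root_player else 0.0
        to_move = 1 - to_move
    return 0.0

def mcts(stones, root_player=0, iters=500):
    root = Node(stones, root_player)
    for _ in range(iters):
        node = root
        # Selection: descend while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # Expansion: add one untried child.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node = Node(node.stones - m, 1 - node.to_move, node, m)
            node.parent.children.append(node)
        # Simulation: terminal states score directly, otherwise random rollout.
        if node.stones == 0:
            reward = 1.0 if (1 - node.to_move) == root_player else 0.0
        else:
            reward = rollout(node.stones, node.to_move, root_player)
        # Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.wins += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

The paper's four techniques plug into this skeleton at different points, e.g. poolRave biases the rollout policy rather than replacing the tree search.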
An intelligent Othello player combining machine learning and game specific heuristics
Artificial intelligence applications in board games have been around since as early as the 1950s, and computer programs have been developed for games including Checkers, Chess, and Go with varying results. Although general game-tree search algorithms have been designed to work on games meeting certain requirements (e.g. zero-sum, two-player, perfect or imperfect information, etc.), the best results come from combining these with specific knowledge of game strategies. In this MS thesis, we present an intelligent Othello game player that combines game-specific heuristics with machine learning techniques in move selection. Five game-specific heuristics, namely corner detection, killer move detection, blocking, blacklisting, and pattern recognition, have been proposed. Some of these heuristics can be generalized to fit other games by removing the Othello-specific components and replacing them with specific knowledge of the target game. For machine learning, the standard Minimax algorithm, along with a custom variation, is used as a base; genetic algorithms and neural networks are applied to learn the static evaluation function. The five game-specific techniques (or a subset of them) are executed first, and if no move is found, Minimax game-tree search is performed. All techniques, and several subsets of them, have been tested against three deterministic agents, one non-deterministic agent, and three human players of varying skill levels. The results show that the combined Othello player performs better in general. We present the study results on the basis of four main metrics: performance (percentage of games won), speed, predictability of opponent, and usage situation.
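The "heuristics first, fall back to Minimax" pipeline described above can be sketched as follows. This is a hypothetical illustration, not the thesis's code: the corner heuristic is one of the five named heuristics, while the game-model callbacks (`legal_moves`, `apply_move`, `evaluate`) are assumed interfaces:

```python
def corner_move(state, moves):
    """Corner-detection heuristic: play a corner square if one is available."""
    corners = {(0, 0), (0, 7), (7, 0), (7, 7)}
    for m in moves:
        if m in corners:
            return m
    return None

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Plain Minimax; returns (value, best_move)."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    best_val = float('-inf') if maximizing else float('inf')
    for m in moves:
        val, _ = minimax(apply_move(state, m), depth - 1,
                         not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

def select_move(state, heuristics, legal_moves, apply_move, evaluate, depth=3):
    moves = legal_moves(state)
    for h in heuristics:          # game-specific checks run first
        m = h(state, moves)
        if m is not None:
            return m
    _, m = minimax(state, depth, True, legal_moves, apply_move, evaluate)
    return m                      # no heuristic fired: fall back to tree search
```

In the thesis, `evaluate` is where the learned component lives: its weights are tuned by genetic algorithms or replaced by a neural network.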
Modelling Socially Intelligent Agents
The perspective of modelling agents, rather than using them for a specified purpose, entails a difference in approach; in particular, an emphasis on veracity as opposed to efficiency. An approach using evolving populations of mental models is described that goes some way to meet these concerns. It is then argued that social intelligence is not merely intelligence plus interaction, but should allow for individual relationships to develop between agents. This means that, at least, agents must be able to distinguish, identify, model and address other agents, either individually or in groups. In other words, purely homogeneous interaction is insufficient. Two example models are described that illustrate these concerns, the second in detail, in which agents act and communicate socially, as determined by the evolution of their mental models. Finally, some problems that arise in the interpretation of such simulations are discussed.
The Computational Complexity of Angry Birds
The physics-based simulation game Angry Birds has been heavily researched by
the AI community over the past five years, and has been the subject of a
popular AI competition that is currently held annually as part of a leading AI
conference. Developing intelligent agents that can play this game effectively
has been an incredibly complex and challenging problem for traditional AI
techniques to solve, even though the game is simple enough that any human
player could learn and master it within a short time. In this paper we analyse
how hard the problem really is, presenting several proofs for the computational
complexity of Angry Birds. By using a combination of several gadgets within
this game's environment, we are able to demonstrate that the decision problem
of solving general levels for different versions of Angry Birds is either
NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard. Proof of NP-hardness is
by reduction from 3-SAT, whilst proof of PSPACE-hardness is by reduction from
True Quantified Boolean Formula (TQBF). Proof of EXPTIME-hardness is by
reduction from G2, a known EXPTIME-complete problem similar to that used for
many previous games such as Chess, Go and Checkers. To the best of our
knowledge, this is the first time that a single-player game has been proven
EXPTIME-hard. This is achieved by using stochastic game engine dynamics to
effectively model the real world, or in our case the physics simulator, as the
opponent against which we are playing. These proofs can also be extended to
other physics-based games with similar mechanics.
Comment: 55 pages, 39 figures
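To make the NP-hardness framing concrete: under a reduction from 3-SAT, a level assembled from variable and clause gadgets is solvable exactly when the encoded formula is satisfiable. The brute-force checker below is a toy illustration of the target problem, not material from the paper:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force 3-SAT check.

    A clause is a list of literals; literal k > 0 means x_k is true,
    k < 0 means x_k is false.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

The reduction means any polynomial-time level solver would decide instances like these, which is what makes solving general Angry Birds levels NP-hard.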
Helping AI to Play Hearthstone: AAIA'17 Data Mining Challenge
This paper summarizes the AAIA'17 Data Mining Challenge: Helping AI to Play
Hearthstone which was held between March 23, and May 15, 2017 at the Knowledge
Pit platform. We briefly describe the scope and background of this competition
in the context of a more general project related to the development of an AI
engine for video games, called Grail. We also discuss the outcomes of this
challenge and demonstrate how predictive models for the assessment of a player's
winning chances can be utilized in the construction of an intelligent agent for
playing Hearthstone. Finally, we show a few selected machine learning
approaches for modeling state and action values in Hearthstone. We provide
evaluation for a few promising solutions that may be used to create more
advanced types of agents, especially in conjunction with Monte Carlo Tree
Search algorithms.
Comment: Federated Conference on Computer Science and Information Systems (FedCSIS 2017), Prague, Czech Republic
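The idea of using a win-chance predictor as a state-value function for an agent can be sketched as a greedy one-ply policy. This is an illustrative assumption, not the challenge's code: the feature names and the hand-tuned linear model stand in for whatever predictive model a competitor trained:

```python
import math

def win_chance(state):
    """Stand-in predictive model: a linear score squashed to (0, 1).

    The features and weights here are illustrative assumptions.
    """
    score = (0.08 * (state["my_health"] - state["opp_health"])
             + 0.15 * (state["my_minions"] - state["opp_minions"])
             + 0.05 * state["cards_in_hand"])
    return 1.0 / (1.0 + math.exp(-score))

def best_action(state, actions, simulate, value=win_chance):
    """Greedy policy: simulate each action, keep the best predicted state."""
    return max(actions, key=lambda a: value(simulate(state, a)))
```

As the abstract notes, such a value model becomes more powerful when combined with lookahead, e.g. replacing random rollouts in Monte Carlo Tree Search with the model's state evaluations.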