Rolling Horizon NEAT for General Video Game Playing
This paper presents a new Statistical Forward Planning (SFP) method, Rolling
Horizon NeuroEvolution of Augmenting Topologies (rhNEAT). Unlike traditional
Rolling Horizon Evolution, where an evolutionary algorithm is in charge of
evolving a sequence of actions, rhNEAT evolves the weights and connections of a
neural network in real-time, planning several steps ahead before returning an
action to execute in the game. Different versions of the algorithm are explored
in a collection of 20 GVGAI games, and compared with other SFP methods and
state-of-the-art results. Although its results are not better overall than those of other
SFP methods, rhNEAT's ability to adapt to changing game features has
allowed it to set new state-of-the-art records in games that other methods
have traditionally struggled with. The algorithm proposed here is general and
introduces a new way of representing information within rolling horizon
evolution techniques.
Comment: 8 pages, 5 figures, accepted for publication at the IEEE Conference on Games (CoG) 2020
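To make the rolling-horizon idea concrete, the sketch below shows one decision step in the spirit of rhNEAT: a small population of policy networks is evolved against a forward model, and the first action recommended by the best network is returned. Everything here is illustrative: ToyState and its copy/advance/score methods stand in for a GVGAI-style forward model, and the fixed-topology linear "network" is a simplification, since rhNEAT evolves full NEAT genomes (topology as well as weights).

```python
import random

# Minimal stand-ins for a GVGAI-style forward model. All names here
# (ToyState, advance, score) are illustrative assumptions, not the real API.
class ToyState:
    def __init__(self, x=0):
        self.x = x
    def copy(self):
        return ToyState(self.x)
    def advance(self, action):   # apply one action to the simulated state
        self.x += action
    def score(self):             # heuristic value of the simulated state
        return self.x

ACTIONS = [-1, 0, 1]

def policy(weights, state):
    # Tiny fixed-topology "network": a linear score per action. rhNEAT
    # instead evolves full NEAT genomes (topology as well as weights).
    scores = [weights[i] * state.x + weights[3 + i] for i in range(len(ACTIONS))]
    return ACTIONS[scores.index(max(scores))]

def evaluate(weights, state, horizon):
    # Roll the forward model `horizon` steps using the candidate network.
    sim = state.copy()
    for _ in range(horizon):
        sim.advance(policy(weights, sim))
    return sim.score()

def rh_evolve_act(state, horizon=10, pop_size=8, generations=5, sigma=0.3):
    # One decision step: evolve weight vectors against rollouts of the
    # forward model, then return the best network's first action.
    pop = [[random.gauss(0, 1) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: evaluate(w, state, horizon), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [[g + random.gauss(0, sigma) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return policy(max(pop, key=lambda w: evaluate(w, state, horizon)), state)

print(rh_evolve_act(ToyState()))  # e.g. 1 (moving right maximises the toy score)
```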
TAG: A Tabletop Games Framework
This paper is part of: 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE 2020)
Tabletop games come in a variety of forms, including board
games, card games, and dice games. In recent years, their
complexity has increased considerably, with many components, rules that change
dynamically throughout the game, diverse player roles, and a series of control
parameters that influence a game's balance. As such, they pose novel and
intricate challenges for Artificial Intelligence methods, yet research largely
focuses on classical board games such as chess and Go. In this work we
introduce the Tabletop Games (TAG) framework, which promotes research into
general AI in modern tabletop games by facilitating the implementation of new
games and AI players, while providing analytics to capture the complexities of
the challenges proposed. We include preliminary results with sample AI players,
showing some moderate success with plenty of room for improvement, and discuss
further developments and new research directions.
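As a flavour of what implementing an AI player in such a framework involves, here is a minimal, hypothetical sketch of a one-step-lookahead sample player. The GameState interface below is an assumption for illustration; TAG itself is written in Java and its actual API differs.

```python
import random

# Hypothetical interfaces, loosely modelled on what a general tabletop-games
# framework might expose. TAG's real (Java) API differs.
class GameState:
    def legal_actions(self): raise NotImplementedError
    def copy(self): raise NotImplementedError
    def apply(self, action): raise NotImplementedError
    def heuristic(self, player_id): raise NotImplementedError

class OneStepLookaheadPlayer:
    """Sample AI player: simulate each legal action once on a copy of the
    state and pick the best heuristic outcome (ties broken at random)."""
    def __init__(self, player_id):
        self.player_id = player_id

    def get_action(self, state):
        scored = []
        for action in state.legal_actions():
            sim = state.copy()
            sim.apply(action)
            scored.append((sim.heuristic(self.player_id), random.random(), action))
        return max(scored)[2]
```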
General Video Game for 2 players: Framework and competition
This paper presents a new track of the General Video Game AI competition for generic Artificial Intelligence agents, which features both competitive and cooperative real-time stochastic two-player games. The aim of the competition is to test agents directly against each other in more complex and dynamic environments, where the behaviour of the other player introduces additional uncertainty into a game. The framework, server functionality and general competition setup are analysed, and the results of experiments with several sample controllers are presented. The results indicate that Open Loop Monte Carlo Tree Search is currently the overall leading algorithm on this set of games.
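Since Open Loop MCTS is singled out as the leading algorithm, a brief sketch of the open-loop variant may help: the tree stores only statistics per action sequence, and every iteration re-simulates from the current root state, so node values average over the game's stochasticity. The state interface (copy/advance/score) and the toy stochastic game below are assumptions for illustration, not the competition framework's API.

```python
import math, random

class Node:
    # The tree stores only per-action statistics, never states: this is
    # what makes the search "open loop".
    def __init__(self, n_actions):
        self.visits = [0] * n_actions
        self.value = [0.0] * n_actions
        self.children = {}  # action index -> Node

def ucb(node, a, c=1.4):
    if node.visits[a] == 0:
        return float("inf")  # try every action at least once
    total = sum(node.visits)
    return node.value[a] / node.visits[a] + c * math.sqrt(math.log(total) / node.visits[a])

def olmcts(root_state, n_actions, depth=10, iterations=300):
    root = Node(n_actions)
    for _ in range(iterations):
        state, node, path = root_state.copy(), root, []
        for _ in range(depth):               # re-simulate from the root each time,
            a = max(range(n_actions), key=lambda x: ucb(node, x))
            state.advance(a)                 # so stats average over stochasticity
            path.append((node, a))
            node = node.children.setdefault(a, Node(n_actions))
        reward = state.score()
        for node, a in path:                 # back-propagate along the action path
            node.visits[a] += 1
            node.value[a] += reward
    return max(range(n_actions), key=lambda a: root.visits[a])

class StochasticToy:
    # Noisy toy game standing in for a stochastic environment.
    def __init__(self, x=0): self.x = x
    def copy(self): return StochasticToy(self.x)
    def advance(self, a): self.x += (a - 1) + random.choice([-1, 0, 0])
    def score(self): return self.x

print(olmcts(StochasticToy(), n_actions=3))  # usually picks action 2 (move right)
```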
Self-adaptive MCTS for General Video Game Playing
Monte Carlo Tree Search (MCTS) has shown particular success in General Game Playing (GGP) and General Video Game Playing (GVGP), and many enhancements and variants have been developed. Recently, an on-line adaptive parameter tuning mechanism for MCTS agents has been proposed that almost achieves the same performance as off-line tuning in GGP. In this paper we apply the same approach to GVGP, using the popular General Video Game AI (GVGAI) framework, in which the time allowed to make a decision is only 40 ms. We design three Self-Adaptive MCTS (SA-MCTS) agents that optimize on-line the parameters of a standard non-self-adaptive MCTS agent of GVGAI. The three agents select the parameter values using Naïve Monte-Carlo, an Evolutionary Algorithm and an N-Tuple Bandit Evolutionary Algorithm, respectively, and are tested on 20 single-player games of GVGAI. The SA-MCTS agents achieve more robust results on the tested games. With the same time setting, they perform similarly to the baseline standard MCTS agent in the games for which the baseline agent performs well, and significantly improve the win rate in the games for which the baseline agent performs poorly. As validation, we also test the performance of non-self-adaptive MCTS instances that use the most-sampled parameter settings from the on-line tuning of each of the three SA-MCTS agents for each game. Results show that these parameter settings improve the win rate on the games Wait for Breakfast and Escape by 4 times and 150 times, respectively.
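To illustrate the on-line tuning loop, here is a hedged sketch in the spirit of the Naïve Monte-Carlo allocator: each parameter keeps independent marginal reward statistics, and before each search iteration the tuner either explores (random values) or exploits (best marginal value per parameter). The parameter space, epsilon value and reward signal are illustrative assumptions, not the paper's exact configuration.

```python
import random

# Assumed parameter space for a standard MCTS agent; the paper's exact
# parameters and value ranges may differ.
PARAM_SPACE = {
    "exploration_c": [0.6, 1.0, 1.4, 2.0],
    "rollout_depth": [5, 10, 15],
}

class NaiveTuner:
    def __init__(self, space, epsilon=0.5):
        self.space, self.epsilon = space, epsilon
        # per-parameter, per-value [count, reward sum] marginal statistics
        self.stats = {p: {v: [0, 0.0] for v in vals} for p, vals in space.items()}

    def suggest(self):
        if random.random() < self.epsilon:  # exploration: sample each value
            return {p: random.choice(vals) for p, vals in self.space.items()}
        # exploitation: best marginal mean, independently per parameter
        return {p: max(vals, key=lambda v: self.stats[p][v][1] / max(1, self.stats[p][v][0]))
                for p, vals in self.space.items()}

    def update(self, params, reward):
        # credit the iteration's reward to every parameter value that was used
        for p, v in params.items():
            n, s = self.stats[p][v]
            self.stats[p][v] = [n + 1, s + reward]

tuner = NaiveTuner(PARAM_SPACE)
for _ in range(200):              # stand-in for iterations within the 40 ms budget
    params = tuner.suggest()
    reward = random.random()      # stand-in for one MCTS iteration's return
    tuner.update(params, reward)
print(tuner.suggest())
```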