
Playing Multi-Action Adversarial Games: Online Evolutionary Planning versus Tree Search

Abstract

We address the problem of playing turn-based multi-action adversarial games, which include many strategy games with extremely high branching factors because players take multiple actions each turn. This causes standard tree search methods, including Monte Carlo Tree Search (MCTS), to break down, as they become unable to reach a sufficient depth in the game tree. To address this challenge, we introduce Online Evolutionary Planning (OEP), which searches for combinations of actions to perform during a single turn, guided by a fitness function that evaluates the quality of the resulting state. We compare OEP to several MCTS variations that constrain the exploration to deal with the high branching factor in the turn-based multi-action game Hero Academy. While the constrained MCTS variations outperform the vanilla MCTS implementation by a large margin, OEP searches the space of plans more efficiently than any of the tested tree search methods, and its relative advantage grows as the number of actions per turn increases.
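To make the idea concrete, the sketch below shows a minimal online evolutionary planner over a single turn in Python. The game interface (`legal_actions`, `apply_action`, `evaluate`) is a hypothetical placeholder, and the crossover operator used in the paper is omitted for brevity, so this illustrates the general approach rather than the authors' implementation.

```python
import random
from typing import Any, Callable, List


def online_evolutionary_planning(
    state: Any,
    legal_actions: Callable[[Any], List[Any]],  # hypothetical game interface
    apply_action: Callable[[Any, Any], Any],    # forward model: (state, action) -> new state
    evaluate: Callable[[Any], float],           # fitness: heuristic value of a state
    actions_per_turn: int = 5,
    population_size: int = 100,
    generations: int = 50,
) -> List[Any]:
    """Evolve a full turn (a fixed-length sequence of actions) for the current player."""

    def random_genome() -> List[Any]:
        # Build a legal action sequence by rolling the state forward.
        genome, s = [], state
        for _ in range(actions_per_turn):
            acts = legal_actions(s)
            if not acts:
                break
            a = random.choice(acts)
            genome.append(a)
            s = apply_action(s, a)
        return genome

    def fitness(genome: List[Any]) -> float:
        # Simulate the action sequence from the current state and score the end state.
        s = state
        for a in genome:
            if a not in legal_actions(s):
                return float("-inf")  # penalise plans made illegal by mutation
            s = apply_action(s, a)
        return evaluate(s)

    def mutate(genome: List[Any]) -> List[Any]:
        # Replace one action, then rebuild the tail so the plan stays legal.
        if not genome:
            return random_genome()
        i = random.randrange(len(genome))
        s = state
        for a in genome[:i]:
            s = apply_action(s, a)
        child = genome[:i]
        for _ in range(actions_per_turn - i):
            acts = legal_actions(s)
            if not acts:
                break
            a = random.choice(acts)
            child.append(a)
            s = apply_action(s, a)
        return child

    # Evolve a population of candidate turns; keep the best half each generation.
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)
```

In practice the generation loop would be bounded by a per-turn time budget rather than a fixed generation count, since the planner runs online during the agent's turn.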
