
    Monte Carlo Tree Search with Heuristic Evaluations using Implicit Minimax Backups

    Monte Carlo Tree Search (MCTS) has improved the performance of game engines in domains such as Go, Hex, and general game playing. MCTS has been shown to outperform classic alpha-beta search in games where good heuristic evaluations are difficult to obtain. In recent years, incorporating ideas from traditional minimax search into MCTS has been shown to be advantageous in some domains, such as Lines of Action, Amazons, and Breakthrough. In this paper, we propose a new way to use heuristic evaluations to guide the MCTS search by storing the two sources of information, estimated win rates and heuristic evaluations, separately. Rather than using the heuristic evaluations to replace the playouts, our technique backs them up implicitly during the MCTS simulations. These minimax values are then used to guide future simulations. We show that using implicit minimax backups leads to stronger play performance in Kalah, Breakthrough, and Lines of Action. (Comment: 24 pages, 7 figures, 9 tables; expanded version of a paper presented at the IEEE Conference on Computational Intelligence and Games (CIG) 2014.)
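    The backup rule described here is concrete enough to sketch. Below is a minimal, illustrative Python sketch of the idea, not the authors' code: each node stores its playout statistics and, separately, a heuristic value that is backed up by minimax during the tree walk and only mixed in at selection time. The mixing weight ALPHA, the exploration constant, the [0, 1] value scale, and the heuristic stub are all assumptions for illustration.

        import math

        ALPHA = 0.3  # assumed mixing weight between playout win rate and minimax-backed heuristic

        def heuristic(state):
            """Placeholder domain evaluation in [0, 1] from the player to move."""
            return 0.5

        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children = []
                self.visits, self.win_sum = 0, 0.0
                self.tau = heuristic(state)  # implicit minimax value, seeded with the heuristic

            def q(self):  # estimated win rate from playouts
                return self.win_sum / self.visits if self.visits else 0.5

        def select(node, c=0.7):
            # The two information sources stay separate and are only mixed here.
            def score(ch):
                combined = (1 - ALPHA) * ch.q() + ALPHA * ch.tau
                return combined + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1))
            return max(node.children, key=score)

        def backup(path, playout_result):
            # Walk the simulated path leaf-to-root, updating both statistics.
            for node in reversed(path):
                node.visits += 1
                node.win_sum += playout_result
                if node.children:  # the implicit minimax step over heuristic values
                    node.tau = max(1.0 - ch.tau for ch in node.children)
                playout_result = 1.0 - playout_result  # flip to the opponent's perspective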

    MCTS-minimax hybrids with state evaluations

    Monte-Carlo Tree Search (MCTS) has been found to show weaker play than minimax-based search in some tactical game domains. In order to combine the tactical strength of minimax and the strategic strength of MCTS, MCTS-minimax hybrids have been proposed in prior work.

    Monte-Carlo tree search enhancements for one-player and two-player domains


    Epaminondas: Exploring Combat Tactics

    Epaminondas is a two-person, zero-sum strategy game that combines long-term strategic play with highly tactical move sequences. The game has two unique features that make it stand out from other games. The first is the creation of phalanxes: groups of pieces that move as a single unit, whose mobility and capturing power grow with the number of pieces they contain. The second differs from many other strategy games: when a player makes a crossing, a winning move in the game, the second player has an opportunity to respond. This paper presents strategies and heuristics used in a minimax Alpha-Beta agent that plays at a novice level. Furthermore, it defines the state-space and game-tree complexities of Epaminondas. Finally, a new version of MCTS is implemented that uses the Alpha-Beta heuristic function during node selection to guide MCTS toward more promising areas of the search tree. Additionally, in an effort to overcome the tactical weakness of MCTS, the MCTS player switches to Alpha-Beta search once the game reaches 15 turns. Results show that the added heuristic value and the switch to Alpha-Beta for endgame play positively impact the performance of MCTS, surpassing novice Alpha-Beta win ratios at certain time intervals.
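    One plausible reading of the hybrid agent described above, sketched in Python. The progressive-bias form of the heuristic term, the weights, and the helper names are assumptions; only the 15-turn switch point comes from the abstract.

        import math

        SWITCH_TURN = 15  # per the abstract: fall back to Alpha-Beta after 15 turns

        def evaluate(state):
            """Placeholder for the agent's Alpha-Beta heuristic evaluation, scaled to [0, 1]."""
            return 0.5

        def uct_with_heuristic(parent, child, c=1.0, w=0.5):
            exploit = child.win_sum / (child.visits + 1)
            explore = c * math.sqrt(math.log(parent.visits + 1) / (child.visits + 1))
            bias = w * evaluate(child.state) / (child.visits + 1)  # heuristic term decays with visits
            return exploit + explore + bias

        def choose_move(state, turn, mcts_search, alpha_beta_search):
            if turn >= SWITCH_TURN:  # tactical endgame: switch to Alpha-Beta outright
                return alpha_beta_search(state)
            return mcts_search(state, selection=uct_with_heuristic)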

    Application of the Monte-Carlo Tree Search to Multi-Action Turn-Based Games with Hidden Information

    Traditional search algorithms struggle when applied to complex multi-action turn-based games. The introduction of hidden information further increases domain complexity. The Monte-Carlo Tree Search (MCTS) algorithm has previously been applied to multi-action turn-based games, but not to multi-action turn-based games with hidden information. This thesis compares several MCTS extensions (Determinized/Perfect Information Monte Carlo, Multi-Observer Information Set MCTS, and Belief State MCTS) in TUBSTAP, an open-source multi-action turn-based game modified to include hidden information via fog of war.
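    Of the extensions compared, Determinized (Perfect Information) Monte Carlo is the simplest to sketch: sample "worlds" consistent with the player's observations, search each with plain MCTS, and vote. A minimal sketch, assuming hypothetical helpers sample_world and mcts_best_move; this is not the TUBSTAP API.

        from collections import Counter

        def determinized_move(info_set, sample_world, mcts_best_move,
                              n_worlds=20, iterations=1000):
            """Vote over plain-MCTS searches of sampled perfect-information worlds."""
            votes = Counter()
            for _ in range(n_worlds):
                world = sample_world(info_set)  # fill fogged tiles consistently with observations
                votes[mcts_best_move(world, iterations)] += 1
            return votes.most_common(1)[0][0]   # majority action across determinizations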

    Monte Carlo Tree Search and Minimax Combination – Application to Solving Problems in the Game of Go

    Monte Carlo Tree Search (MCTS) has been successfully applied to a variety of games. Its best-first algorithm enables implementations without evaluation functions. Combined with Upper Confidence bounds applied to Trees (UCT), MCTS has an advantage over traditional depth-limited minimax search with alpha-beta pruning in games with high branching factors such as Go. However, minimax search with alpha-beta pruning still surpasses MCTS in domains like Chess. Studies show that MCTS is worse than minimax search at detecting shallow traps, positions where the opponent can force a win within a few moves. Thus, minimax search performs better than MCTS in games like Chess, which can end abruptly (the king is captured). This thesis proposes a combination of MCTS and the minimax algorithm and evaluates its effectiveness at detecting shallow traps in Go problems.
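    The abstract does not spell out the combination, but a common way to pair the two is to screen tree nodes with a shallow minimax probe before UCT selection trusts them. A hedged sketch under that assumption; the depth-2 probe, the LOSS sentinel, and the alpha_beta_value helper are illustrative, not necessarily the thesis' design.

        LOSS = -1  # assumed minimax score for a proven loss for the player to move

        def is_shallow_trap(state, alpha_beta_value, depth=2):
            """True if a small minimax search proves the opponent wins within `depth` plies."""
            return alpha_beta_value(state, depth) == LOSS

        def trap_aware_children(node, alpha_beta_value):
            # Screen children with a shallow minimax probe before UCT selection sees them.
            safe = [ch for ch in node.children
                    if not is_shallow_trap(ch.state, alpha_beta_value)]
            return safe or node.children  # if every child is a trap, the position is simply lost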

    Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing

    General Video Game Playing (GVGP) is a field of Artificial Intelligence where agents play a variety of real-time video games that are unknown in advance. This limits the use of domain-specific heuristics. Monte-Carlo Tree Search (MCTS) is a search technique for game playing that does not rely on domain-specific knowledge. This paper discusses eight enhancements for MCTS in GVGP: Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations, and Deterministic Game Detection. Some of these are known from the existing literature and are extended or newly introduced in the context of GVGP; others are novel enhancements for MCTS. Most enhancements are shown to provide statistically significant increases in win percentage when applied individually. When combined, they increase the average win percentage over sixty different games from 31.0% to 48.4% relative to a vanilla MCTS implementation, approaching a level that is competitive with the best agents of the GVG-AI competition in 2015.
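    To make one of the listed enhancements concrete, here is a hedged sketch of Tree Reuse in a real-time setting: after acting, keep the subtree under the executed action instead of discarding the whole tree each frame. The decay factor and the dict-of-children representation are assumptions for illustration, not the paper's implementation.

        def reuse_subtree(root, action_taken, decay=0.6):
            """Carry the subtree under the executed action over to the next frame."""
            new_root = root.children.get(action_taken)  # children assumed keyed by action
            if new_root is None:
                return None            # no matching child: rebuild the tree from scratch
            new_root.parent = None
            stack = [new_root]
            while stack:               # decay stale statistics throughout the kept subtree
                node = stack.pop()
                node.visits *= decay
                node.win_sum *= decay
                stack.extend(node.children.values())
            return new_root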

    On Monte Carlo tree search and reinforcement learning

    Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, of which traditional MCTS is only one variant. We confirm that planning methods inspired by RL, in conjunction with online search, demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
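    The "RL semantics within tree search" idea can be illustrated by swapping the Monte Carlo average backup for a TD(lambda)-style update along the simulated path. A minimal sketch, with alpha, gamma, and lam as illustrative constants; the paper's actual algorithms may differ.

        def td_lambda_backup(path, reward, alpha=0.1, gamma=1.0, lam=0.8):
            """Back a TD(lambda)-style return up the simulated path, leaf to root."""
            ret = reward  # lambda-return, built backwards from the terminal reward
            for node in reversed(path):
                node.visits += 1
                node.value += alpha * (ret - node.value)  # move the estimate toward the return
                ret = gamma * (lam * ret + (1.0 - lam) * node.value)
            # With gamma = 1, lam = 1 and alpha = 1/visits this collapses to the plain
            # Monte Carlo average, i.e. vanilla MCTS appears as one member of the family.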

    The phenomenon of Decision Oscillation: a new consequence of pathology in Game Trees

    Random minimaxing studies the consequences of using a random number to score the leaf nodes of a full-width game tree and then computing the best move using the standard minimax procedure. Experiments in Chess showed that the strength of play increases as the depth of the lookahead is increased. Previous research by the authors provided a partial explanation of why random minimaxing can strengthen play by showing that, when one move dominates another, the dominating move is more likely to be chosen by minimax. This paper examines a special case of determining the move probability when domination does not occur. Specifically, we show that, under a uniform branching game tree model, the probability that one move is chosen rather than another depends not only on the branching factors of the moves involved, but also on whether the number of ply searched is odd or even. This is a new type of game tree pathology, in which the minimax procedure changes its mind as to which move is best, independently of the true value of the game, and oscillates between moves as the depth of lookahead alternates between odd and even.
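    The setup is easy to reproduce empirically. The following small experiment, under assumed branching factors of 3 versus 2 for the two candidate moves, estimates how often the bushier move is preferred at each lookahead depth; the parameters are arbitrary choices for illustration, not taken from the paper.

        import random

        def negamax_random(depth, branch):
            """Negamax over a uniform tree whose leaves are scored with random numbers."""
            if depth == 0:
                return random.random()
            return max(-negamax_random(depth - 1, branch) for _ in range(branch))

        def p_first_move_chosen(depth, b0=3, b1=2, trials=2000):
            """Estimate how often the move with branching b0 beats the one with b1."""
            wins = 0
            for _ in range(trials):
                v0 = -negamax_random(depth - 1, b0)
                v1 = -negamax_random(depth - 1, b1)
                wins += v0 > v1
            return wins / trials

        for depth in range(1, 7):  # watch the preference flip between odd and even depths
            print(depth, p_first_move_chosen(depth))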