
    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
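
    The survey organizes its material around the four phases of the core algorithm: selection, expansion, simulation, and backpropagation. A minimal, generic UCT sketch of those phases follows; the game interface it assumes (legal_moves, apply, is_terminal, reward) is a hypothetical placeholder for illustration, not code from the paper.

```python
import math
import random

class Node:
    """One node of the search tree; `value` accumulates simulation rewards."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())  # moves not yet expanded
        self.visits = 0
        self.value = 0.0

def uct_select(node, c=1.414):
    # Selection: pick the child maximizing the UCB1 score.
    return max(node.children, key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one new child, if any untried move remains.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.apply(move), node, move))
            node = node.children[-1]
        # 3. Simulation: uniformly random playout to a terminal state.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        reward = state.reward()
        # 4. Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited root move (a common robust choice).
    return max(root.children, key=lambda ch: ch.visits).move
```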

    The effect of simulation bias on action selection in Monte Carlo Tree Search

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science, August 2016. Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. It combines a traditional tree-search approach with Monte Carlo simulations, using the outcomes of these simulations (also known as playouts or rollouts) to evaluate states in a look-ahead tree. Because MCTS does not require an evaluation function, it is particularly well-suited to the game of Go — seen by many as chess's successor as a grand challenge of artificial intelligence — with MCTS-based agents recently able to achieve expert-level play on 19×19 boards. Its domain-independent nature has also made it a focus in a variety of other fields, such as Bayesian reinforcement learning and general game playing. Despite the vast amount of research into MCTS, the dynamics of the algorithm are not yet fully understood. In particular, the effect of using knowledge-heavy or biased simulations in MCTS remains unknown, with interesting results indicating that better-informed rollouts do not necessarily produce stronger agents. This research supports the notion that MCTS is well-suited to a class of domains possessing a smoothness property. In these domains, biased rollouts are more likely to produce strong agents. Conversely, any error due to incorrect bias is compounded in non-smooth domains, particularly for low-variance simulations. This is demonstrated empirically in a number of single-agent domains.
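
    The "biased simulations" under study replace the uniformly random default policy with one that prefers moves a domain heuristic rates highly. A minimal sketch of one such policy appears below, sampling moves from a softmax over heuristic scores; the heuristic function and temperature parameter are illustrative assumptions, not the dissertation's actual bias.

```python
import math
import random

def biased_rollout_move(state, heuristic, temperature=1.0):
    """Sample a rollout move with probability proportional to
    exp(heuristic / temperature). `heuristic` is a hypothetical
    domain-supplied scoring function."""
    moves = state.legal_moves()
    scores = [heuristic(state, m) / temperature for m in moves]
    mx = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(s - mx) for s in scores]
    return random.choices(moves, weights=weights, k=1)[0]
```

    Under this framing, lowering the temperature yields lower-variance, more deterministic rollouts, which is exactly the regime where the dissertation finds that errors from an incorrect bias compound in non-smooth domains.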

    Traditional Wisdom and Monte Carlo Tree Search Face-to-Face in the Card Game Scopone

    We present the design of a competitive artificial intelligence for Scopone, a popular Italian card game. We compare rule-based players using the most established strategies (one for beginners and two for advanced players) against players using Monte Carlo Tree Search (MCTS) and Information Set Monte Carlo Tree Search (ISMCTS) with different reward functions and simulation strategies. MCTS requires complete information about the game state and thus implements a cheating player, while ISMCTS can deal with incomplete information and thus implements a fair player. Our results show that, as expected, the cheating MCTS outperforms all the other strategies; ISMCTS is stronger than all the rule-based players, which implement the best-known and most advanced strategies, and it also turns out to be a challenging opponent for human players. Preprint; accepted for publication in IEEE Transactions on Games.
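
    The fair/cheating distinction above comes down to one step: ISMCTS cannot search a single known game state, so each iteration first samples a determinization, a fully observable world consistent with everything the searching player has observed. A sketch of that step follows; the info_set interface (hidden_cards, deal) and the iteration callback are hypothetical names for a generic card game, not the paper's Scopone code.

```python
import random

def sample_determinization(info_set):
    """Shuffle the unseen cards into the hidden hands, keeping the
    observer's own hand and all public information fixed."""
    unseen = list(info_set.hidden_cards)
    random.shuffle(unseen)
    return info_set.deal(unseen)

def ismcts(info_set, iterations, run_one_mcts_iteration, tree):
    for _ in range(iterations):
        world = sample_determinization(info_set)  # fair: no peeking
        run_one_mcts_iteration(tree, world)       # standard select/expand/...
    return tree.best_move()
```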

    On Monte Carlo Tree Search and Reinforcement Learning

    Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not yet been thoroughly studied. In this paper we re-examine this close relation between the two fields in depth; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
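
    One concrete instance of adapting RL semantics within tree search is to replace MCTS's Monte Carlo averaging with a bootstrapped, TD-style update during backpropagation. The sketch below illustrates the idea; the update rule and constants are an assumption for illustration, not the paper's exact algorithm.

```python
def backpropagate_td(path, reward, alpha=0.1, gamma=1.0, lam=0.9):
    """`path` lists the nodes visited this iteration, root first, leaf last.
    Here `node.value` is a running value estimate (not a reward sum, as in
    plain MCTS), moved toward a lambda-return-like target."""
    target = reward
    for node in reversed(path):  # from the leaf back up to the root
        node.visits += 1
        node.value += alpha * (target - node.value)  # incremental TD update
        # Blend the bootstrapped estimate into the target for the parent.
        target = gamma * ((1 - lam) * node.value + lam * target)
```

    With alpha set to 1/visits, gamma = 1, and lam = 1, this update reduces to the running average of plain MCTS, which is one way to see traditional MCTS as a single variant within a larger RL-flavoured family.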

    Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas

    Games provide fertile research domains for algorithmic research. Often, game research helps solve real-world problems through the testing and refinement of search algorithms in game domains. Other times, game research finds limits for certain algorithms. For example, the game of Go proved intractable for the Min-Max with Alpha-Beta pruning algorithm, leading to the popularity of Monte Carlo based search algorithms. Although effective in Go, and in game domains once ruled by Alpha-Beta such as Lines of Action, Monte Carlo methods appear to have limits too, as they fall short in tactical domains such as Hex and Chess. In a continuation of this type of research, two new games, Crossings and Epaminondas, are presented, analyzed, and used to test two Monte Carlo based algorithms: Upper Confidence Bounds applied to Trees (UCT) and Heuristic Guided UCT (HUCT). Results indicate that heuristic knowledge can positively affect UCT's performance in the lower-complexity domain of Crossings. However, both agents perform worse in the higher-complexity domain of Epaminondas. This identifies Epaminondas as another domain that poses difficulties for Monte Carlo agents.
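
    HUCT's core idea is to fold a heuristic evaluation into UCT's selection formula. One common way to blend such a term, sketched below as an assumption about the general technique rather than the paper's exact definition, is to let the heuristic's influence decay as real simulation statistics accumulate.

```python
import math

def huct_score(child, parent_visits, heuristic_value, c=1.414, w=10.0):
    """UCB1 plus a heuristic bias that fades with visits. `heuristic_value`
    and the weight `w` are hypothetical, domain-supplied quantities."""
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    bias = w * heuristic_value / (child.visits + 1)  # fades as visits grow
    return exploit + explore + bias
```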

    Exploring search space trees using an adapted version of Monte Carlo tree search for combinatorial optimization problems

    In this article, a novel approach to solve combinatorial optimization problems is proposed. This approach makes use of a heuristic algorithm to explore the search space tree of a problem instance. The algorithm is based on Monte Carlo tree search, a popular algorithm in game playing that is used to explore game trees. By leveraging the combinatorial structure of a problem, several enhancements to the algorithm are proposed. These enhancements aim to efficiently explore the search space tree by pruning subtrees, using a heuristic simulation policy, reducing the domains of variables by eliminating dominated value assignments, and using a beam width. They are demonstrated for two specific combinatorial optimization problems: the quay crane scheduling problem with non-crossing constraints and the 0-1 knapsack problem. Computational results show that the algorithm achieves promising results for both problems, and eight new best solutions are found for a benchmark set of instances of the former problem. These results indicate that the algorithm is competitive with the state of the art. Apart from this, the results also show evidence that the algorithm is able to learn to correct incorrect choices made by constructive heuristics.
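
    Two of the listed enhancements, pruning subtrees and using a beam width, are easy to make concrete for the 0-1 knapsack problem, as in the sketch below. The fractional-relaxation bound is a standard choice assumed here for illustration; the article defines its own enhancements.

```python
def upper_bound(value, capacity, items):
    """Optimistic completion of a partial solution via the greedy fractional
    relaxation; `items` holds the remaining (value, weight) pairs."""
    bound = value
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= capacity:
            bound, capacity = bound + v, capacity - w
        else:
            return bound + v * capacity / w  # take a fraction of this item
    return bound

def prune_and_beam(children, incumbent_value, beam_width):
    """Drop subtrees whose optimistic bound cannot beat the best solution
    found so far, then keep only the `beam_width` most promising children.
    Each child is assumed to carry a precomputed `bound` attribute."""
    viable = [c for c in children if c.bound > incumbent_value]
    return sorted(viable, key=lambda c: c.bound, reverse=True)[:beam_width]
```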

    The 2016 Two-Player GVGAI Competition

    This paper showcases the setting and results of the first Two-Player General Video Game AI competition, which ran in 2016 at the IEEE World Congress on Computational Intelligence and the IEEE Conference on Computational Intelligence and Games. The challenges for general game AI agents are expanded in this track from the single-player version, looking at direct player interaction in both competitive and cooperative environments of various types and degrees of difficulty. The focus is on the agents not only handling multiple problems, but also having to account for another intelligent entity in the game, which is expected to work towards its own goal (winning the game). This other player will possibly interact with the first agent in a more engaging way than the environment or any non-player character would. The top competition entries are analyzed in detail and the performance of all agents is compared across the four sets of games. The results validate the competition system in assessing generality, as well as showing Monte Carlo Tree Search continuing to dominate by winning the overall Championship. However, this approach is closely followed by Rolling Horizon Evolutionary Algorithms, employed by the winner of the second leg of the contest.

    Fast Evolutionary Adaptation for Monte Carlo Tree Search

    This paper describes a new adaptive Monte Carlo Tree Search (MCTS) algorithm that uses evolution to rapidly optimise its performance. An evolutionary algorithm is used as a source of control parameters to modify the behaviour of each iteration (i.e. each simulation or roll-out) of the MCTS algorithm; in this paper we largely restrict this to modifying the behaviour of the random default policy, though it can also be applied to modify the tree policy.
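
    A minimal sketch of this coupling is given below: a simple (1+1)-style evolutionary loop supplies a weight vector to the random default policy, and each rollout's outcome serves as that candidate's (noisy) fitness signal. The softmax rollout, Gaussian mutation, and feature interface are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def rollout_with_weights(state, weights, features):
    """Default policy: sample moves via a softmax over weighted move features."""
    while not state.is_terminal():
        moves = state.legal_moves()
        scores = [sum(w * f for w, f in zip(weights, features(state, m)))
                  for m in moves]
        mx = max(scores)  # subtract the max for numerical stability
        probs = [math.exp(s - mx) for s in scores]
        state = state.apply(random.choices(moves, weights=probs, k=1)[0])
    return state.reward()

def evolve_rollout_policy(root_state, features, dim, iterations=1000, sigma=0.1):
    """(1+1)-style loop: mutate the incumbent weights, score the candidate
    with one rollout, and keep it if it does at least as well."""
    best_w, best_fit = [0.0] * dim, -float("inf")
    for _ in range(iterations):
        cand = [w + random.gauss(0.0, sigma) for w in best_w]
        fit = rollout_with_weights(root_state, cand, features)
        if fit >= best_fit:
            best_w, best_fit = cand, fit
    return best_w
```

    In the adaptive MCTS setting the paper describes, each MCTS simulation would play the role of one fitness evaluation here, so the policy parameters can improve at no extra simulation cost.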

    Random Search Algorithms

    In this project we designed and developed improvements for the random search algorithm UCT with a focus on improving performance with directed acyclic graphs and groupings. We then performed experiments in order to quantify performance gains with both artificial game trees and computer Go. Finally, we analyzed the outcomes of the experiments and presented our findings. Overall, this project represents original work in the area of random search algorithms on directed acyclic graphs and provides several opportunities for further research.
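
    The central adaptation of UCT for directed acyclic graphs is to share nodes through a transposition table keyed by a canonical state hash, so that positions reached by different move orders pool their statistics rather than duplicating subtrees. A minimal sketch, assuming a hypothetical state.key() hashing interface, is shown below.

```python
def get_node(table, state, make_node):
    """Look up (or create) the single shared node for this state. Distinct
    move orders reaching the same state converge on the same statistics."""
    key = state.key()
    if key not in table:
        table[key] = make_node(state)
    return table[key]
```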

    Framework for Monte Carlo Tree Search-related strategies in Competitive Card Based Games

    In recent years, Monte Carlo Tree Search (MCTS) has been successfully applied as a new artificial intelligence strategy in game playing, with excellent results yielded in the popular board game Go, real-time strategy games, and card games. The MCTS algorithm was developed as an alternative to established adversarial search algorithms, i.e., Minimax (MM) and knowledge-based approaches. MCTS can achieve good results with nothing more than information about the game rules and can achieve breakthroughs in domains of high complexity, whereas with traditional AI approaches developers might struggle to find heuristics through expertise in each specific game. Every algorithm has its caveats, and MCTS is no exception, as stated by Browne et al.: "Although basic implementations of MCTS provide effective play for some domains, results can be weak if the basic algorithm is not enhanced. (...) There is currently no better way than a manual, empirical study of the effect of enhancements to obtain acceptable performance in a particular domain." Thus, the first objective of this dissertation is to research various state-of-the-art MCTS enhancements in the context of card games and then apply, experiment with, and fine-tune them in order to achieve a highly competitive implementation, validated and tested against other algorithms such as MM. In trick-taking card games such as Sueca and Bisca, where players take turns placing cards face up on the table, there are similarities that allow the development of an MCTS-based implementation featuring effective enhancements for multiple game variations, since they are nondeterministic imperfect-information problems. Good results have been achieved in this domain with the algorithm, in games such as Spades and Hearts. The end result aims toward a framework that offers a competitive AI implementation for at least three different card games (achieved through analysis and validation against other approaches), allowing developers to integrate their own card games and benefit from a working AI, and also serving as a testing ground to rank different agent implementations.
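
    The kind of integration surface such a framework might expose is sketched below: a small trick-taking game interface that each card game (Sueca, Bisca, Hearts, and so on) would implement so a single ISMCTS agent can drive them all. The method names are hypothetical, not the dissertation's actual API.

```python
from abc import ABC, abstractmethod

class TrickTakingGame(ABC):
    """Minimal contract a card game implements to plug into the framework."""

    @abstractmethod
    def legal_moves(self, player): ...

    @abstractmethod
    def apply(self, move): ...          # returns the successor game state

    @abstractmethod
    def is_terminal(self): ...

    @abstractmethod
    def reward(self, player): ...       # score from `player`'s perspective

    @abstractmethod
    def hidden_cards(self, observer): ...  # needed by ISMCTS determinization
```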