18 research outputs found

    MCTS-minimax hybrids with state evaluations

    Monte-Carlo Tree Search (MCTS) has been found to show weaker play than minimax-based search in some tactical game domains. In order to combine the tactical strength of minimax and the strategic strength of MCTS, MCTS-minimax hybrids have been proposed in prior work.
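
    The abstract above names the combination of minimax's tactical strength with MCTS's strategic strength. Below is a minimal, hedged sketch of one common hybridization from this line of work, in which a shallow depth-limited minimax probe replaces the random MCTS playout; the state interface (legal_moves, apply, is_terminal, evaluate) is an illustrative assumption, not the paper's API.

        def minimax(state, depth, maximizing):
            """Depth-limited minimax; values are taken from the root player's point of view."""
            if depth == 0 or state.is_terminal():
                return state.evaluate()          # static evaluation, root player's view
            values = [minimax(state.apply(m), depth - 1, not maximizing)
                      for m in state.legal_moves()]
            return max(values) if maximizing else min(values)

        def hybrid_rollout(state, probe_depth=2):
            # Instead of a random playout to the end of the game, back up a shallow
            # minimax value, trading playout noise for tactical accuracy.
            return minimax(state, probe_depth, maximizing=True)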

    Monte-Carlo tree search enhancements for one-player and two-player domains


    A classification of Monte-Carlo tree search improvement techniques oriented to the specifics of the method

    Based on information from various sources about the Monte-Carlo tree search (MCTS) method, the article proposes an updated classification structure and a first version of a classification of techniques for improving the basic MCTS implementation. At present, this version covers only purely theoretical techniques for improving the steps of the general MCTS schema that are oriented to the specifics of the method. The proposed classification is intended to be extended in future work and used to systematize knowledge about the MCTS method and to reveal new possibilities for its improvement.
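
    Since the classification is organized around the steps of the general MCTS schema, a minimal sketch of that schema (selection, expansion, simulation, backpropagation with UCT) may help fix terminology; the game interface (legal_moves, apply, is_terminal, result) is assumed for illustration.

        import math, random

        class Node:
            def __init__(self, state, parent=None, move=None):
                self.state, self.parent, self.move = state, parent, move
                self.children, self.visits, self.value = [], 0, 0.0
                self.untried = list(state.legal_moves())

        def uct(node, c=1.414):
            return (node.value / node.visits +
                    c * math.sqrt(math.log(node.parent.visits) / node.visits))

        def mcts(root_state, iterations=1000):
            root = Node(root_state)
            for _ in range(iterations):
                node = root
                # 1. Selection: descend while the node is fully expanded.
                while not node.untried and node.children:
                    node = max(node.children, key=uct)
                # 2. Expansion: add one child for an untried move.
                if node.untried:
                    move = node.untried.pop()
                    node = Node(node.state.apply(move), parent=node, move=move)
                    node.parent.children.append(node)
                # 3. Simulation: random playout to a terminal state.
                state = node.state
                while not state.is_terminal():
                    state = state.apply(random.choice(list(state.legal_moves())))
                reward = state.result()          # e.g. +1 / -1 for the root player
                # 4. Backpropagation: update statistics along the selected path.
                while node is not None:
                    node.visits += 1
                    node.value += reward
                    node = node.parent
            return max(root.children, key=lambda n: n.visits).move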

    Action Guidance with MCTS for Deep Reinforcement Learning

    Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework in which even a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. Comment: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'19). arXiv admin note: substantial text overlap with arXiv:1904.05759, arXiv:1812.0004
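
    One simple way to realize the kind of action guidance described above is to let the acting worker occasionally execute the demonstrator's action (e.g. MCTS with only a handful of rollouts) instead of sampling from the learned policy, flagging those frames so an auxiliary imitation term can be added to the usual actor-critic loss; the names below (policy, demonstrator, guidance_prob) are illustrative assumptions, not the paper's exact framework.

        import random

        def act_with_guidance(state, policy, demonstrator, guidance_prob):
            """Pick an action; 'guided' marks frames eligible for an auxiliary imitation loss."""
            if random.random() < guidance_prob:      # typically decayed over training
                return demonstrator(state), True     # non-expert planner, e.g. shallow MCTS
            return policy.sample(state), False       # usual on-policy action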

    Hybrid Minimax-MCTS and Difficulty Adjustment for General Game Playing

    Board games are a great source of entertainment for all ages, as they create a competitive and engaging environment, as well as stimulating learning and strategic thinking. It is common for digital versions of board games, as for any other type of digital game, to offer the option to select the difficulty of the game. This is usually done by customizing the search parameters of the AI algorithm. However, this approach cannot be extended to General Game Playing agents, as different games might require a different parametrization for each difficulty level. In this paper, we present a general approach to implementing an artificial intelligence opponent with difficulty levels for zero-sum games, together with a proposed Minimax-MCTS hybrid algorithm, which combines the minimax search process with GGP aspects of MCTS. This approach was tested in our mobile application LoBoGames, an extensible board games platform that is intended to have a broad catalog of games, with an emphasis on accessibility: the platform is friendly to visually-impaired users and is compatible with more than 92% of Android devices. The tests in this work indicate that both the hybrid Minimax-MCTS and the new difficulty adjustment system are promising GGP approaches that could be expanded in future work.
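
    For contrast, the conventional per-game difficulty approach mentioned in the abstract simply customizes the search parameters of the AI algorithm per level, which is exactly what does not transfer cleanly to General Game Playing; the numbers below are illustrative, not taken from the paper.

        # Illustrative per-game difficulty presets: each level only changes the
        # search budget handed to the chosen algorithm.
        DIFFICULTY_PRESETS = {
            "easy":   {"mcts_iterations": 100,   "minimax_depth": 1},
            "medium": {"mcts_iterations": 1000,  "minimax_depth": 2},
            "hard":   {"mcts_iterations": 10000, "minimax_depth": 4},
        }

        def configure_agent(level):
            return DIFFICULTY_PRESETS[level]     # forwarded to the search routine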

    Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing

    General Video Game Playing (GVGP) is a field of Artificial Intelligence where agents play a variety of real-time video games that are unknown in advance. This limits the use of domain-specific heuristics. Monte-Carlo Tree Search (MCTS) is a search technique for game playing that does not rely on domain-specific knowledge. This paper discusses eight enhancements for MCTS in GVGP: Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations, and Deterministic Game Detection. Some of these are known from existing literature and are extended or introduced in the context of GVGP, while others are novel enhancements for MCTS. Most enhancements are shown to provide statistically significant increases in win percentage when applied individually. When combined, they increase the average win percentage over sixty different games from 31.0% to 48.4% compared to a vanilla MCTS implementation, approaching a level that is competitive with the best agents of the GVG-AI competition in 2015.
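
    As an example of the enhancements listed above, a hedged sketch of Tree Reuse: instead of discarding the search tree after each real-time decision, the subtree under the child matching the move actually played becomes the next root; the Node fields and the optional visit decay are illustrative assumptions.

        def reuse_tree(old_root, played_move, decay=1.0):
            for child in old_root.children:
                if child.move == played_move:
                    child.parent = None              # detach the kept subtree
                    if decay < 1.0:                  # optionally soften stale statistics
                        for node in iter_subtree(child):
                            node.visits *= decay
                            node.value *= decay
                    return child
            return None                              # move not in the tree: start fresh

        def iter_subtree(node):
            yield node
            for child in node.children:
                yield from iter_subtree(child)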

    Application of the Monte-Carlo Tree Search to Multi-Action Turn-Based Games with Hidden Information

    Traditional search algorithms struggle when applied to complex multi-action turn-based games. The introduction of hidden information further increases domain complexity. The Monte-Carlo Tree Search (MCTS) algorithm has previously been applied to multi-action turn-based games, but not to multi-action turn-based games with hidden information. This thesis compares several MCTS extensions (Determinized/Perfect Information Monte Carlo, Multi-Observer Information Set MCTS, and Belief State MCTS) in TUBSTAP, an open-source multi-action turn-based game modified to include hidden information via fog-of-war.
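
    Of the compared extensions, Determinized/Perfect Information Monte Carlo is the simplest to sketch: sample plausible completions of the hidden information, run ordinary perfect-information MCTS on each determinization, and vote over the suggested moves; sample_determinization and mcts_search are assumed helper functions, not the thesis's code.

        from collections import Counter

        def pimc_decide(observation, sample_determinization, mcts_search,
                        n_determinizations=20, iterations_each=500):
            votes = Counter()
            for _ in range(n_determinizations):
                full_state = sample_determinization(observation)   # fill in the fog-of-war
                votes[mcts_search(full_state, iterations_each)] += 1
            return votes.most_common(1)[0][0]                      # most frequently suggested move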

    The phenomenon of Decision Oscillation: a new consequence of pathology in Game Trees

    Random minimaxing studies the consequences of using a random number for scoring the leaf nodes of a full-width game tree and then computing the best move using the standard minimax procedure. Experiments in Chess showed that the strength of play increases as the depth of the lookahead is increased. Previous research by the authors provided a partial explanation of why random minimaxing can strengthen play by showing that, when one move dominates another move, the dominating move is more likely to be chosen by minimax. This paper examines a special case of determining the move probability when domination does not occur. Specifically, we show that, under a uniform branching game tree model, the probability that one move is chosen rather than another depends not only on the branching factors of the moves involved, but also on whether the number of ply searched is odd or even. This is a new type of game tree pathology, where the minimax procedure will change its mind as to which move is best, independently of the true value of the game, and oscillate between moves as the depth of lookahead alternates between odd and even.
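
    A small experiment in the spirit of random minimaxing illustrates the effect: leaves of a uniform branching tree receive i.i.d. random scores, values are backed up by standard minimax, and we estimate how often the first root move is preferred at a given lookahead depth; the branching factors and trial count are illustrative assumptions.

        import random

        def random_minimax(depth, branching, maximizing):
            if depth == 0:
                return random.random()               # random leaf score
            values = (random_minimax(depth - 1, branching, not maximizing)
                      for _ in range(branching))
            return max(values) if maximizing else min(values)

        def first_move_preference(depth, branchings=(2, 3), trials=2000):
            """Estimate P(the move with branching factor branchings[0] is chosen over the other)."""
            wins = 0
            for _ in range(trials):
                v = [random_minimax(depth - 1, b, maximizing=False) for b in branchings]
                wins += v[0] > v[1]
            return wins / trials

        # Comparing first_move_preference(d) for odd versus even d exhibits the
        # odd/even oscillation described in the abstract.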

    Learning to play two-player perfect-information games without knowledge

    In this paper, several techniques for learning game state evaluation functions by reinforcement are proposed. The first is to learn the values of the game tree instead of restricting oneself to the value of the root. The second is to replace the classic gain of a game (+1 / −1) with a heuristic favoring quick wins and slow defeats. The third corrects some evaluation functions by taking into account the resolution of states. The fourth is a new action selection distribution. The fifth is a modification of the minimax with unbounded depth that extends the best sequences of actions to the terminal states. In addition, we propose another variant of the unbounded minimax, which plays the safest action instead of the best action. The experiments conducted suggest that this improves the level of play during confrontations. Finally, we apply these different techniques to design a program-player for the game of Hex (size 11) that reaches the level of Mohex 2.0 with reinforcement learning from self-play without knowledge.
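
    As an illustration of the second technique (replacing the classic +1 / −1 gain with a heuristic favoring quick wins and slow defeats), a hedged sketch of one such terminal-value shaping; the exact scaling is an illustrative assumption, not the paper's formula.

        def shaped_terminal_value(won, moves_played, max_moves):
            length_ratio = moves_played / max_moves      # 0 = shortest game, 1 = longest
            if won:
                return 1.0 - 0.5 * length_ratio          # quicker wins score closer to +1
            return -1.0 + 0.5 * length_ratio             # slower defeats score closer to -0.5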