
    Fast Approximate Max-n Monte Carlo Tree Search for Ms Pac-Man

    We present an application of Monte Carlo tree search (MCTS) to the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires near real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five-player max-n tree representation of the game with limited tree-search depth. We performed a number of experiments using both the MCTS game agents (for Ms Pac-Man and the ghosts) and agents used in previous work (for the ghosts). Performance-wise, our approach achieves excellent scores, outperforming previous non-MCTS opponent approaches to the game by up to two orders of magnitude. © 2011 IEEE
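
    The abstract gives no implementation details, but as a rough sketch of the underlying idea, a depth-limited max-n search with Monte Carlo (random-playout) leaf evaluation might look as follows. This is not the authors' Ms Pac-Man agent, and the game-state interface (legal_moves, apply, is_terminal, scores, current_player) is assumed purely for illustration.

    import random

    # Hedged sketch, not the paper's implementation: depth-limited max-n search
    # where each leaf is scored by averaging random playouts, one simple way to
    # combine Monte Carlo sampling with a five-player max-n tree. The state
    # interface used below is hypothetical.

    def rollout_value(state, num_players, playouts=10, max_steps=50):
        """Estimate a per-player score vector by averaging random playouts."""
        totals = [0.0] * num_players
        for _ in range(playouts):
            s = state
            for _ in range(max_steps):
                if s.is_terminal():
                    break
                s = s.apply(random.choice(s.legal_moves()))
            for p, v in enumerate(s.scores()):   # heuristic score per player
                totals[p] += v
        return [t / playouts for t in totals]

    def maxn(state, depth, num_players):
        """Max-n: each player maximizes its own component of the score vector."""
        if depth == 0 or state.is_terminal():
            return rollout_value(state, num_players), None
        player = state.current_player()
        best_vec, best_move = None, None
        for move in state.legal_moves():
            vec, _ = maxn(state.apply(move), depth - 1, num_players)
            if best_vec is None or vec[player] > best_vec[player]:
                best_vec, best_move = vec, move
        return best_vec, best_move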

    Machine as One Player in Indian Cowry Board Game: Strategies and Analysis of Randomness Model for Playing

    The Cowry game is an ancient board game that originated in India. It is a game of chance and strategy in which players move their pieces along a specified path to a final location according to the roll of special dice (cowry shells). The game involves decision-making under uncertainty and fuzziness with more than two parties; hence it can serve as an excellent example for applying methods and concepts for automating resource management and real-time strategic decisions. This research aims to evaluate the complexity of the Cowry game and to propose heuristics and strategies that could form the basis of an adaptive artificial player that maximizes its chances of winning. The main objective of considering the machine as one player in the Cowry game is to automate different strategies and to develop a machine player capable of real-time decision-making in interaction with live opponents. In this paper, we formulate several playing strategies and provide theoretical measures for comparing them. The main focus of this work, however, is the analysis of playing randomly, which involves no decision-making or intelligence. This approach lets us concentrate entirely on designing the game interface and validating the correctness of our implementation. Furthermore, the enhanced knowledge base resulting from analyzing the performance of the random strategy can be used to understand the scenarios that must be handled while evolving other types of strategies.
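
    For illustration only, a purely random player of the kind analyzed in this abstract can be simulated in a few lines; the shell count, step values, track length, and win condition below are assumptions for the sketch, not the paper's exact rules.

    import random

    # Illustrative sketch (rules simplified, not the paper's exact game): pieces
    # race along a linear track, moves are drawn by throwing cowry shells, and
    # the player applies no intelligence, choosing an unfinished piece at random.

    NUM_SHELLS = 4
    TRACK_LENGTH = 25

    def throw_cowries():
        """Steps = number of shells landing mouth-up; a throw of 0 is treated as
        NUM_SHELLS + 1 here, purely as an illustrative convention."""
        up = sum(random.random() < 0.5 for _ in range(NUM_SHELLS))
        return up if up > 0 else NUM_SHELLS + 1

    def random_strategy(pieces):
        """No decision-making: pick any piece that has not finished yet."""
        movable = [i for i, pos in enumerate(pieces) if pos < TRACK_LENGTH]
        return random.choice(movable) if movable else None

    def play_random_game(num_players=3, pieces_per_player=2, max_turns=1000):
        positions = [[0] * pieces_per_player for _ in range(num_players)]
        for turn in range(max_turns):
            player = turn % num_players
            piece = random_strategy(positions[player])
            if piece is not None:
                positions[player][piece] = min(
                    positions[player][piece] + throw_cowries(), TRACK_LENGTH)
            if all(pos == TRACK_LENGTH for pos in positions[player]):
                return player          # first player to bring all pieces home
        return None

    if __name__ == "__main__":
        wins = [0] * 3
        for _ in range(1000):
            winner = play_random_game()
            if winner is not None:
                wins[winner] += 1
        print("win counts under purely random play:", wins)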

    Novel AI strategies for Multi-Player games at intermediate board states

    This paper considers the problem of designing efficient AI strategies for playing games at intermediate board states. While general heuristic-based methods are applicable to all board states, the search required in an alpha-beta scheme depends heavily on the move ordering. Determining the best move ordering to use in the search is particularly interesting and complex in an intermediate board state, compared to the situation where the game starts from its initial board state, because we do not assume the availability of “Opening book” moves. Furthermore, unlike the two-player scenario that is traditionally analyzed, we investigate the more complex scenario in which the game is a multi-player game, such as Chinese Checkers. One recent approach, Best-Reply Search (BRS), resolves this by grouping opponents, which, although successful, incurs a very large branching factor. To address this, the authors of this work earlier proposed the Threat-ADS move-ordering heuristic, which augments BRS by invoking techniques from the field of Adaptive Data Structures (ADSs) to order the moves. Indeed, the Threat-ADS performs well under a variety of parameters when the game is analyzed at or near its initial state. This work demonstrates that the Threat-ADS also serves as a solution to the unresolved question of finding a viable approach in the far more variable intermediate game states. Our present results confirm that the Threat-ADS performs well in these intermediate states for various games; surprisingly, in some cases it in fact performs better than at the start of the game.
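
    As a hedged illustration of the adaptive-data-structure component only (not the authors' code), the sketch below uses a move-to-front list to rank opponents by how recently they produced a threat, and then generates opponent moves in that order so the most threatening opponent is searched first.

    # Illustrative sketch of the ADS idea behind Threat-ADS, not the original code.

    class MoveToFrontList:
        """Minimal adaptive data structure: report(x) moves x to the front, so
        recently threatening opponents drift towards the head of the list."""

        def __init__(self, items):
            self.items = list(items)

        def report(self, item):
            self.items.remove(item)
            self.items.insert(0, item)

        def order(self):
            return list(self.items)

    def order_opponent_moves(moves_by_opponent, ads):
        """Concatenate opponent move lists following the ADS ranking, so moves of
        the most threatening opponent are searched first."""
        ordered = []
        for opponent in ads.order():
            ordered.extend(moves_by_opponent.get(opponent, []))
        return ordered

    if __name__ == "__main__":
        ads = MoveToFrontList(["opp1", "opp2", "opp3"])
        ads.report("opp3")                        # opp3 just threatened a capture
        moves = {"opp1": ["a"], "opp2": ["b"], "opp3": ["c", "d"]}
        print(order_opponent_moves(moves, ads))   # ['c', 'd', 'a', 'b']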

    Opponent-Pruning Paranoid Search

    This paper proposes a new search algorithm for fully observable, deterministic multiplayer games: Opponent-Pruning Paranoid Search (OPPS). OPPS is a generalization of a state-of-the-art technique for this class of games, Best-Reply Search (BRS+). Just like BRS+, it allows for Alpha-Beta-style pruning through the paranoid assumption, and it both deepens the tree and reduces the pessimism of the paranoid assumption by pruning opponent moves. However, it introduces
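
    As context for the abstract above, a minimal sketch of the underlying paranoid reduction that BRS+ and OPPS build on (not OPPS itself) might look as follows; the game-state interface is assumed for illustration.

    # Sketch of plain paranoid alpha-beta search, assuming a state interface with
    # legal_moves, apply, is_terminal, evaluate and current_player. All opponents
    # are treated as one coalition minimizing the root player's evaluation, which
    # reduces the multiplayer game to a two-player zero-sum game and enables
    # alpha-beta pruning. OPPS additionally prunes opponent moves; that part is
    # not shown here.

    def paranoid(state, depth, alpha, beta, root_player):
        if depth == 0 or state.is_terminal():
            return state.evaluate(root_player)    # value from the root's viewpoint
        maximizing = state.current_player() == root_player
        for move in state.legal_moves():
            value = paranoid(state.apply(move), depth - 1, alpha, beta, root_player)
            if maximizing:
                alpha = max(alpha, value)
            else:
                beta = min(beta, value)
            if alpha >= beta:                     # paranoid assumption allows cutoffs
                break
        return alpha if maximizing else beta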

    Guiding multiplayer MCTS by focusing on yourself

    In n-player sequential-move games, the second root-player move appears at tree depth n + 1. Depending on n and the available time, tree-search techniques can struggle to expand the game tree deeply enough to find multi-move plans for the root player, which is often more important for strategic play than considering every possible opponent move in between. The minimax-based Paranoid search and BRS+ algorithms currently achieve state-of-the-art performance, especially at short time settings, by using a generally incorrect opponent model.
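
    The depth argument can be made concrete with a small back-of-the-envelope calculation; the branching factor below is an arbitrary illustrative value.

    # Illustration of the depth argument in the abstract: the k-th root-player move
    # in an n-player sequential-move game sits at depth (k - 1) * n + 1, so even the
    # second own move costs roughly b**(n + 1) nodes for branching factor b.

    def depth_of_kth_root_move(n_players, k):
        return (k - 1) * n_players + 1

    if __name__ == "__main__":
        b = 10                                    # assumed branching factor
        for n in (2, 4, 6):
            d = depth_of_kth_root_move(n, 2)      # depth of the second root move
            print(f"n = {n}: second root-player move at depth {d}, "
                  f"roughly {b ** d:,} nodes to reach it")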