
    Exploring algorithms to recognize similar board states in Arimaa

    The game of Arimaa was invented as a challenge to the field of game-playing artificial intelligence, which had grown somewhat haughty after IBM's supercomputer Deep Blue trounced world champion Kasparov at chess. Although Arimaa is simple enough for a child to learn and can be played with an ordinary chess set, existing game-playing algorithms and techniques have had a difficult time rising to the challenge of defeating the world's best human Arimaa players, mainly due to the game's impressive branching factor. This thesis introduces and analyzes new algorithms and techniques that attempt to recognize similar board states based on relative piece strength in a concentrated area of the board. Using this data, game-playing programs would be able to recognize patterns in order to discern tactics and moves that could lead to victory or defeat in similar situations based on prior experience.
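
    The abstract gives no implementation details, but the general idea of indexing local board regions by relative piece strength can be sketched as a canonical signature: map each piece in a small window to its strength rank among the pieces actually present, so that regions differing only in absolute piece types but sharing the same local dominance pattern compare equal. Everything below (the window size, the board representation, all names) is an illustrative assumption, not the thesis's algorithm.

        # Arimaa piece strengths, weakest to strongest (rabbit, cat, dog,
        # horse, camel, elephant); uppercase = Gold, lowercase = Silver.
        STRENGTH = {p: i for i, p in enumerate("rcdhme")}

        def region_signature(board, top, left, size=3):
            """Hashable signature of a size x size window based on relative
            piece strength rather than piece identity. `board` is assumed to
            map (row, col) -> piece char, with empty squares absent. This is
            a hypothetical sketch, not the thesis's code."""
            cells = []
            for r in range(top, top + size):
                for c in range(left, left + size):
                    piece = board.get((r, c))
                    cells.append(None if piece is None
                                 else (piece.isupper(), STRENGTH[piece.lower()]))
            # Re-rank the strengths that actually occur so only their relative
            # order matters: an elephant-vs-camel standoff then yields the
            # same signature as a camel-vs-horse one.
            present = sorted({cell[1] for cell in cells if cell is not None})
            rank = {s: i for i, s in enumerate(present)}
            return tuple(None if cell is None else (cell[0], rank[cell[1]])
                         for cell in cells)

    Regions with equal signatures could then share tactical statistics gathered from prior games, which is the pattern-recognition use the abstract describes.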

    Artificial Intelligence for Game Playing

    Arimaa is a strategic board game for two players, designed to be simple for human players but difficult for computers. The aim of this master's thesis is to design and implement a program with artificial-intelligence features capable of defeating a human player at Arimaa. The design focused on three key parts: position evaluation, move generation, and search. The program was run on the game server, where it defeated a number of bots as well as human players.
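
    The thesis code is not reproduced here, but the three parts it names (position evaluation, move generation, search) slot into the classic depth-limited alpha-beta skeleton sketched below. The interface (state.moves(), state.play(), a caller-supplied evaluate) is an assumption for illustration, not the thesis's actual API; note that in Arimaa one "move" is a turn of up to four steps, which is what drives the branching factor.

        def alphabeta(state, depth, alpha, beta, maximizing, evaluate):
            """Depth-limited minimax with alpha-beta pruning (generic sketch).

            Assumed interface: state.moves() yields legal moves, state.play(m)
            returns the successor state, evaluate(state) scores a position
            from the maximizing player's point of view.
            """
            if depth == 0 or state.is_terminal():
                return evaluate(state)                    # position evaluation
            if maximizing:
                best = float("-inf")
                for move in state.moves():                # move generation
                    best = max(best, alphabeta(state.play(move), depth - 1,
                                               alpha, beta, False, evaluate))
                    alpha = max(alpha, best)
                    if alpha >= beta:                     # beta cutoff
                        break
                return best
            best = float("inf")
            for move in state.moves():
                best = min(best, alphabeta(state.play(move), depth - 1,
                                           alpha, beta, True, evaluate))
                beta = min(beta, best)
                if beta <= alpha:                         # alpha cutoff
                    break
            return best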

    Arimaa

    Arimaa is a board game with simple rules: easy for humans, yet difficult for computers. The aim of this bachelor thesis is to survey the methods used in artificial-intelligence game playing, and then to design and implement a program able to play Arimaa against other players and programs. The design of the program consists mainly of move generation, move search, and evaluation of game positions. The program was finally tested on the game server, where it played against other programs.

    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
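
    For reference, the selection policy at the heart of UCT, the MCTS variant most of the surveyed work builds on, picks the child maximizing an upper confidence bound that trades the current mean reward off against an exploration bonus. A minimal sketch; the node attribute names and the constant c = sqrt(2) are illustrative assumptions:

        import math

        def uct_select(node, c=math.sqrt(2)):
            """Return the child maximizing the UCB1 score used by UCT.

            Assumes each child carries .visits and .total_reward, and that
            node.visits counts visits to the parent; names are illustrative.
            """
            def score(child):
                if child.visits == 0:
                    return float("inf")       # try every child at least once
                exploit = child.total_reward / child.visits
                explore = c * math.sqrt(math.log(node.visits) / child.visits)
                return exploit + explore
            return max(node.children, key=score)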

    Washington University Record, July 23, 2004


    Playing Multi-Action Adversarial Games: Online Evolutionary Planning versus Tree Search

    We address the problem of playing turn-based multi-action adversarial games, which include many strategy games with extremely high branching factors as players take multiple actions each turn. This leads to the breakdown of standard tree search methods, including Monte Carlo Tree Search (MCTS), as they become unable to reach a sufficient depth in the game tree. In this paper we introduce Online Evolutionary Planning (OEP) to address this challenge, which searches for combinations of actions to perform during a single turn, guided by a fitness function that evaluates the quality of a particular state. We compare OEP to different MCTS variations that constrain the exploration to deal with the high branching factor in the turn-based multi-action game Hero Academy. While the constrained MCTS variations outperform the vanilla MCTS implementation by a large margin, OEP is able to search the space of plans more efficiently than any of the tested tree search methods, as it has a relative advantage when the number of actions per turn increases.
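
    The abstract describes OEP only at a high level; the sketch below illustrates that idea under assumptions: keep a population of candidate action sequences for the current turn, score each by the heuristic value of the state it reaches, and evolve the population by truncation selection and mutation. The interface (legal_actions, apply, a caller-supplied evaluate) and all parameters are illustrative, not the authors' implementation.

        import random

        def oep_turn(state, evaluate, seq_len, pop_size=50, generations=100):
            """Online Evolutionary Planning sketch: evolve one turn's actions.

            Assumed interface: state.legal_actions() -> list of actions,
            state.apply(a) -> successor state, evaluate(state) -> float.
            """
            def regrow(seq, i):
                # (Re)generate actions from index i onward, so a mutated
                # sequence stays legal given its unchanged prefix.
                s = state
                for a in seq[:i]:
                    s = s.apply(a)
                for j in range(i, seq_len):
                    seq[j] = random.choice(s.legal_actions())
                    s = s.apply(seq[j])
                return seq

            def fitness(seq):
                s = state
                for a in seq:
                    s = s.apply(a)
                return evaluate(s)            # value of the turn's end state

            pop = [regrow([None] * seq_len, 0) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]          # truncation selection
                mutants = [regrow(list(p), random.randrange(seq_len))
                           for p in survivors]
                pop = survivors + mutants
            return max(pop, key=fitness)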

    Using particle swarm optimization to evolve two-player game agents

    Computer game-playing agents are almost as old as computers themselves, and people have been developing agents since the 1950s. Unfortunately, the techniques for game-playing agents have remained basically the same for almost half a century, an eternity in computer time. Recently developed approaches have shown that it is possible to develop game-playing agents with the help of learning algorithms. This study is based on the concept of algorithms that learn how to play board games from zero initial knowledge about playing strategies. A coevolutionary approach, where a neural network is used to assess the desirability of leaf nodes in a game tree and evolutionary algorithms are used to train neural networks in competition, is reviewed. This thesis then presents an alternative approach in which particle swarm optimization (PSO) is used to train the neural networks. Different variations of the PSO are implemented and compared. The results of the PSO approaches are also compared with those of an evolutionary programming approach. The performance of the PSO algorithms is investigated for different values of the PSO control parameters. This study shows that the PSO approach can be applied successfully to train game-playing agents.
    Dissertation (MSc), University of Pretoria, 2007.
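
    For context, the sketch below shows the standard global-best PSO update such a trainer builds on: each particle is a candidate solution (here it would be a flattened vector of neural-network weights) pulled toward its own best position and the swarm's best. The constants w = 0.72 and c1 = c2 = 1.49 are common textbook defaults; none of the names or values are taken from the thesis.

        import numpy as np

        def pso_step(pos, vel, pbest, pbest_scores, fitness,
                     w=0.72, c1=1.49, c2=1.49):
            """One iteration of global-best PSO (illustrative sketch).

            pos, vel, pbest: arrays of shape (n_particles, n_dims); fitness
            maps a position (e.g. a weight vector defining a game agent) to
            a score to maximize, such as results against a pool of opponents.
            """
            n, d = pos.shape
            gbest = pbest[np.argmax(pbest_scores)]        # swarm's best so far
            r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
            vel[:] = (w * vel
                      + c1 * r1 * (pbest - pos)           # cognitive pull
                      + c2 * r2 * (gbest - pos))          # social pull
            pos += vel
            for i in range(n):
                score = fitness(pos[i])
                if score > pbest_scores[i]:               # update personal bests
                    pbest_scores[i] = score
                    pbest[i] = pos[i].copy()
            return pbest[np.argmax(pbest_scores)]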

    The Daily Egyptian, May 09, 1974


    On monte carlo tree search and reinforcement learning

    Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not yet been thoroughly studied. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL, in conjunction with online search, demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
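
    One concrete instance of the relation the paper examines: the classic MCTS backup is an incremental average, i.e. a Monte Carlo value update with step size 1/n, and swapping in a constant step size yields an RL-flavored variant that weights recent simulations more heavily. A sketch with assumed node attributes:

        def backup(path, reward, alpha=None):
            """Propagate one simulation result up the visited path.

            path: nodes from root to leaf, each with .visits and .value
            (illustrative names). alpha=None reproduces the classic MCTS
            average backup (step size 1/n); a constant alpha, e.g. 0.1,
            gives an RL-style constant-step-size update instead.
            """
            for node in reversed(path):
                node.visits += 1
                step = 1.0 / node.visits if alpha is None else alpha
                node.value += step * (reward - node.value)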