1,654 research outputs found

    Investigating evolutionary checkers by incorporating individual and social learning, N-tuple systems and a round robin tournament

    In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is one demonstration of real success in this field and has inspired many other scientists. In this thesis, artificial neural networks are employed to evolve game-playing strategies for the game of checkers by introducing a league structure into the learning phase of a system based on Blondie24. We believe that this helps eliminate some of the randomness in the evolution. The best player obtained is tested against an evolutionary checkers program based on Blondie24, and the results are promising. In addition, we introduce an individual and social learning mechanism into the learning phase of the evolutionary checkers system. The best player obtained is tested against an implementation of an evolutionary checkers program, and also against a player that utilises a round robin tournament; again the results are promising. N-tuple systems are also investigated and used as position value functions for the game of checkers; the n-tuple architecture utilises temporal difference learning. The best player obtained is compared with an implementation of an evolutionary checkers program based on Blondie24, and also against a Blondie24-inspired player that utilises a round robin tournament, with promising results. Finally, we address the question of whether piece difference and look-ahead depth are important factors in the Blondie24 architecture. Our experiments show that both have a significant effect on learning ability.
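
    The n-tuple value function mentioned above can be sketched briefly. The following is a minimal illustration, not the thesis implementation: it assumes a checkers board encoded as 32 squares with five possible states, randomly chosen square patterns as n-tuples, and a plain TD(0) update; all constants and helper names are assumptions for illustration.

        import random

        N_STATES = 5      # empty, white man, white king, black man, black king (assumed encoding)
        TUPLE_LEN = 4     # squares per n-tuple (assumed)
        N_TUPLES = 8      # number of n-tuples (assumed)
        ALPHA = 0.01      # learning rate
        GAMMA = 1.0       # no discounting within a game

        # Random square patterns, each with its own zero-initialised lookup table of weights.
        patterns = [random.sample(range(32), TUPLE_LEN) for _ in range(N_TUPLES)]
        tables = [[0.0] * (N_STATES ** TUPLE_LEN) for _ in range(N_TUPLES)]

        def tuple_index(board, pattern):
            """Encode the states of the pattern's squares as a base-5 number."""
            idx = 0
            for sq in pattern:
                idx = idx * N_STATES + board[sq]
            return idx

        def value(board):
            """Position value = sum of the table entries indexed by the board."""
            return sum(t[tuple_index(board, p)] for p, t in zip(patterns, tables))

        def td_update(board, next_board, reward):
            """TD(0): move the current estimate toward reward + gamma * next value."""
            delta = reward + GAMMA * value(next_board) - value(board)
            for p, t in zip(patterns, tables):
                t[tuple_index(board, p)] += ALPHA * delta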

    Introduction to Machine Learning

    PSO-based coevolutionary Game Learning

    Games have been investigated as computationally complex problems since the inception of artificial intelligence in the 1950s. Originally, search-based techniques were applied to create a competent (and sometimes even expert) game player. These search-based techniques, such as game trees, made use of human-defined knowledge to evaluate the current game state and recommend the best move to make next. Recent research has shown that neural networks can be evolved as game state evaluators, thereby removing the human intelligence factor completely. This study builds on the initial research that made use of evolutionary programming to evolve neural networks in the game learning domain. Particle Swarm Optimisation (PSO) is applied inside a coevolutionary training environment to evolve the weights of the neural network. The training technique is applied to both the zero-sum and non-zero-sum game domains, with specific application to Tic-Tac-Toe, Checkers and the Iterated Prisoner's Dilemma (IPD). The influence of the various PSO parameters on playing performance is experimentally examined, and the overall performance of three different neighbourhood information-sharing structures is compared. A new coevolutionary scoring scheme and particle dispersement operator are defined, inspired by Formula One Grand Prix racing. Finally, the PSO is applied in three novel ways to evolve strategies for the IPD, the first application of its kind in the PSO field. The PSO-based coevolutionary learning technique described and examined in this study shows promise in evolving intelligent evaluators for the aforementioned games, and further study will be conducted to analyse its scalability to larger search spaces and games of varying complexity.

    Dissertation (MSc), University of Pretoria, 2005. Computer Science.
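
    As a rough illustration of the coevolutionary PSO idea summarised above (not the dissertation's code), the sketch below treats each particle as a flat weight vector for a neural-network board evaluator and derives relative fitness from round-robin games between particles. The helper play_game, the swarm size, and the inertia and acceleration constants are assumed placeholders.

        import numpy as np

        def coevolve_pso(n_particles, n_weights, play_game, iterations=100,
                         w=0.72, c1=1.49, c2=1.49):
            # Each particle is a candidate weight vector for the evaluator network.
            pos = np.random.uniform(-1, 1, (n_particles, n_weights))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_score = np.full(n_particles, -np.inf)

            for _ in range(iterations):
                # Relative fitness: round-robin score against all other particles,
                # where play_game(w1, w2) is assumed to return +1 / 0 / -1 for player 1.
                score = np.zeros(n_particles)
                for i in range(n_particles):
                    for j in range(n_particles):
                        if i != j:
                            score[i] += play_game(pos[i], pos[j])

                # Update personal bests and the global best (gbest topology).
                better = score > pbest_score
                pbest[better] = pos[better]
                pbest_score[better] = score[better]
                gbest = pbest[np.argmax(pbest_score)]

                # Standard PSO velocity and position update.
                r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = pos + vel

            return pbest[np.argmax(pbest_score)]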

    Symbolic versus sub-symbolic approaches: a case study on training Deep Networks to play Nine Men’s Morris game

    Thanks to new Deep Learning techniques, artificial neural networks have completely revolutionised the technological landscape of recent years, proving effective in a wide range of Artificial Intelligence tasks and related fields. It is therefore interesting to analyse how, and to what extent, deep networks can replace symbolic AIs. Following the impressive results obtained in the game of Go, the game of Nine Men's Morris, a widespread and widely studied board game, was chosen as a case study. A fully sub-symbolic system, Neural Nine Men's Morris, was created; it uses three neural networks to choose the best move. The networks were trained on a dataset of more than 1,500,000 (game state, best move) pairs, generated from the choices of a symbolic AI. The system showed that it had learned the rules of the game, proposing a valid move in more than 99% of the test cases. It also reached an accuracy of 39% with respect to the dataset and developed its own playing strategy, different from that of the AI that trained it, proving to be a weaker or stronger player depending on the opponent. The results obtained in this case study suggest that, in this context, the key to designing state-of-the-art AI systems seems to be a good balance between symbolic and sub-symbolic techniques, giving more weight to the latter, with the aim of achieving seamless integration of the two.
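
    A minimal sketch of the supervised setup the abstract describes: a network trained on (game state, best move) pairs produced by a symbolic AI. The board and move encodings, the layer sizes, and the use of a single network (the real system uses three) are assumptions for illustration; PyTorch is used here only as an example framework.

        import torch
        import torch.nn as nn

        # Assumed encodings: 24 board points, each one-hot over {empty, own, opponent};
        # a move is treated as one class out of all (from, to) point pairs.
        N_POINTS, N_MOVE_CLASSES = 24, 24 * 24

        model = nn.Sequential(
            nn.Linear(N_POINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_MOVE_CLASSES),
        )
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        def train_step(states, best_moves):
            """states: (batch, 72) float tensor; best_moves: (batch,) class indices."""
            optimizer.zero_grad()
            loss = loss_fn(model(states), best_moves)
            loss.backward()
            optimizer.step()
            return loss.item()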

    Artificial and Computational Intelligence in Games (Dagstuhl Seminar 12191)

    This report documents the program and the outcomes of Dagstuhl Seminar 12191 "Artificial and Computational Intelligence in Games". The aim of the seminar was to bring together creative experts in an intensive meeting with the common goals of gaining a deeper understanding of various aspects of artificial and computational intelligence in games, identifying the main challenges in game AI research, and finding the most promising avenues for dealing with them. This was accomplished mainly by means of workgroups on 14 different topics (ranging from search, learning, and modeling to architectures, narratives, and evaluation), and plenary discussions on the results of the workgroups. This report presents the conclusions that each of the workgroups reached. We also include short descriptions of the few talks that were unrelated to any of the workgroups.