12 research outputs found

    Mastering the game of Go without human knowledge

    A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
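
    The training loop this abstract describes is circular: self-play games are generated with a search guided by the current network, the network is trained toward the search's move probabilities and the game outcome, and the improved network strengthens the next round of search. A minimal sketch of that loop follows; it is illustrative only, and `Game`, `mcts_policy`, `train` and all parameters are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of an AlphaGo Zero-style training loop (illustrative only).
import random

class Game:
    """Toy stand-in for Go: players alternate picking from 9 moves."""
    def __init__(self):
        self.history, self.to_play = [], +1
    def legal_moves(self):
        return [m for m in range(9) if m not in self.history]
    def play(self, move):
        self.history.append(move); self.to_play = -self.to_play
    def over(self):
        return len(self.history) == 9
    def winner(self):
        return random.choice([+1, -1])    # placeholder outcome

def mcts_policy(game, net, simulations=50):
    """Stand-in for the search: returns visit-count move probabilities."""
    moves = game.legal_moves()
    visits = {m: 1 for m in moves}        # a real search would expand a tree here
    for _ in range(simulations):
        visits[random.choice(moves)] += 1
    total = sum(visits.values())
    return {m: v / total for m, v in visits.items()}

def self_play_game(net):
    """One self-play game, recording (state, search policy, eventual winner)."""
    game, records = Game(), []
    while not game.over():
        pi = mcts_policy(game, net)
        records.append((tuple(game.history), pi, game.to_play))
        game.play(max(pi, key=pi.get))    # the paper samples early, then plays greedily
    z = game.winner()
    # Each position is labelled with the game outcome from that player's view.
    return [(s, pi, z * player) for s, pi, player in records]

def train(net, data):
    """Placeholder for gradient updates toward the (pi, z) targets."""
    pass

net = None                                # a real system would hold network weights
for iteration in range(3):                # self-play -> train -> repeat
    data = []
    for _ in range(5):
        data.extend(self_play_game(net))
    train(net, data)
```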

    Monte-Carlo tree search with heuristic knowledge: A novel way in solving capturing and life and death problems in Go

    Monte-Carlo (MC) tree search is a new research field. Its effectiveness in searching large state spaces, such as the Go game tree, is well recognized in the computer Go community. Go domain-specific heuristics and techniques, as well as domain-independent heuristics and techniques, are systematically investigated in the context of MC tree search in this dissertation. Search extensions based on these heuristics and techniques can significantly improve the effectiveness and efficiency of MC tree search. Two major areas of investigation are addressed in this dissertation research: I. the identification and use of effective heuristic knowledge in guiding MC simulations; II. the extension of the MC tree search algorithm with heuristics. Go, the most challenging board game for machines, serves as the test bed. The effectiveness of the MC tree search extensions is demonstrated through the performance of Go tactic problem solvers using these techniques. The main contributions of this dissertation are:
    1. A heuristics-based Monte-Carlo tactic tree search framework is proposed to extend the standard Monte-Carlo tree search.
    2. (Go) Knowledge-based heuristics are systematically investigated to improve the Monte-Carlo tactic tree search.
    3. Pattern learning is demonstrated to be effective in improving the Monte-Carlo tactic tree search.
    4. Domain-knowledge-independent tree search enhancements are shown to be effective in improving Monte-Carlo tactic tree search performance.
    5. A strong Go tactic solver based on the proposed algorithms outperforms traditional game-tree search algorithms.
    The techniques developed in this dissertation research can benefit other game domains and application fields.
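
    One of the dissertation's central ideas, using heuristic knowledge to guide the MC simulations, can be sketched as a playout policy that samples moves in proportion to a heuristic weight rather than uniformly. The sketch below is an assumption-laden illustration: `heuristic_score` and its "hot points" are hypothetical, not the dissertation's heuristics.

```python
# Illustrative sketch: biasing Monte-Carlo playouts with heuristic knowledge
# instead of choosing moves uniformly at random. All names are hypothetical.
import random

def heuristic_score(move, state):
    """Hypothetical domain heuristic; returns a non-negative move weight."""
    return 1.0 + (move in state.get("hot_points", set()))  # e.g. capture-related points

def biased_playout(state, legal_moves, max_depth=30):
    """Random playout in which heuristic weights skew move selection."""
    for _ in range(max_depth):
        if not legal_moves:
            break
        weights = [heuristic_score(m, state) for m in legal_moves]
        move = random.choices(legal_moves, weights=weights, k=1)[0]
        legal_moves = [m for m in legal_moves if m != move]  # toy state update
    return random.choice([0, 1])   # placeholder playout result

result = biased_playout({"hot_points": {3, 7}}, list(range(9)))
```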

    Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game

    Abstract strategy games present a deterministic, perfect-information environment in which to test the strategic capabilities of artificial intelligence systems. With no unknowns or random elements, only the competitors’ performance determines the result. This thesis takes one such game, Lines of Action, and attempts to develop a competitive heuristic. Due to the complexity of Lines of Action, artificial neural networks are used to model the relative values of board states. An application, pLoGANN (Parallel Lines of Action with Genetic Algorithm and Neural Networks), is developed to train the weights of this neural network by running a genetic algorithm over a distributed environment. While pLoGANN itself proved to be an efficient design, it failed to produce a competitive Lines of Action player, shedding light on the difficulty of developing a neural network to model such a large and complex solution space.
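
    The weight-tuning scheme the abstract describes, a genetic algorithm searching over neural-network weight vectors scored by game results, can be sketched as follows. The `fitness` function here is a toy placeholder (a real run would score weights by playing Lines of Action games), and all constants are hypothetical assumptions.

```python
# Sketch of evolutionary weight tuning: a GA over flat weight vectors.
import random

N_WEIGHTS, POP, GENS = 40, 20, 10

def fitness(weights):
    """Placeholder: would play Lines of Action games using this evaluation net."""
    return -sum(w * w for w in weights)   # toy objective for illustration

def mutate(weights, rate=0.1, scale=0.5):
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    cut = random.randrange(1, N_WEIGHTS)  # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP)]
for gen in range(GENS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 2]           # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children
best = max(population, key=fitness)
```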

    Learning From Geometry In Learning For Tactical And Strategic Decision Domains

    Artificial neural networks (ANNs) are an abstraction of the low-level architecture of biological brains that are often applied to general problem solving and function approximation. Neuroevolution (NE), i.e. the evolution of ANNs, has proven effective at solving problems in a variety of domains. Information from the domain is input to the ANN, which outputs its desired actions. This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs. The key insight in HyperNEAT is to make the algorithm aware of the geometry in which the ANNs are embedded, and thereby exploit such domain geometry to evolve ANNs more effectively. The dissertation focuses on applying HyperNEAT to tactical and strategic decision domains, which involve simultaneously considering short-term tactics while also balancing long-term strategies. Board games such as checkers and Go are canonical examples of such domains; however, they also include real-time strategy games and military scenarios. The dissertation details three proposed extensions to HyperNEAT designed to work in tactical and strategic decision domains. The first is an action-selector ANN architecture that allows the ANN to indicate its judgements on every possible action all at once. The second technique, called substrate extrapolation, allows learning basic concepts at a low resolution and then increasing the resolution to learn more advanced concepts. The final extension is geometric game-tree pruning, whereby HyperNEAT can endow the ANN with the ability to focus on specific areas of a domain (such as a checkers board) that deserve more inspection. The culminating contribution is to demonstrate the ability of HyperNEAT with these extensions to play Go, one of the most challenging games for artificial intelligence, by combining HyperNEAT with UCT.
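
    The indirect encoding at the heart of HyperNEAT can be illustrated in a few lines: an evolved function (a CPPN) is queried at the coordinates of every pair of substrate neurons, so connection weights become a function of the domain's geometry. The `cppn` below is a fixed hypothetical stand-in for an evolved network, and the grid layout and threshold are illustrative assumptions.

```python
# Sketch of a HyperNEAT-style substrate query: weights are generated by a
# geometry-aware function rather than stored individually.
import math

def cppn(x1, y1, x2, y2):
    """Hypothetical stand-in for an evolved compositional pattern-producing network."""
    d = math.hypot(x2 - x1, y2 - y1)
    return math.sin(3.0 * d) * math.exp(-d)   # smooth, geometry-dependent pattern

def substrate_weights(n=4, threshold=0.2):
    """Connect two n x n neuron layers laid out on the unit square."""
    coords = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
    weights = {}
    for (x1, y1) in coords:            # source-layer neuron position
        for (x2, y2) in coords:        # target-layer neuron position
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:     # express only sufficiently strong connections
                weights[((x1, y1), (x2, y2))] = w
    return weights

w = substrate_weights()
```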

    Spatial-temporal reasoning applications of computational intelligence in the game of Go and computer networks

    Spatial-temporal reasoning is the ability to reason with spatial images or information about space over time. In this dissertation, computational intelligence techniques are applied to computer Go and computer network applications. Of the four experiments, the first three relate to the game of Go, and the last concerns the routing problem in computer networks. The first experiment represents the first training of a modified cellular simultaneous recurrent network (CSRN) with cellular particle swarm optimization (PSO). Another contribution is a comprehensive theoretical study of a 2x2 Go research platform, conducted with a certified 5-dan Go expert; the proposed architecture successfully trains a 2x2 game tree. The contribution of the second experiment is a computational intelligence algorithm called collective cooperative learning (CCL). CCL learns the group size of Go stones on a Go board with zero knowledge by communicating only with immediate neighbors, and an analysis determines the lower bound of a design parameter that guarantees a solution. The contribution of the third experiment is a unified system architecture for a Go robot; a prototype Go robot is implemented for the first time in the literature. The last experiment tackles a disruption-tolerant routing problem for a network suffering from link disruption. This experiment represents the first time the disruption-tolerant routing problem has been formulated as a Markov Decision Process, and the packet delivery rate is improved under a range of link-disruption levels via a reinforcement learning approach.
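
    The last experiment's framing, routing under link disruption as a Markov Decision Process solved by reinforcement learning, can be sketched with tabular Q-learning on a toy topology. The network, rewards, and parameters below are hypothetical illustrations, not the dissertation's model.

```python
# Toy MDP for disruption-tolerant routing, solved with tabular Q-learning.
import random

NODES = ["A", "B", "C", "D"]               # D is the destination
LINKS = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}
P_UP = 0.8                                  # probability a chosen link is not disrupted
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in NODES for a in LINKS[s]}

def step(state, action):
    """One hop attempt: the link may be disrupted; reaching D ends the episode."""
    if random.random() > P_UP:
        return state, -1.0, False           # disrupted link: stay put, pay a cost
    if action == "D":
        return "D", 10.0, True              # packet delivered
    return action, -1.0, False              # forwarded one hop

for episode in range(2000):
    state = "A"
    while state != "D":
        actions = LINKS[state]
        if random.random() < EPS:
            action = random.choice(actions)                     # explore
        else:
            action = max(actions, key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        future = 0.0 if done else max(Q[(nxt, a)] for a in LINKS[nxt])
        Q[(state, action)] += ALPHA * (reward + GAMMA * future - Q[(state, action)])
        state = nxt
```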

    Guiding Monte Carlo Tree Search simulations through Bayesian Opponent Modeling in The Octagon Theory

    Board games present a very challenging decision-making problem for Artificial Intelligence. Although classical tree-search approaches have been successful in various board games, such as Chess, these approaches are still limited by current technology when applied to higher-complexity games such as Go. It was therefore not until the appearance of Monte Carlo Tree Search (MCTS) methods that higher-complexity games became a main focus of research, as solution prospects began to appear in this domain. This thesis builds on the current state of the art in MCTS by investigating the integration of Opponent Modeling with MCTS. The goal of this integration is to guide the simulations of the MCTS algorithm using knowledge about the opponent, obtained in real time through Bayesian Opponent Modeling, with the intention of reducing the number of irrelevant computations performed by purely stochastic, domain-independent methods. For this research, the two-player deterministic board game The Octagon Theory was used: its rules, fixed problem length and board configuration present not only a difficult challenge for both the creation of opponent models and the execution of the MCTS method itself, but also a clear benchmark for comparing algorithms. An analysis of the game-tree complexity suggests that the large-board version of the game is in the same complexity class as Shogi and 19x19 Go, making it a suitable board game for research in this area. Throughout this report, several MCTS policies and enhancements are presented and compared not only with the proposed variation, but also with standard Monte Carlo search and the best known greedy approach for The Octagon Theory. The experiments reveal that a combination of Move Groups, Decisive Moves, Upper Confidence Bounds for Trees (UCT), Limited Simulation Lengths and an Opponent Modeling-based simulation policy turns a formerly losing MCTS agent into the best-performing one, in a domain with an estimated game-tree complexity of 10^293, even when the provided computational budget is kept low.
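
    The two ingredients the experiments combine, UCT selection in the tree and an opponent model biasing the simulations, can be sketched as below. The opponent model shown (a simple move-frequency table) is a deliberate simplification of Bayesian Opponent Modeling, and all names are hypothetical.

```python
# UCB1-based tree selection plus an opponent-model-weighted playout move.
import math, random

C = 1.4142  # UCT exploration constant

def uct_select(children):
    """children: list of dicts with 'visits' and 'wins'. Pick the max-UCB1 child."""
    total = sum(ch["visits"] for ch in children)
    def ucb1(ch):
        if ch["visits"] == 0:
            return float("inf")          # always try unvisited children first
        return ch["wins"] / ch["visits"] + C * math.sqrt(math.log(total) / ch["visits"])
    return max(children, key=ucb1)

def modeled_playout_move(legal_moves, opponent_counts):
    """Weight opponent replies by how often the model has observed them."""
    weights = [1 + opponent_counts.get(m, 0) for m in legal_moves]
    return random.choices(legal_moves, weights=weights, k=1)[0]

best = uct_select([{"visits": 10, "wins": 6}, {"visits": 3, "wins": 2}])
reply = modeled_playout_move([0, 1, 2], {1: 5})
```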

    Ising Graphical Model

    The Ising model is an important model in statistical physics, with over 10,000 papers published on the topic. This model assumes binary variables and only local pairwise interactions between neighbouring nodes. Inference for the general Ising model is NP-hard; this includes tasks such as calculating the partition function, finding a lowest-energy (ground) state and computing marginal probabilities. Past approaches have proceeded by working with classes of tractable Ising models, such as Ising models defined on a planar graph. For such models, the partition function and ground state can be computed exactly in polynomial time by establishing a correspondence with perfect matchings in a related graph. In this thesis we continue this line of research. In particular, we simplify previous inference algorithms for the planar Ising model. The key to our construction is the complementary correspondence between graph cuts of the model graph and perfect matchings of its expanded dual. We show that our exact algorithms are effective and efficient on a number of real-world machine learning problems. We also investigate heuristic methods for approximating ground states of non-planar Ising models, and show that in this setting our approximate algorithms are superior to current state-of-the-art methods.
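
    For reference, the quantities the abstract lists as NP-hard to compute can be written out in standard textbook notation (the symbols below are conventional, not taken from the thesis):

```latex
% Standard Ising model: spins s_i \in \{-1,+1\}, couplings J_{ij} on
% neighbouring pairs (i,j), optional external field h_i, temperature T.
E(\mathbf{s}) = -\sum_{(i,j)} J_{ij}\, s_i s_j - \sum_i h_i s_i,
\qquad
Z = \sum_{\mathbf{s}\in\{-1,+1\}^n} e^{-E(\mathbf{s})/T}
```

    The ground state is the configuration minimising E, and a marginal probability is a sum of e^{-E(s)/T}/Z over all configurations with one spin held fixed; all three quantities range over 2^n configurations, which is why general exact inference is hard.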

    The estimation of reward and value in reinforcement learning

    EThOS - Electronic Theses Online Service, United Kingdom.