
    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
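    The tree-search/random-sampling balance the abstract describes is usually realized through the UCB1 selection rule of the UCT variant of MCTS, which trades off exploiting well-scoring children against exploring rarely visited ones. A minimal sketch (the child representation as (value_sum, visit_count) pairs is an assumption for illustration, not taken from the survey):

    ```python
    import math

    def ucb1(value_sum, visits, parent_visits, c=1.414):
        """UCB1 score for one child: mean value plus an exploration bonus."""
        if visits == 0:
            return float("inf")  # unvisited children are always tried first
        exploration = c * math.sqrt(math.log(parent_visits) / visits)
        return value_sum / visits + exploration

    def select(children, parent_visits):
        """Pick the index of the child maximizing UCB1 (the UCT selection step)."""
        scores = [ucb1(v, n, parent_visits) for v, n in children]
        return scores.index(max(scores))
    ```

    In a full MCTS loop this selection step is followed by expansion, a random playout, and backpropagation of the playout result up the selected path.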

    Spatial-temporal reasoning applications of computational intelligence in the game of Go and computer networks

    Spatial-temporal reasoning is the ability to reason with spatial images or information about space over time. In this dissertation, computational intelligence techniques are applied to computer Go and computer network applications. Among four experiments, the first three are related to the game of Go, and the last one concerns the routing problem in computer networks. The first experiment represents the first training of a modified cellular simultaneous recurrent network (CSRN) with cellular particle swarm optimization (PSO). Another contribution is the development of a comprehensive theoretical study of a 2x2 Go research platform with a certified 5 dan Go expert. The proposed architecture successfully trains a 2x2 game tree. The contribution of the second experiment is the development of a computational intelligence algorithm called collective cooperative learning (CCL). CCL learns the group size of Go stones on a Go board with zero knowledge by communicating only with the immediate neighbors. An analysis determines the lower bound of a design parameter that guarantees a solution. The contribution of the third experiment is the proposal of a unified system architecture for a Go robot. A prototype Go robot is implemented for the first time in the literature. The last experiment tackles a disruption-tolerant routing problem for a network suffering from link disruption. This experiment represents the first time that the disruption-tolerant routing problem has been formulated as a Markov Decision Process. In addition, the packet delivery rate has been improved under a range of link disruption levels via a reinforcement learning approach. --Abstract, page iv
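    An MDP formulation of routing, solved with reinforcement learning, typically reduces to learning a value for each (node, next-hop) pair. A minimal tabular Q-learning sketch; the state/action encoding, reward, and exploration scheme here are hypothetical illustrations of that framing, not the dissertation's actual model:

    ```python
    import random

    def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
        """One Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

    def choose_next_hop(Q, state, epsilon=0.1):
        """Epsilon-greedy: usually forward via the best-known neighbor, sometimes explore."""
        if random.random() < epsilon:
            return random.choice(list(Q[state]))
        return max(Q[state], key=Q[state].get)
    ```

    Under link disruption, failed forwarding attempts would yield negative rewards, steering the learned policy toward more reliable neighbors.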

    Templates in chess memory: A mechanism for recalling several boards

    This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973): To what extent is skilled chess memory limited by the size of short-term memory (about 7 chunks)? This question is addressed first with an experiment where subjects, ranging from class A players to grandmasters, are asked to recall up to 5 positions presented during 5 seconds each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a chess master gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in STM, to store information rapidly.

    Expert memory: A comparison of four theories

    This paper compares four current theories of expertise with respect to chess players' memory: Chase and Simon's (1973) chunking theory, Holding's (1985) SEEK theory, Ericsson and Kintsch's (1995) long-term working memory theory, and Gobet and Simon's (1996b) template theory. The empirical areas showing the largest discriminative power include recall of random and distorted positions, recall with very short presentation times, and interference studies. Contrary to recurrent criticisms in the literature, it is shown that the chunking theory is consistent with most of the data. However, the best performance in accounting for the empirical evidence is obtained by the template theory. The theory, which unifies low-level aspects of cognition, such as chunks, with high-level aspects, such as schematic knowledge and planning, proposes that chunks are accessed through a discrimination net, where simple perceptual features are tested, and that they can evolve into more complex data structures (templates) specific to classes of positions. Implications for the study of expertise in general include the need for detailed process models of expert behavior and the need to use empirical data spanning the traditional boundaries of perception, memory, and problem solving.

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.

    Extensible graphical game generator

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000. Includes bibliographical references (leaves 162-167). An ontology of games was developed, and the similarities between games were analyzed and codified into reusable software components in a system called EGGG, the Extensible Graphical Game Generator. By exploiting the similarities between games, EGGG makes it possible for someone to create a fully functional computer game with a minimum of programming effort. The thesis behind the dissertation is that there exist sufficient commonalities between games that such a software system can be constructed. In plain English, the thesis is that games are really a lot more alike than most people imagine, and that these similarities can be used to create a generic game engine: you tell it the rules of your game, and the engine renders it into an actual computer game that everyone can play. By Jon Orwant.

    Random Search Algorithms

    In this project we designed and developed improvements for the random search algorithm UCT with a focus on improving performance with directed acyclic graphs and groupings. We then performed experiments in order to quantify performance gains with both artificial game trees and computer Go. Finally, we analyzed the outcome of the experiments and presented our findings. Overall, this project represents original work in the area of random search algorithms on directed acyclic graphs and provides several opportunities for further research.

    Artificial Intelligence and Cyber Power from a Strategic Perspective

    Artificial intelligence can outperform humans at narrowly defined tasks and will enable a new generation of autonomous weapon systems. Cyberspace will play a crucial role in future conflicts due to the integration of digital infrastructure in society and the expected prevalence of autonomous systems on the battlefield. AI cyber weapons create a dangerous class of persistent threats that can actively and quickly adjust tactics as they relentlessly and independently probe and attack networks.