675 research outputs found

    Thinking Fast and Slow with Deep Learning and Tree Search

    Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.

    Comment: v1 to v2: added a value function in MCTS; changed some MCTS hyper-parameters; repeated experiments, with improved accuracy and errors shown (note the reduction in effect size for the tpt/cat experiment); results from a longer training run, including changes in expert strength during training; comparison to MoHex. v3: clarifies the independence of ExIt and AG0. v4: see appendix.
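    In case a concrete picture helps, the following is a minimal Python sketch of the planning/generalisation decomposition the abstract describes. The env, net, and tree_search interfaces are hypothetical placeholders invented for illustration, not the authors' implementation.

        def expert_iteration(env, net, tree_search, n_iterations=10, n_games=100):
            """Illustrative ExIt loop: a tree-search 'expert' plans, a
            neural-network 'apprentice' generalises those plans, and the
            improved network in turn guides the next round of search.
            All interfaces here are assumed, not taken from the paper."""
            for _ in range(n_iterations):
                examples = []  # (state, search policy) pairs for imitation
                for _ in range(n_games):
                    state = env.reset()
                    while not env.is_terminal(state):
                        # Expert phase: tree search guided by the current network.
                        search_policy = tree_search(state, net)
                        examples.append((state, search_policy))
                        state = env.play(state, search_policy.sample())
                # Apprentice phase: the network generalises the expert's plans.
                net.train_on(examples)
            return net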

    Automated state of play: rethinking anthropocentric rules of the game

    Automation of play has become an ever more noticeable phenomenon in the domain of video games, expressed by self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. This article proposes to look at AI-driven non-human play and, in doing so, to rethink digital games in light of their cybernetic nature, departing from the anthropocentric perspectives that dominate the field of Game Studies. A decentralised post-humanist reading, the author argues, not only allows us to rethink digital games and play, but is a necessary condition for critically reflecting on AI, which, due to the fictional character of video games, often plays by very different rules than so-called “true” AI.

    Automation of play: theorizing self-playing games and post-human ludic agents

    This article offers a critical reflection on the automation of play and its significance for theoretical inquiries into digital games and play. Automation has become an ever more noticeable phenomenon in the domain of video games, expressed by self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. On the following pages, the author explores various instances of automated non-human play and proposes a post-human theoretical lens, which may help to create a new framework for the understanding of video games, renegotiate the current theories of interaction prevalent in game studies, and rethink the relationship between human players and digital games.

    Improved Reinforcement Learning with Curriculum

    Humans tend to learn complex abstract concepts faster if examples are presented in a structured manner. For instance, when learning how to play a board game, usually one of the first concepts learned is how the game ends, i.e. the actions that lead to a terminal state (win, lose or draw). The advantage of learning end-games first is that once the actions which lead to a terminal state are understood, it becomes possible to incrementally learn the consequences of actions that are further away from a terminal state; we call this an end-game-first curriculum. Currently the state-of-the-art machine learning player for general board games, AlphaZero by Google DeepMind, does not employ a structured training curriculum; instead, it learns from the entire game at all times. By employing an end-game-first training curriculum to train an AlphaZero-inspired player, we empirically show that the rate of learning of an artificial player can be improved during the early stages of training when compared to a player not using a training curriculum.

    Comment: Draft prior to submission to IEEE Trans on Games. Changed paper slightly.
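    As an illustration of what an end-game-first curriculum might look like in code, the sketch below samples training positions from a window that starts at the terminal state and widens toward the full game as training progresses. The function name and the linear schedule are assumptions made for illustration, not the schedule evaluated in the paper.

        def endgame_first_batch(games, epoch, total_epochs):
            """End-game-first curriculum sketch (illustrative, not the
            paper's method): early epochs see only positions close to a
            terminal state; the window then widens until the full game
            is in play.

            `games` is a list of games, each an ordered list of positions
            from the opening move to the terminal state."""
            # Fraction of each game (counted back from the end) in play;
            # reaches the full game halfway through training.
            fraction = min(1.0, (epoch + 1) / (0.5 * total_epochs))
            batch = []
            for positions in games:
                cutoff = int(len(positions) * (1.0 - fraction))
                batch.extend(positions[cutoff:])  # keep the late-game tail only
            return batch

    For example, with total_epochs=100, epoch 0 trains on roughly the last 2% of each game's positions, and the window grows to cover the full game by epoch 49.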