
    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future work. Comment: 23 pages, 1 figure, 1 table.
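    The abstract describes embodied evolution only at a conceptual level. The following is a minimal sketch, in Python, of the kind of decentralised on-line loop it refers to: every robot evaluates its own controller while behaving, exchanges genomes with nearby robots, and applies selection and mutation locally. All names, parameters, and the toy objective are illustrative assumptions, not the algorithm of any specific paper in the review.

```python
import random

GENOME_LEN = 16        # illustrative controller-parameter count
MUTATION_STD = 0.1     # illustrative mutation step size

class Robot:
    """One member of the collective; runs its own evolutionary loop."""

    def __init__(self):
        self.genome = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
        self.fitness = 0.0
        self.received = []   # (genome, fitness) pairs heard from neighbours

    def evaluate_lifetime(self, task_performance):
        """On-line evaluation: fitness is measured while the robot behaves."""
        self.fitness = task_performance(self.genome)

    def broadcast(self, neighbours):
        """Locally spread the current genome and its measured fitness."""
        for other in neighbours:
            other.received.append((list(self.genome), self.fitness))

    def select_and_vary(self):
        """Local selection over received genomes, then Gaussian mutation."""
        candidates = self.received + [(self.genome, self.fitness)]
        best_genome, _ = max(candidates, key=lambda pair: pair[1])
        self.genome = [g + random.gauss(0.0, MUTATION_STD) for g in best_genome]
        self.received.clear()

# Toy usage: a swarm optimising a made-up performance measure.
def toy_performance(genome):
    return -sum(g * g for g in genome)   # placeholder objective

swarm = [Robot() for _ in range(20)]
for generation in range(50):
    for robot in swarm:
        robot.evaluate_lifetime(toy_performance)
    for robot in swarm:
        neighbours = random.sample([r for r in swarm if r is not robot], 3)
        robot.broadcast(neighbours)
    for robot in swarm:
        robot.select_and_vary()
```

    The key property the sketch tries to capture is that there is no central evolutionary algorithm: selection pressure emerges from local genome exchanges within the collective.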

    Fitness Biasing for Evolving an Xpilot Combat Agent

    In this paper we present an application of Fitness Biasing, a type of Punctuated Anytime Learning, for learning autonomous agents in the space combat game Xpilot. Fitness Biasing was originally developed as a means of linking the model to the actual robot in evolutionary robotics. We use fitness biasing with a standard genetic algorithm to learn control programs for a video game agent in real time. Xpilot-AI, an Xpilot add-on designed for testing learning systems, is used to evolve the controller in the background, while periodic checks during normal game play compensate for errors produced by running the system at a high frame rate. The resulting learned controllers are comparable to our best hand-coded Xpilot-AI bots, display complex behaviors that resemble human strategies, and are capable of adapting to a changing enemy in real time.
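    Fitness biasing is described here only in outline. The sketch below illustrates one plausible reading of it, assuming a standard genetic algorithm whose fast model-based fitness is periodically rescaled by a correction factor computed from occasional evaluations in the real game. The functions `model_fitness` and `real_fitness` and all constants are hypothetical placeholders, not the Xpilot-AI implementation.

```python
import random

POP_SIZE, GENOME_LEN = 30, 10
CHECK_PERIOD = 5            # generations between real-game checks (illustrative)

def model_fitness(genome):
    """Fast, high-frame-rate background estimate (placeholder)."""
    return sum(genome)

def real_fitness(genome):
    """Occasional evaluation in normal game play (placeholder, noisier)."""
    return 0.8 * sum(genome) + random.gauss(0.0, 0.5)

def evolve(generations=50):
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    bias = 1.0                               # multiplicative correction factor
    for gen in range(generations):
        scored = [(bias * model_fitness(g), g) for g in population]
        scored.sort(key=lambda pair: pair[0], reverse=True)

        # Periodic check: compare model and real scores of the current best
        # individual and update the bias so model fitness tracks the real game.
        if gen % CHECK_PERIOD == 0:
            best = scored[0][1]
            model_score = model_fitness(best)
            if model_score != 0:
                bias = real_fitness(best) / model_score

        # Standard GA step: truncation selection, one-point crossover, mutation.
        parents = [g for _, g in scored[:POP_SIZE // 2]]
        children = []
        while len(children) < POP_SIZE:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = [x + random.gauss(0.0, 0.05) for x in a[:cut] + b[cut:]]
            children.append(child)
        population = children
    return max(population, key=model_fitness)

best_controller = evolve()
```

    The point of the bias factor is that evolution can run almost entirely in the fast background model, with only infrequent, expensive checks in the live game keeping the two in agreement.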

    Fast Approximate Max-n Monte Carlo Tree Search for Ms Pac-Man

    We present an application of Monte Carlo tree search (MCTS) to the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five-player max-n tree representation of the game with limited tree search depth. We performed a number of experiments using both the MCTS game agents (for Pac-Man and the ghosts) and agents used in previous work (for the ghosts). Performance-wise, our approach achieves excellent scores, outperforming previous non-MCTS approaches to the game by up to two orders of magnitude. © 2011 IEEE
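    The abstract specifies depth-limited MCTS over a five-player max-n tree but gives no code. The sketch below shows a generic depth-limited max-n UCT loop consistent with that description: each node keeps one reward total per player, and the player to move selects on its own component. The game interface (`player_to_move`, `expand`, `rollout`) is an assumed, hypothetical abstraction, not the authors' Ms Pac-Man model.

```python
import math
import random

NUM_PLAYERS = 5          # Pac-Man plus four ghosts in the paper's formulation
MAX_DEPTH = 20           # limited search/rollout depth (illustrative)
UCT_C = 1.4

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}                    # action -> Node
        self.visits = 0
        self.totals = [0.0] * NUM_PLAYERS     # one reward sum per player (max-n)

    def uct_child(self):
        """Pick the child maximising UCT on the moving player's own reward."""
        player = self.state.player_to_move()
        def uct(child):
            if child.visits == 0:
                return float("inf")
            mean = child.totals[player] / child.visits
            return mean + UCT_C * math.sqrt(math.log(self.visits) / child.visits)
        return max(self.children.values(), key=uct)

def mcts(root_state, iterations, expand, rollout):
    """Depth-limited max-n MCTS.

    `expand(state)` -> list of (action, next_state);
    `rollout(state, remaining_depth)` -> reward vector of length NUM_PLAYERS.
    Both are assumed to be supplied by the game model.
    """
    root = Node(root_state)
    for _ in range(iterations):
        node, depth = root, 0
        # Selection: descend by UCT until a leaf or the depth limit.
        while node.children and depth < MAX_DEPTH:
            node = node.uct_child()
            depth += 1
        # Expansion: add all child states of the chosen leaf.
        if depth < MAX_DEPTH and not node.children:
            for action, nxt in expand(node.state):
                node.children[action] = Node(nxt, parent=node)
            if node.children:
                node = random.choice(list(node.children.values()))
                depth += 1
        # Simulation: depth-limited rollout returning one reward per player.
        rewards = rollout(node.state, MAX_DEPTH - depth)
        # Backpropagation: every ancestor accumulates the full reward vector.
        while node is not None:
            node.visits += 1
            node.totals = [t + r for t, r in zip(node.totals, rewards)]
            node = node.parent
    # Play the most-visited root action (assumes the root is non-terminal).
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

    Because the game has no natural end state, the rollout is cut off at a fixed depth and scored heuristically, which is what makes the approach feasible under near real-time constraints.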

    Evolutionary Machine Learning and Games

    Evolutionary machine learning (EML) has been applied to games in multiple ways and for multiple purposes. Importantly, AI research in games is not only about playing games; it is also about generating game content, modeling players, and many other applications, many of which pose interesting problems for EML. We structure this chapter on EML for games based on whether evolution is used to augment machine learning (ML) or ML is used to augment evolution. For completeness, we also briefly discuss the use of ML and evolution separately in games. Comment: 27 pages, 5 figures; part of the Evolutionary Machine Learning book (https://link.springer.com/book/10.1007/978-981-99-3814-8).

    Evolutionary Robotics
