
    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), offering various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future work. (Comment: 23 pages, 1 figure, 1 table)
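
    As an illustration of the embodied-evolution scheme summarised above, the sketch below (in Python) shows one minimal, assumed realisation: each robot evaluates its controller on-board and, on encountering another robot, the less fit one adopts a mutated copy of the fitter one's genome. The Robot class, genome size, toy fitness function and encounter rule are illustrative assumptions, not taken from any of the reviewed systems.

        import random

        GENOME_SIZE = 8      # assumed number of controller parameters
        MUTATION_STD = 0.1   # assumed per-gene Gaussian mutation strength

        class Robot:
            def __init__(self, rng):
                self.rng = rng
                self.genome = [rng.uniform(-1, 1) for _ in range(GENOME_SIZE)]
                self.fitness = 0.0

            def evaluate(self):
                # Stand-in for an on-board, lifetime fitness estimate
                # (e.g. energy gathered); here just a toy quadratic score.
                self.fitness = -sum(g * g for g in self.genome)

            def maybe_adopt(self, other):
                # Decentralised selection: on encounter, the less fit robot
                # adopts a mutated copy of the fitter robot's genome.
                if other.fitness > self.fitness:
                    self.genome = [g + self.rng.gauss(0, MUTATION_STD)
                                   for g in other.genome]

        def embodied_evolution(n_robots=20, steps=500, seed=0):
            rng = random.Random(seed)
            swarm = [Robot(rng) for _ in range(n_robots)]
            for _ in range(steps):
                for robot in swarm:
                    robot.evaluate()
                # A random pairing stands in for physical proximity in an arena.
                a, b = rng.sample(swarm, 2)
                a.maybe_adopt(b)
            return max(robot.fitness for robot in swarm)

        if __name__ == "__main__":
            print("best on-board fitness:", embodied_evolution())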

    Approximating n-player behavioural strategy Nash equilibria using coevolution

    Coevolutionary algorithms are plagued by a set of problems related to intransitivity that make it questionable what the end product of a coevolutionary run can achieve. With the introduction of solution concepts into coevolution, part of the issue was alleviated; however, efficiently representing and achieving game-theoretic solution concepts is still not a trivial task. In this paper we propose a coevolutionary algorithm that approximates behavioural-strategy Nash equilibria in n-player zero-sum games by exploiting the minimax solution concept. To support our case we provide a set of experiments on games with both known and unknown equilibria. In the case of known equilibria we confirm that our algorithm converges to the known solution, while in the case of unknown equilibria we observe steady progress towards Nash equilibrium. Copyright 2011 ACM.
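
    The sketch below (Python) illustrates the minimax solution concept in a coevolutionary setting, using two-player rock-paper-scissors, a zero-sum game whose known mixed equilibrium is (1/3, 1/3, 1/3), as a stand-in; the population sizes, mutation scheme and worst-case scoring rule are assumptions made for illustration and are not the algorithm of the paper.

        import random

        # Row player's payoff matrix for rock-paper-scissors (zero-sum).
        PAYOFF = [[0, -1, 1],
                  [1, 0, -1],
                  [-1, 1, 0]]

        def normalise(p):
            s = sum(p)
            return [x / s for x in p]

        def expected_payoff(p, q):
            return sum(p[i] * PAYOFF[i][j] * q[j]
                       for i in range(3) for j in range(3))

        def mutate(p, rng, scale=0.05):
            return normalise([max(1e-6, x + rng.gauss(0, scale)) for x in p])

        def coevolve(pop_size=20, generations=300, seed=0):
            rng = random.Random(seed)
            rows = [normalise([rng.random() + 1e-6 for _ in range(3)])
                    for _ in range(pop_size)]
            cols = [normalise([rng.random() + 1e-6 for _ in range(3)])
                    for _ in range(pop_size)]
            for _ in range(generations):
                # Minimax scoring: each strategy is judged by its worst case
                # against every member of the opposing population.
                row_fit = [min(expected_payoff(p, q) for q in cols) for p in rows]
                col_fit = [min(-expected_payoff(p, q) for p in rows) for q in cols]
                best_row = rows[row_fit.index(max(row_fit))]
                best_col = cols[col_fit.index(max(col_fit))]
                # Simple (1 + lambda)-style replacement around each side's best.
                rows = [best_row] + [mutate(best_row, rng) for _ in range(pop_size - 1)]
                cols = [best_col] + [mutate(best_col, rng) for _ in range(pop_size - 1)]
            return rows[0], cols[0]

        if __name__ == "__main__":
            row, col = coevolve()
            print("row strategy:", [round(x, 2) for x in row])
            print("col strategy:", [round(x, 2) for x in col])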

    Active causation and the origin of meaning

    Purpose and meaning are necessary concepts for understanding mind and culture, but they appear to be absent from the physical world and are not part of the explanatory framework of the natural sciences. Understanding how meaning (in the broad sense of the term) could arise from a physical world has proven to be a tough problem. The basic scheme of Darwinian evolution produces adaptations that represent only apparent ("as if") goals and meaning. Here I use evolutionary models to show that a slight, evolvable extension of the basic scheme is sufficient to produce genuine goals. The extension, targeted modulation of the mutation rate, is known to be generally present in biological cells, and it gives rise to two phenomena that are absent from the non-living world: intrinsic meaning and the ability to initiate goal-directed chains of causation (active causation). The extended scheme accomplishes this by utilizing randomness modulated by a feedback loop that is itself regulated by evolutionary pressure. The mechanism can be extended to behavioural variability as well, and thus shows how freedom of behaviour is possible. A further extension to communication suggests that the active exchange of intrinsic meaning between organisms may be the origin of consciousness, which in combination with active causation can provide a physical basis for the phenomenon of free will. (Comment: revised and extended)
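
    As a toy illustration of the mechanism described above, the Python sketch below assumes that the mutation rate applied to a trait is set by a mismatch feedback whose gain is itself an evolvable gene, so that selection can tune the feedback loop; the fitness function and all parameter values are illustrative, not taken from the paper's models.

        import random

        def fitness(trait, target):
            # Higher when the trait is closer to the environmental target.
            return -abs(trait - target)

        def evolve(pop_size=50, generations=300, seed=1):
            rng = random.Random(seed)
            target = 0.8
            # Genome = (trait, gain); 'gain' scales how strongly the mismatch
            # feedback raises the mutation rate applied to the trait itself.
            pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda ind: fitness(ind[0], target), reverse=True)
                parents = pop[: pop_size // 2]   # truncation selection
                offspring = []
                for trait, gain in parents:
                    # Feedback loop: the worse the current match, the larger
                    # the random variation applied to the trait.
                    mismatch = abs(trait - target)
                    trait_rate = 0.01 + gain * mismatch
                    child_trait = min(1.0, max(0.0, trait + rng.gauss(0, trait_rate)))
                    # The gain mutates slowly, so evolutionary pressure can tune it.
                    child_gain = min(1.0, max(0.0, gain + rng.gauss(0, 0.02)))
                    offspring.append((child_trait, child_gain))
                pop = parents + offspring
            return max(pop, key=lambda ind: fitness(ind[0], target))

        if __name__ == "__main__":
            trait, gain = evolve()
            print(f"best trait {trait:.3f}, evolved feedback gain {gain:.3f}")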

    Adaptation to criticality through organizational invariance in embodied agents

    Many biological and cognitive systems do not operate deep within one or another regime of activity. Instead, they are poised at critical points located at phase transitions in their parameter space. The pervasiveness of criticality suggests that there may be general principles inducing this behaviour, yet there is no well-founded theory for understanding how criticality is generated across a wide span of levels and contexts. In order to explore how criticality might emerge from general adaptive mechanisms, we propose a simple learning rule that maintains an internal organizational structure taken from a specific family of systems at criticality. We implement the mechanism in artificial embodied agents controlled by a neural network that maintains a correlation structure randomly sampled from an Ising model at critical temperature. Agents are evaluated in two classical reinforcement learning scenarios: the Mountain Car and the Acrobot double pendulum. In both cases the neural controller appears to reach a point of criticality, which coincides with a transition point between two regimes of the agent's behaviour. These results suggest that adaptation to criticality could be used as a general adaptive mechanism in some circumstances, providing an alternative explanation for the pervasive presence of criticality in biological and cognitive systems. (Comment: arXiv admin note: substantial text overlap with arXiv:1704.0525)
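
    The Python sketch below illustrates the general idea of adjusting the couplings of a small binary (Ising-like) network so that its observed pairwise correlations approach a reference correlation structure; in the paper that reference is sampled from an Ising model at critical temperature, whereas here a stand-in matrix is used, and the network size, Metropolis sampler and learning rate are assumptions for illustration.

        import math
        import random

        def sample_statistics(J, h, n_units, rng, samples=2000):
            """Estimate pairwise correlations <s_i s_j> of +/-1 units via Metropolis sampling."""
            s = [rng.choice([-1, 1]) for _ in range(n_units)]
            c = [[0.0] * n_units for _ in range(n_units)]
            for _ in range(samples):
                i = rng.randrange(n_units)
                # Energy change of flipping spin i under
                # E = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j (symmetric J).
                dE = 2 * s[i] * (h[i] + sum(J[i][j] * s[j]
                                            for j in range(n_units) if j != i))
                if dE <= 0 or rng.random() < math.exp(-dE):
                    s[i] = -s[i]
                for a in range(n_units):
                    for b in range(n_units):
                        c[a][b] += s[a] * s[b]
            return [[x / samples for x in row] for row in c]

        def adapt_to_reference(ref_c, n_units=6, steps=50, lr=0.05, seed=2):
            rng = random.Random(seed)
            J = [[0.0] * n_units for _ in range(n_units)]
            h = [0.0] * n_units
            for _ in range(steps):
                c = sample_statistics(J, h, n_units, rng)
                # Boltzmann-machine-style rule: move couplings toward the
                # reference correlations and away from the model's own.
                for i in range(n_units):
                    for j in range(n_units):
                        if i != j:
                            J[i][j] += lr * (ref_c[i][j] - c[i][j])
            return J

        if __name__ == "__main__":
            n = 6
            # Stand-in reference correlations; the paper samples these from a
            # critical-temperature Ising model instead.
            ref = [[1.0 if i == j else 0.3 for j in range(n)] for i in range(n)]
            J = adapt_to_reference(ref, n_units=n)
            print("learned coupling J[0][1] =", round(J[0][1], 3))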

    Annotated Bibliography: Anticipation
