1,507 research outputs found

    Aggregate Selection in Evolutionary Robotics

    Can the processes of natural evolution be mimicked to create robots or autonomous agents? This question embodies the most fundamental goals of evolutionary robotics (ER). ER is a field of research that explores the use of artificial evolution and evolutionary computing for learning control in autonomous robots, and in autonomous agents in general. In a typical ER experiment, robots, or more precisely their control systems, are evolved to perform a given task in which they must interact dynamically with their environment. Controllers compete in the environment and are selected and propagated based on their ability (or fitness) to perform the desired task. A key component of this process is the manner in which the fitness of the evolving controllers is measured. In ER, fitness is measured by a fitness function or objective function. This function applies given criteria to determine which robots or agents are better at performing the task for which they are being evolved. Fitness functions can introduce varying levels of a priori knowledge into evolving populations.
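    As an illustration of how the level of a priori knowledge can differ between fitness functions, the sketch below (Python, with hypothetical trial fields invented for this example) contrasts an aggregate objective that scores only overall task success with a tailored objective that rewards designer-chosen sub-behaviours.

        # Hypothetical illustration of two fitness functions that introduce
        # different levels of a priori knowledge into the evolving population.

        def aggregate_fitness(trial):
            # Aggregate selection: reward only overall task success, injecting
            # almost no designer knowledge about how the task should be solved.
            return 1.0 if trial["reached_goal"] else 0.0

        def tailored_fitness(trial):
            # Tailored objective: reward hand-picked sub-behaviours the designer
            # believes are useful, injecting far more a priori knowledge.
            return (0.5 * trial["distance_covered"]
                    + 0.3 * (1.0 - trial["collisions"] / max(trial["steps"], 1))
                    + 0.2 * trial["time_near_goal"])

        # Example trial record; the keys are assumptions made for this sketch.
        trial = {"reached_goal": False, "distance_covered": 0.8,
                 "collisions": 3, "steps": 200, "time_near_goal": 0.1}
        print(aggregate_fitness(trial), tailored_fitness(trial))

    Under the aggregate objective this unsuccessful trial scores zero, while the tailored objective still rewards the partial behaviours it was told to look for.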

    Evolutionary Robotics


    Evolution of Neuro-Controllers for Robots' Alignment using Local Communication

    In this paper, we use artificial evolution to design homogeneous neural network controllers for groups of robots required to align. Aligning refers to the process by which the robots come to head towards a common, arbitrary and autonomously chosen direction, starting from randomly chosen initial orientations. The cooperative interactions among the robots require local communication, which is physically implemented using infrared signalling. We study the performance of the evolved controllers, both in simulation and in reality, for different group sizes. In addition, we analyze the most successful communication strategy developed through artificial evolution.
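    A common way to score such an alignment task is the length of the robots' mean heading vector; the small sketch below illustrates how that measure behaves (an assumption made for illustration, not necessarily the fitness used in the paper).

        import math

        def alignment_fitness(headings_rad):
            # Length of the mean heading vector: 1.0 when all robots share a
            # direction, close to 0.0 for random orientations.
            n = len(headings_rad)
            sx = sum(math.cos(h) for h in headings_rad) / n
            sy = sum(math.sin(h) for h in headings_rad) / n
            return math.hypot(sx, sy)

        print(alignment_fitness([0.10, 0.00, -0.05, 0.12]))  # nearly aligned -> close to 1.0
        print(alignment_fitness([0.0, 1.57, 3.14, 4.71]))    # spread out -> close to 0.0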

    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five axes: the role NE is chosen to play in a game, the types of neural networks used, the way these networks are evolved, how fitness is determined, and what type of input the networks receive. The article also highlights important open research challenges in the field.
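    The core idea can be sketched in a few lines of Python (a generic example, not a specific method from the survey): the weights of a fixed-topology network form the genome, and an evolutionary algorithm keeps and mutates the weight vectors that achieve the best game score.

        import math
        import random

        # Generic neuroevolution sketch: evolve the weights of a small
        # fixed-topology network towards a higher game score.

        N_IN, N_HID, N_OUT = 4, 6, 2              # assumed network sizes
        N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

        def forward(weights, inputs):
            # Two-layer perceptron with tanh hidden units.
            w1, w2 = weights[:N_IN * N_HID], weights[N_IN * N_HID:]
            hidden = [math.tanh(sum(inputs[i] * w1[i * N_HID + h] for i in range(N_IN)))
                      for h in range(N_HID)]
            return [sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID))
                    for o in range(N_OUT)]

        def game_score(weights):
            # Toy stand-in for a game rollout: reward networks whose outputs on a
            # fixed observation are close to a target action.
            out = forward(weights, [0.5, -0.25, 1.0, 0.0])
            return -sum((o - t) ** 2 for o, t in zip(out, [1.0, -1.0]))

        population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                      for _ in range(30)]
        for generation in range(40):
            population.sort(key=game_score, reverse=True)
            elite = population[:10]               # keep the best third
            population = elite + [[w + random.gauss(0, 0.05) for w in random.choice(elite)]
                                  for _ in range(20)]
        print("best score:", round(max(game_score(g) for g in population), 4))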

    Abstracting Multidimensional Concepts for Multilevel Decision Making in Multirobot Systems

    Multirobot control architectures often require robotic tasks to be well defined before allocation. In complex missions, it is often difficult to decompose an objective into a set of well-defined tasks; human operators generate a simplified representation based on experience and estimation. The result is a set of robot roles that are not best suited to accomplishing those objectives. This thesis presents an alternative approach to generating multirobot control algorithms using task abstraction. By carefully analysing data recorded from similar systems, a multidimensional and multilevel representation of the mission can be abstracted, which can subsequently be converted into a robotic controller. This work, which focuses on controlling a team of robots playing the complex game of football, is divided into three sections. In the first section, we investigate the use of spatial structures in team games; experimental results show that cooperative teams beat groups of individuals when competing for space, and that controlling space is important in robot football. In the second section, we generate a multilevel representation of robot football based on spatial structures measured in recorded matches; by differentiating between spatial configurations appearing in desirable and undesirable situations, we can abstract a strategy composed of the more desirable structures. In the third section, five partial strategies are generated from the abstracted structures and a suitable controller is devised; a set of experiments shows the success of the method in reproducing those key structures in a multirobot system. Finally, we compile our methods into a formal architecture for task abstraction and control. The thesis concludes that generating multirobot control algorithms using task abstraction is appropriate for problems which are complex, weakly defined, multilevel, dynamic, competitive, unpredictable, and which display emergent properties.
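    The abstraction step in the second section can be illustrated with a small sketch (the configuration labels and counts are hypothetical, chosen only for illustration): configurations are scored by how much more often they appear in desirable situations than in undesirable ones, and the top-scoring structures form the abstracted strategy.

        from collections import Counter

        # Hypothetical observations of spatial configurations in recorded matches.
        desirable = ["triangle", "line", "triangle", "diamond", "triangle"]
        undesirable = ["line", "cluster", "line", "cluster"]

        good = Counter(desirable)
        bad = Counter(undesirable)

        def desirability(config):
            # Relative frequency difference; +1.0 means the configuration was only
            # ever seen in desirable situations.
            g = good[config] / max(sum(good.values()), 1)
            b = bad[config] / max(sum(bad.values()), 1)
            return g - b

        # Keep the most desirable structures as the abstracted strategy.
        strategy = sorted(set(desirable) | set(undesirable),
                          key=desirability, reverse=True)[:3]
        print(strategy)   # ['triangle', 'diamond', 'line'] for these counts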

    Neuro-Evolution for Emergent Specialization in Collective Behavior Systems

    Eiben, A.E. [Promotor]; Schut, M.C. [Copromotor]