    Evolving aggregation behaviors in a swarm of robots

    In this paper, we study aggregation in a swarm of simple robots, called s-bots, having the capability to self-organize and self-assemble to form a robotic system, called a swarm-bot. The aggregation process, observed in many biological systems, is of fundamental importance since it is the prerequisite for other forms of cooperation that involve self-organization and self-assembling. We consider the problem of designing the control system for the swarm-bot using artificial evolution. The results obtained in a simulated 3D environment are presented and analyzed. They show that artificial evolution, exploiting the complex interactions among s-bots and between s-bots and the environment, is able to produce simple but general solutions to the aggregation problem.
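
    The paper itself does not list code; the following is a minimal, hypothetical sketch of the kind of generational evolutionary loop the abstract describes. The genome layout, parameters, and the stand-in fitness function are illustrative assumptions; in the actual study the fitness would come from running the 3D swarm simulation.

        import random

        # Illustrative parameters, not taken from the paper.
        POP_SIZE, GENOME_LEN, GENERATIONS, MUT_STD = 30, 40, 50, 0.1

        def evaluate_aggregation(genome):
            # Stand-in fitness: the real measure would deploy `genome` as the
            # controller of every s-bot in the 3D simulation and reward a
            # compact final swarm (e.g. negative mean inter-robot distance).
            return -sum(g * g for g in genome)

        def mutate(genome):
            return [g + random.gauss(0.0, MUT_STD) for g in genome]

        population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for generation in range(GENERATIONS):
            population.sort(key=evaluate_aggregation, reverse=True)
            elite = population[:POP_SIZE // 5]          # keep the best 20%
            population = elite + [mutate(random.choice(elite))
                                  for _ in range(POP_SIZE - len(elite))]

        print(evaluate_aggregation(max(population, key=evaluate_aggregation)))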

    Learning Emergent Behavior in Robot Swarms with NEAT

    When researching robot swarms, many studies observe complex group behavior emerging from the individual agents' simple local actions. However, the task of learning an individual policy to produce a desired emergent behavior remains a challenging and largely unsolved problem. We present a method of training distributed robotic swarm algorithms to produce emergent behavior. Inspired by the biological evolution of emergent behavior in animals, we use an evolutionary algorithm to train a 'population' of individual behaviors to approximate a desired group behavior. We perform experiments using simulations of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics platform, conducted in the CoppeliaSim simulator. Additionally, we test on simulations of Anki Vector robots to demonstrate our algorithm's effectiveness across different modes of actuation. We evaluate our algorithm on tasks where a somewhat complex group behavior is required for success: an Area Coverage task, a Surround Target task, and a Wall Climb task. We compare behaviors evolved using our algorithm against 'designed policies', which we create in order to exhibit the emergent behaviors we desire.
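
    For illustration only, a group-level fitness of the kind this method optimises could be written as follows for the Area Coverage task; the function name, arena size, and scoring rule are assumptions rather than details from the paper.

        def coverage_fitness(trajectories, arena=10.0, cell=1.0):
            # Hypothetical stand-in for the group-level score: the fraction of
            # grid cells visited by any agent. `trajectories` holds a list of
            # (x, y) samples per agent.
            cells = set()
            for trajectory in trajectories:
                for x, y in trajectory:
                    cells.add((int(x // cell), int(y // cell)))
            return len(cells) / int(arena // cell) ** 2

        # Two agents sweeping different rows of a 10 m x 10 m arena.
        a = [(x * 0.5, 1.0) for x in range(20)]
        b = [(x * 0.5, 3.0) for x in range(20)]
        print(coverage_fitness([a, b]))   # -> 0.2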

    Generic Behaviour Similarity Measures for Evolutionary Swarm Robotics

    Novelty search has been shown to be a promising approach for the evolution of controllers in swarm robotics. In existing studies, however, the experimenter had to craft a domain-dependent behaviour similarity measure to use novelty search in swarm robotics applications. The reliance on hand-crafted similarity measures places an additional burden on the experimenter and introduces a bias in the evolutionary process. In this paper, we propose and compare two task-independent, generic behaviour similarity measures: combined state count and sampled average state. The proposed measures use the values of sensors and effectors recorded for each individual robot of the swarm. The characterisation of the group-level behaviour is then obtained by combining the sensor-effector values from all the robots. We evaluate the proposed measures in an aggregation task and in a resource sharing task. We show that the generic measures match the performance of domain-dependent measures in terms of solution quality. Our results indicate that the proposed generic measures operate as effective behaviour similarity measures, and that it is possible to leverage the benefits of novelty search without having to craft domain-specific similarity measures.
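
    As a hedged sketch of how the sampled average state measure could be computed (the paper's exact sampling scheme and normalisation may differ), each robot's sensor-effector vector is averaged over sampled timesteps and over the swarm, and novelty search then compares the resulting characterisations with a distance metric.

        import math

        def sampled_average_state(log, sample_every=10):
            # log[t][r] is the sensor-effector vector of robot r at timestep t.
            samples = log[::sample_every]
            dim = len(samples[0][0])
            acc, n = [0.0] * dim, 0
            for step in samples:
                for state in step:
                    for i, value in enumerate(state):
                        acc[i] += value
                    n += 1
            return [value / n for value in acc]

        def behaviour_distance(c1, c2):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

        # Toy example: 100 timesteps, 2 robots, 2 sensor-effector values each.
        log = [[[0.1, 0.9], [0.2, 0.8]] for _ in range(100)]
        print(sampled_average_state(log))   # -> approximately [0.15, 0.85]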

    Local ant system for allocating robot swarms to time-constrained tasks

    We propose a novel application of the Ant Colony Optimization algorithm to efficiently allocate a swarm of homogeneous robots to a set of tasks that need to be accomplished by specific deadlines. We exploit the local communication between robots to periodically evaluate the quality of the allocation solutions, and agents independently select among the high-quality alternatives. The evaluation is performed using pheromone trails to favor allocations that minimize the execution time of the tasks. Our approach is validated in both static and dynamic environments (in the latter, task availability changes over time) using different sets of physics-based simulations.
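
    A minimal sketch of the pheromone-driven choice each robot might make is given below; it is not the authors' implementation, and the deposit rule (more pheromone for faster completions) and parameter names are assumptions.

        import random

        def choose_task(tau):
            # Roulette-wheel selection: pick a task with probability
            # proportional to its pheromone level tau[task].
            r = random.uniform(0.0, sum(tau.values()))
            acc = 0.0
            for task, level in tau.items():
                acc += level
                if r <= acc:
                    return task
            return task  # floating-point fallback: last task

        def update_pheromone(tau, completion_times, rho=0.1):
            for task in tau:
                tau[task] *= (1.0 - rho)            # evaporation
            for task, seconds in completion_times.items():
                tau[task] += 1.0 / (1.0 + seconds)  # faster -> larger deposit

        tau = {"task_a": 1.0, "task_b": 1.0, "task_c": 1.0}
        print(choose_task(tau))
        update_pheromone(tau, {"task_a": 5.0, "task_b": 12.0})
        print(tau)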

    Using haptic feedback in human swarm interaction

    A swarm of robots is a large group of individual agents that autonomously coordinate via local control laws. Their emergent behavior allows simple robots to accomplish complex tasks. Since missions may have complex objectives that change dynamically due to environmental and mission changes, human control and influence over the swarm is needed. The field of Human Swarm Interaction (HSI) is young, with few user studies and even fewer papers focusing on giving non-visual feedback to the operator. The authors herein present a background of haptics in robotics and swarms, along with two studies that explore conditions under which haptic feedback may be useful in HSI. The overall goal of the studies is to explore the effectiveness of haptic feedback in the presence of other visual stimuli about the swarm system. The findings show that giving feedback about nearby obstacles through a haptic device can improve performance, and that combining obstacle-force feedback across the visual and haptic channels provides the best performance.
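
    One common way to render nearby obstacles as a force on a haptic device is a potential-field style repulsion; the sketch below is a generic illustration under that assumption, not the exact model used in the studies.

        def obstacle_force(centroid, obstacles, influence=2.0, gain=1.0):
            # Each obstacle within `influence` metres of the swarm centroid
            # pushes back along its bearing; the magnitude grows as the
            # distance shrinks and is zero at the edge of the range.
            fx = fy = 0.0
            cx, cy = centroid
            for ox, oy in obstacles:
                dx, dy = cx - ox, cy - oy
                d = (dx * dx + dy * dy) ** 0.5
                if 1e-6 < d < influence:
                    mag = gain * (1.0 / d - 1.0 / influence)
                    fx += mag * dx / d
                    fy += mag * dy / d
            return fx, fy

        # Obstacle 1 m to the right of the centroid -> force pushing left.
        print(obstacle_force((0.0, 0.0), [(1.0, 0.0), (0.0, 5.0)]))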

    Path Planning of Mobile Agents using AI Technique

    In this paper, we study coordinated motion in a swarm robotic system, called a swarm-bot. A swarm-bot is a self-assembling and self-organizing artifact composed of a swarm of s-bots, mobile robots with the ability to connect to and disconnect from each other. The swarm-bot concept is particularly suited for tasks that require all-terrain navigation abilities, such as space exploration or rescue in collapsed buildings. As a first step toward the development of more complex control strategies, we investigate the case in which a swarm-bot has to explore an arena while avoiding falling into holes. In such a scenario, individual s-bots have sensory–motor limitations that prevent them from navigating efficiently. These limitations can be overcome if the s-bots are made to cooperate. In particular, we exploit the s-bots' ability to physically connect to each other. In order to synthesize the s-bots' controller, we rely on artificial evolution, which we show to be a powerful tool for the production of simple and effective solutions to the hole avoidance task.
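
    Purely for illustration (the paper evolves the controller rather than hand-coding it), a single-layer controller of the kind such an evolved genome could encode might map an s-bot's ground sensors to wheel speeds as sketched below; the sensor layout and weights are hypothetical.

        def sbot_controller(ground_sensors, weights, bias):
            # ground_sensors: readings in [0, 1], 1 = hole under that sensor.
            # weights/bias play the role of the evolved genome.
            left = bias[0] + sum(w * s for w, s in zip(weights[0], ground_sensors))
            right = bias[1] + sum(w * s for w, s in zip(weights[1], ground_sensors))
            left = max(-1.0, min(1.0, left))     # clamp wheel speeds to [-1, 1]
            right = max(-1.0, min(1.0, right))
            return left, right

        # Hand-picked weights just to show the interface: a hole sensed by the
        # right-front sensor (index 2) slows the left wheel, steering the s-bot
        # away from the hole.
        weights = [[0.2, 0.0, -0.8, 0.0],   # left wheel
                   [0.0, -0.8, 0.2, 0.0]]   # right wheel
        print(sbot_controller([0.0, 0.0, 1.0, 0.0], weights, [1.0, 1.0]))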