
    Optimizing collective fieldtaxis of swarming agents through reinforcement learning

    Full text link
    Swarming of animal groups enthralls scientists in fields ranging from biology to physics to engineering. Complex swarming patterns often arise from simple interactions between individuals to the benefit of the collective whole. The existence and success of swarming, however, nontrivially depend on microscopic parameters governing the interactions. Here we show that a machine-learning technique can be employed to tune these underlying parameters and optimize the resulting performance. As a concrete example, we take an active matter model inspired by schools of golden shiners, which collectively conduct phototaxis. The problem of optimizing the phototaxis capability is then mapped to that of maximizing benefits in a continuum-armed bandit game. The latter problem accepts a simple reinforcement-learning algorithm, which can tune the continuous parameters of the model. This result suggests the utility of machine-learning methodology in swarm-robotics applications. Comment: 6 pages, 3 figures
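The continuum-armed bandit formulation above can be pictured with a minimal sketch: an agent repeatedly perturbs a continuous parameter and keeps any perturbation that improves the observed reward. The reward function, parameter bounds, and step size below are hypothetical stand-ins for the paper's phototaxis performance and model parameters, not its actual algorithm.

```python
import random

random.seed(0)  # fixed seed so this illustrative run is reproducible

def continuum_bandit(reward, low, high, rounds=200, noise=0.1):
    """Hill-climbing flavour of a continuum-armed bandit: perturb the
    current parameter and keep the move whenever the observed reward
    improves."""
    theta = (low + high) / 2.0
    best = reward(theta)
    for _ in range(rounds):
        # Gaussian proposal, clipped to the feasible parameter range
        cand = min(high, max(low, theta + random.gauss(0.0, noise)))
        r = reward(cand)
        if r > best:
            theta, best = cand, r
    return theta

# Hypothetical smooth reward peaked at 0.7, standing in for the
# swarm's measured phototaxis performance.
opt = continuum_bandit(lambda t: -(t - 0.7) ** 2, 0.0, 1.0)
```

In the paper's setting the reward would come from simulating the swarm and measuring how well it tracks the light, rather than from a closed-form function.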

    GUARDIANS final report

    Get PDF
    Emergencies in industrial warehouses are a major concern for firefighters. The large dimensions, together with dense smoke that drastically reduces visibility, represent major challenges. The Guardians robot swarm is designed to assist firefighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching for and assisting firefighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans without requiring communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides one of the means of locating the robots and humans. Thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the firefighters we explored how the robot swarm should feed information back to the human firefighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.

    UltraSwarm: A Further Step Towards a Flock of Miniature Helicopters

    Get PDF
    We describe further progress towards the development of a MAV (micro aerial vehicle) designed as an enabling tool to investigate aerial flocking. Our research focuses on the use of low-cost, off-the-shelf vehicles and sensors to enable fast prototyping and to reduce development costs. Details on the design of the embedded electronics and the modification of the chosen toy helicopter are presented, and the technique used for state estimation is described. The fusion of inertial data through an unscented Kalman filter is used to estimate the helicopter's state, and this forms the main input to the control system. Since no detailed dynamic model of the helicopter in use is available, a method is proposed for automated system identification and for subsequent controller design based on artificial evolution. Preliminary results obtained with a dynamic simulator of a helicopter are reported, along with some encouraging results for tackling the problem of flocking.
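The sensor-fusion step can be illustrated in heavily simplified form with a scalar linear Kalman filter. The paper uses an unscented filter on the full helicopter state, so the one-dimensional state, noise covariances, and measurement sequence below are illustrative assumptions only.

```python
def kalman_1d(zs, q=1e-3, r=0.1):
    """Minimal scalar Kalman filter: fuse noisy measurements zs into a
    smoothed state estimate. q is process-noise variance, r is
    measurement-noise variance."""
    x, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for z in zs:
        p += q               # predict: process noise inflates variance
        k = p / (p + r)      # Kalman gain balances prediction vs measurement
        x += k * (z - x)     # update toward the measurement
        p *= (1.0 - k)       # updated (reduced) variance
        out.append(x)
    return out

# Noisy readings of a quantity whose true value is 1.0.
est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

The unscented variant replaces the linear predict/update algebra with sigma-point propagation through the nonlinear helicopter dynamics, but the predict-then-correct structure is the same.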

    Human Swarm Interaction: An Experimental Study of Two Types of Interaction with Foraging Swarms

    Get PDF
    In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection and beacon control, made available to a human operator to control a foraging swarm of robots. Selection and beacon control differ with respect to their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the basic behaviors of the swarm. Selection control requires an active selection of groups of robots, while beacon control exerts an influence on nearby robots within a set range. Both control methods are implemented in a testbed in which operators solve an information foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities. The number of robots in the swarm ranges from 50 to 200. Operator performance for each control method is compared in a series of missions in environments ranging from obstacle-free to cluttered and structured. In addition, performance is compared to simple and advanced autonomous swarms. Thirty-two participants were recruited for the study. Autonomous swarm algorithms were tested in repeated simulations. Our results showed that selection control scales better to larger swarms and generally outperforms beacon control. Operators utilized different swarm behaviors with different frequency across control methods, suggesting an adaptation to different strategies induced by the choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles. Human-controlled swarms fell short of task-specific benchmarks under all conditions.
Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors.
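Beacon control as described above, an influence on nearby robots within a set range, can be sketched as a single position-update step. The radius, gain, and robot positions below are hypothetical, not the testbed's actual parameters.

```python
import math

def beacon_influence(robots, beacon, radius=5.0, gain=0.2):
    """Beacon-control sketch: robots within `radius` of the beacon take a
    step toward it; robots outside the range are unaffected."""
    out = []
    for (x, y) in robots:
        dx, dy = beacon[0] - x, beacon[1] - y
        if math.hypot(dx, dy) <= radius:        # inside the beacon's range
            x, y = x + gain * dx, y + gain * dy  # step toward the beacon
        out.append((x, y))
    return out

# One robot near the beacon is attracted; a distant robot is untouched.
moved = beacon_influence([(1.0, 1.0), (20.0, 0.0)], beacon=(0.0, 0.0))
```

Selection control would instead apply the operator's command only to an explicitly chosen subset of robots, regardless of their distance from any point.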

    Foraging swarms as Nash equilibria of dynamic games

    Get PDF
    The question of whether foraging swarms can form as a result of a noncooperative game played by individuals is shown here to have an affirmative answer. A dynamic game played by N agents in 1-D motion is introduced and models, for instance, a foraging ant colony. Each agent controls its velocity to minimize its total work done over a finite time interval. The game is shown to have a unique Nash equilibrium under two different foraging location specifications, and both equilibria display many features of foraging swarm behavior observed in biological swarms. Explicit expressions are derived for the pairwise distances between individuals of the swarm, the swarm size, and the swarm center location during foraging. © 2013 IEEE
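Best-response iteration is a standard way to compute a Nash equilibrium numerically: each agent in turn minimizes its own cost given the others' current choices, until no agent can improve. The toy static quadratic game below only illustrates that idea; the cost function, coupling constant, and positions are assumptions, not the paper's dynamic foraging game.

```python
def best_response_nash(n=3, target=0.0, coupling=0.5, iters=100):
    """Best-response iteration for a toy quadratic game in which agent i
    minimizes (x_i - target)**2 + coupling * sum_j (x_i - x_j)**2.
    Setting the derivative in x_i to zero gives the closed-form
    best response used in the loop."""
    x = [float(i) for i in range(n)]  # arbitrary initial positions
    for _ in range(iters):
        for i in range(n):
            others = [x[j] for j in range(n) if j != i]
            x[i] = (target + coupling * sum(others)) / (1.0 + coupling * (n - 1))
    return x

# All agents are drawn to the shared target, here 0, at equilibrium.
eq = best_response_nash()
```

In the paper the game is dynamic, so the equilibrium is a velocity trajectory over a time interval rather than a single position, but the fixed-point character of the Nash condition is the same.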

    Human-guided Swarms: Impedance Control-inspired Influence in Virtual Reality Environments

    Full text link
    Prior work in human-swarm interaction (HSI) has sought to guide swarm behavior towards established objectives, but may be unable to handle specific scenarios that require finer human supervision, variable autonomy, or application to large-scale swarms. In this paper, we present an approach that enables human supervisors to tune the level of swarm control and guide a large swarm using an assistive control mechanism that does not significantly restrict emergent swarm behaviors. We develop this approach in a virtual reality (VR) environment, using the HTC Vive and Unreal Engine 4 with the AirSim plugin. The novel combination of an impedance control-inspired influence mechanism and a VR test bed enables rapid design and test iterations to examine trade-offs between swarming behavior and macroscopic-scale human influence, while circumventing the flight duration limitations of battery-powered small unmanned aerial systems (sUAS). The impedance control-inspired mechanism was tested by a human supervisor who guided a virtual swarm of 16 sUAS agents. Each test involved moving the swarm's center of mass through narrow canyons that were not feasible for the swarm to traverse autonomously. Results demonstrate that integration of the influence mechanism enabled successful manipulation of the macro-scale behavior of the swarm towards task completion, while maintaining the innate swarming behavior. Comment: 11 pages, 5 figures, preprint
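An impedance-style influence can be pictured as a virtual spring-damper pulling the swarm's center of mass toward the operator's reference, so the operator shapes the macro-scale motion without commanding individual agents. The one-dimensional model and the gains below are illustrative assumptions, not the paper's controller.

```python
def impedance_step(pos, vel, ref, k=2.0, b=1.0, m=1.0, dt=0.01):
    """One semi-implicit Euler step of a virtual spring-damper acting on a
    point mass (standing in for the swarm's center of mass): stiffness k
    pulls toward the operator reference, damping b resists velocity."""
    force = k * (ref - pos) - b * vel
    vel = vel + (force / m) * dt   # update velocity first (semi-implicit)
    pos = pos + vel * dt           # then position, using the new velocity
    return pos, vel

# Drive the center of mass from rest at 0 toward an operator reference at 1.
p, v = 0.0, 0.0
for _ in range(2000):
    p, v = impedance_step(p, v, ref=1.0)
```

Raising the stiffness k tightens the operator's grip on the swarm, while lowering it leaves more room for the innate swarming behavior, which is the trade-off the paper's mechanism exposes as a tunable level of control.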