
    Human Swarm Interaction: An Experimental Study of Two Types of Interaction with Foraging Swarms

    In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection control and beacon control, made available to a human operator for controlling a foraging swarm of robots. The two methods differ in their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the swarm's basic behaviors: selection control requires actively selecting groups of robots, while beacon control influences all robots within a set range of a placed beacon. Both control methods are implemented in a testbed in which operators solve an information-foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities, and its size ranges from 50 to 200 robots. Operator performance with each control method is compared across a series of missions in environments ranging from obstacle-free to cluttered and structured with obstacles, and against simple and advanced autonomous swarms. Thirty-two participants were recruited for the study, and the autonomous swarm algorithms were tested in repeated simulations. Our results show that selection control scales better to larger swarms and generally outperforms beacon control. Operators used different swarm behaviors with different frequency across control methods, suggesting an adaptation of strategy induced by the choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles.
    Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors.
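The contrast between the two control methods can be sketched in a few lines: selection control acts on an explicitly chosen group of robots, while beacon control acts on whichever robots fall within range of a placed beacon. This is a minimal illustrative sketch; the data layout and function names are assumptions, not the paper's implementation.

```python
import math

def selection_control(robots, selected_ids, behavior):
    """Assign `behavior` only to an actively selected group of robots."""
    for r in robots:
        if r["id"] in selected_ids:
            r["behavior"] = behavior

def beacon_control(robots, beacon_pos, radius, behavior):
    """Assign `behavior` to every robot within `radius` of the beacon."""
    bx, by = beacon_pos
    for r in robots:
        if math.hypot(r["pos"][0] - bx, r["pos"][1] - by) <= radius:
            r["behavior"] = behavior

# Five robots spaced along the x-axis, all initially dispersing.
robots = [{"id": i, "pos": (float(i), 0.0), "behavior": "disperse"} for i in range(5)]
selection_control(robots, selected_ids={0, 1}, behavior="forage")
beacon_control(robots, beacon_pos=(4.0, 0.0), radius=1.5, behavior="home")
print([r["behavior"] for r in robots])
# robots 0 and 1 were selected; robots 3 and 4 lie within the beacon's range
```

The key difference the study measures falls out of the sketch: selection control needs the operator to keep re-selecting groups (intermittent influence), whereas a beacon keeps influencing whatever passes nearby (environmental influence).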

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements through cooperative control, learning, and adaptation. While both component areas, i.e., Robotics and WSN, are well known and well explored, a whole set of new opportunities and research directions at their intersection remains relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
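The robotic-router example above can be sketched minimally: relay robots use their controlled mobility to position themselves along the segment between sender and receiver, forming a temporary communication path. The even-spacing policy and the function name are illustrative assumptions, not a method from the survey.

```python
def place_routers(sender, receiver, n_routers):
    """Evenly space `n_routers` relay robots along the sender->receiver
    segment, so that each hop covers an equal share of the distance."""
    sx, sy = sender
    rx, ry = receiver
    return [
        (sx + (rx - sx) * k / (n_routers + 1),
         sy + (ry - sy) * k / (n_routers + 1))
        for k in range(1, n_routers + 1)
    ]

# Two relays between a sender at the origin and a receiver 9 m away
# split the path into three equal 3 m hops.
routers = place_routers(sender=(0.0, 0.0), receiver=(9.0, 0.0), n_routers=2)
print(routers)  # -> [(3.0, 0.0), (6.0, 0.0)]
```

A real RWSN would adapt these positions online to measured link quality rather than pure geometry; the sketch only shows the controlled-mobility idea.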

    Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems

    The majority of Artificial Neural Network (ANN) implementations in autonomous systems use a fixed, user-prescribed network topology, leading to sub-optimal performance and low portability. The existing NeuroEvolution of Augmenting Topologies (NEAT) paradigm offers a powerful alternative by allowing the network topology and the connection weights to be optimized simultaneously through an evolutionary process. However, most NEAT implementations consider only a single objective, and the question persists of how to tractably introduce topological diversification that mitigates overfitting to training scenarios. To address these gaps, this paper develops a multi-objective neuro-evolution algorithm. While adopting the basic elements of NEAT, important modifications are made to the selection, speciation, and mutation processes. With the backdrop of small-robot path-planning applications, an experience-gain criterion is derived to encapsulate the amount of diverse local environment encountered by the system. This criterion facilitates the evolution of genes that support exploration, thereby seeking to generalize from a smaller set of mission scenarios than is possible with performance maximization alone. The effectiveness of the single-objective (optimizing performance) and multi-objective (optimizing performance and experience gain) neuro-evolution approaches is evaluated on two different small-robot cases, with the ANNs obtained by multi-objective optimization observed to provide superior performance in unseen scenarios.
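The multi-objective selection step described above can be sketched as a non-dominated (Pareto) filter over the two objectives, performance and experience gain. This is a generic Pareto-front computation, not the paper's modified NEAT selection; the genome representation and fitness values are illustrative assumptions.

```python
def dominates(a, b):
    """True if fitness tuple `a` is at least as good as `b` in every
    objective and strictly better in at least one (both maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Keep the genomes not dominated by any other genome: the
    non-dominated set over (performance, experience_gain)."""
    return [p for p in population
            if not any(dominates(q["fitness"], p["fitness"])
                       for q in population if q is not p)]

pop = [
    {"id": "a", "fitness": (0.9, 0.2)},  # high performance, little experience
    {"id": "b", "fitness": (0.5, 0.8)},  # balanced
    {"id": "c", "fitness": (0.4, 0.7)},  # dominated by "b" in both objectives
]
front = pareto_front(pop)
print([p["id"] for p in front])  # -> ['a', 'b']
```

Keeping the whole front, rather than a single best genome, is what lets exploration-supporting topologies survive alongside pure performance maximizers.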

    Genetic stigmergy: Framework and applications

    Stigmergy has long been studied and recognized as an effective mechanism for self-organization among social insects. Through the use of chemical agents known as pheromones, insect colonies are capable of complex collective behavior often beyond the scope of an individual agent. In an effort to develop human-made systems with the same robustness, scientists have created artificial analogues of pheromone-based stigmergy, but these systems often suffer from scalability and complexity issues owing to the difficulty of mimicking the physics of pheromone diffusion. In this thesis, an alternative stigmergic framework called 'Genetic Stigmergy' is introduced. Using this framework, agents can indirectly share entire behavioral algorithms rather than pheromone traces that are limited in information content. The genetic constructs used in this framework open new avenues of research, including real-time evolution and adaptation of agents to complex environments. As a nascent test of its potential, experiments are performed using genetic stigmergy as an indirect communication framework for a simulated swarm of robots tasked with mapping an unknown environment. The robots share their behavioral genes through environmentally distributed Radio-Frequency Identification (RFID) cards. It was found that robots using a schema encouraging them to adopt lesser-used behavioral genes (corresponding to novelty in exploration strategies) can generally cover more of an environment than agents that randomly switch their genes, but only if the environmental complexity is not too high. While the performance improvement is not statistically significant enough to clearly establish genetic stigmergy as a superior alternative to pheromone-based artificial stigmergy, it is enough to warrant further research to develop its potential.
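The novelty-favoring schema described above, in which a robot adopts the lesser-used behavioral gene it finds on an environmental card, can be sketched as a simple read-write rule. The card representation, gene names, and tie-breaking policy are illustrative assumptions, not the thesis implementation.

```python
from collections import Counter

def read_and_adopt(card, own_gene):
    """Deposit the robot's own behavioral gene on the RFID card, then
    adopt the least-frequently-recorded gene there (ties broken by name),
    steering the swarm toward underused exploration strategies."""
    card.append(own_gene)
    counts = Counter(card)
    return min(counts, key=lambda g: (counts[g], g))

# A card already carrying mostly wall-following genes nudges the next
# wall-follower to switch to the rarer random-walk strategy.
card = ["wall_follow", "wall_follow", "random_walk"]
adopted = read_and_adopt(card, "wall_follow")
print(adopted)  # -> 'random_walk'
```

Because the card accumulates every visitor's gene, the "lesser-used" signal is local and indirect, which is what makes this stigmergic rather than direct robot-to-robot communication.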

    Optimizing collective fieldtaxis of swarming agents through reinforcement learning

    Swarming of animal groups enthralls scientists in fields ranging from biology to physics to engineering. Complex swarming patterns often arise from simple interactions between individuals, to the benefit of the collective whole. The existence and success of swarming, however, depend nontrivially on the microscopic parameters governing the interactions. Here we show that a machine-learning technique can be employed to tune these underlying parameters and optimize the resulting performance. As a concrete example, we take an active-matter model inspired by schools of golden shiners, which collectively conduct phototaxis. The problem of optimizing the phototaxis capability is then mapped to that of maximizing benefits in a continuum-armed bandit game. The latter problem admits a simple reinforcement-learning algorithm, which can tune the continuous parameters of the model. This result suggests the utility of machine-learning methodology in swarm-robotics applications.
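A continuum-armed bandit can be approximated by discretizing the parameter interval and running an epsilon-greedy policy over the resulting arms. This is a generic stand-in for the paper's reinforcement-learning algorithm, under the assumption of a single tunable parameter; the quadratic reward here is a toy proxy for phototaxis performance, not the active-matter model.

```python
import random

def continuum_bandit(reward, lo, hi, n_arms=11, rounds=2000, eps=0.1, seed=0):
    """Tune a continuous parameter in [lo, hi] by discretizing it into
    `n_arms` arms and running epsilon-greedy on the noisy `reward`."""
    rng = random.Random(seed)
    arms = [lo + (hi - lo) * i / (n_arms - 1) for i in range(n_arms)]
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(rounds):
        if rng.random() < eps:                  # explore a random arm
            i = rng.randrange(n_arms)
        else:                                   # exploit the best estimate
            i = max(range(n_arms), key=lambda j: means[j])
        r = reward(arms[i], rng)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
    return arms[max(range(n_arms), key=lambda j: means[j])]

# Toy noisy reward peaked at parameter value 0.7; the bandit should
# settle on an arm near the peak.
best = continuum_bandit(lambda x, rng: -(x - 0.7) ** 2 + rng.gauss(0.0, 0.01),
                        lo=0.0, hi=1.0)
print(best)
```

Finer discretizations (or adaptive "zooming" schemes) trade exploration cost against resolution; for smooth rewards like swarm performance as a function of interaction parameters, a coarse grid already locates the optimum's neighborhood.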