
    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements, through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are very well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields which are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
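
    The robotic-router example above lends itself to a compact illustration. The following is a minimal sketch, not taken from the survey: it assumes a simple disk communication model and hypothetical names (place_robotic_routers, comm_range), and merely places the fewest relay robots along the sender-receiver segment so that every hop stays within radio range.

        import math

        def place_robotic_routers(sender, receiver, comm_range):
            """Place the fewest relay robots needed so that every hop along the
            sender-receiver segment stays within comm_range (disk model)."""
            dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
            dist = math.hypot(dx, dy)
            if dist <= comm_range:
                return []                                # direct link already feasible
            hops = math.ceil(dist / comm_range)          # number of link segments
            return [(sender[0] + dx * k / hops, sender[1] + dy * k / hops)
                    for k in range(1, hops)]             # interior relay positions

        # Example: a 100 m gap with a 30 m radio range needs relays at 25, 50, and 75 m.
        print(place_robotic_routers((0.0, 0.0), (100.0, 0.0), 30.0))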

    Cost Adaptation for Robust Decentralized Swarm Behaviour

    Decentralized receding horizon control (D-RHC) provides a mechanism for coordination in multi-agent settings without a centralized command center. However, combining a set of different goals, costs, and constraints into an efficient optimization objective for D-RHC can be difficult. To address this problem, we use a meta-learning process -- cost adaptation -- which generates the optimization objective for D-RHC to solve based on a set of human-generated priors (cost and constraint functions) and an auxiliary heuristic. We use this adaptive D-RHC method for control of mesh-networked swarm agents. This formulation allows a wide range of tasks to be encoded and can account for network delays, heterogeneous capabilities, and increasingly large swarms through the adaptation mechanism. We leverage the Unity3D game engine to build a simulator capable of introducing artificial networking failures and delays in the swarm. Using the simulator, we validate our method on an example coordinated exploration task. We demonstrate that cost adaptation allows for more efficient and safer task completion under varying environmental conditions and increasingly large swarm sizes. We release our simulator and code to the community for future work. Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
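
    As a rough illustration of the idea described above (not the paper's actual formulation), the sketch below combines hypothetical human-provided cost priors into a single weighted objective and performs one decentralized receding-horizon step per agent; the priors, weights, and trajectory format are all assumptions made for the example.

        import numpy as np

        # Hypothetical cost priors over a candidate trajectory (a T x 2 array of positions).
        def goal_cost(traj, goal=np.array([10.0, 10.0])):
            return float(np.linalg.norm(traj[-1] - goal))        # endpoint distance to goal

        def separation_cost(traj, neighbors, d_safe=1.0):
            gaps = [np.linalg.norm(traj - n, axis=1) for n in neighbors]
            return float(sum(np.maximum(0.0, d_safe - g).sum() for g in gaps))  # close passes

        def adapted_objective(traj, neighbors, weights):
            """Cost-adaptation idea (sketch): combine the priors with weights that an
            auxiliary heuristic would produce; here the weights are simply supplied."""
            return weights[0] * goal_cost(traj) + weights[1] * separation_cost(traj, neighbors)

        def drhc_step(candidates, neighbors, weights):
            """One decentralized receding-horizon step: each agent scores its own candidate
            trajectories locally and executes only the first waypoint of the best one."""
            best = min(candidates, key=lambda t: adapted_objective(t, neighbors, weights))
            return best[0]

        # Toy usage: two straight-line candidates, one neighbour to avoid.
        cands = [np.linspace([0, 0], [1, 1], 5), np.linspace([0, 0], [1, -1], 5)]
        print(drhc_step(cands, neighbors=[np.array([0.5, -0.5])], weights=(1.0, 5.0)))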

    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future work. Comment: 23 pages, 1 figure, 1 table

    Q-Learning Adjusted Bio-Inspired Multi-Robot Coordination


    Swarm based Optimization Algorithms for Task Allocation in Multi Robot Systems: A Comprehensive Review

    Multi-robot systems (MRS) have gained significant attention due to their potential applications in various domains such as search and rescue, surveillance, and exploration. An essential aspect of MRS is task allocation, which involves distributing tasks among robots efficiently to achieve collective objectives. Swarm-based optimization algorithms have emerged as effective approaches for task allocation in MRS, leveraging principles inspired by natural swarms to coordinate the actions of multiple robots. This paper provides a comprehensive review of swarm-based optimization algorithms for task allocation in MRS, highlighting their principles, advantages, challenges, and applications. The discussion encompasses key algorithmic approaches, including ant colony optimization, particle swarm optimization, and artificial bee colony optimization, along with recent advancements and future research directions in this field.
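
    As a concrete illustration of the class of methods this review surveys, here is a minimal particle swarm optimization sketch for a toy task-allocation instance; the cost matrix, encoding (one continuous value per task, rounded to a robot index), and PSO coefficients are illustrative choices, not prescriptions from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        cost = rng.uniform(1.0, 10.0, size=(3, 8))   # travel cost: 3 robots x 8 tasks (toy data)

        def decode(x):
            """Map a continuous particle position to a task -> robot assignment."""
            return np.clip(np.rint(x), 0, cost.shape[0] - 1).astype(int)

        def fitness(x):
            """Total travel cost of the decoded assignment (lower is better)."""
            assign = decode(x)
            return float(cost[assign, np.arange(cost.shape[1])].sum())

        # Standard PSO loop: inertia + cognitive + social velocity update.
        n_particles, n_tasks = 20, cost.shape[1]
        pos = rng.uniform(0, cost.shape[0] - 1, size=(n_particles, n_tasks))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(100):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best assignment:", decode(gbest), "total cost:", round(fitness(gbest), 2))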

    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted highway-like environments. Specifically, it explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and aims at investigating the feasibility of adapting particle swarm motion to control families of autonomous vehicles. Specifically, it explores how particle swarm approaches can be augmented by setting safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. This concept leverages the integration of the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concept of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control based on three main rules, respectively based on speed, heading, and braking distance, to steer the vehicle and maintain separation/avoid collisions in families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that can serve as a back-up to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept’s implementation, it is expected that this work would be most relevant for highly automated Level 4 and Level 5 vehicles, capable of communicating with each other and operating in the presence of a ground control center that monitors the operations of the swarm.
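
    To make the rule-based idea concrete, the sketch below implements one illustrative longitudinal hazard level built from braking distance, with a threshold that triggers a braking action; the formulas, parameter values, and threshold are assumptions for the example, not the four hazard level functions drafted in the report.

        def braking_distance(speed, decel=6.0, reaction_time=1.0):
            """Stopping distance (m) for a given speed (m/s): reaction gap plus braking gap.
            decel and reaction_time are illustrative values, not figures from the report."""
            return speed * reaction_time + speed ** 2 / (2.0 * decel)

        def longitudinal_hazard(gap, speed):
            """Hazard level in [0, 1]: 0 when the gap is at least twice the braking
            distance, 1 when the gap has shrunk to the braking distance or less."""
            bd = braking_distance(speed)
            return min(1.0, max(0.0, (2.0 * bd - gap) / bd))

        def control_action(gap, speed, threshold=0.5):
            """Rule-based response (sketch): brake once the hazard crosses the threshold."""
            h = longitudinal_hazard(gap, speed)
            return ("brake" if h >= threshold else "hold speed"), h

        print(control_action(gap=40.0, speed=25.0))   # a short gap at highway speed -> brake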

    Spatio-Temporal Patterns act as Computational Mechanisms governing Emergent behavior in Robotic Swarms

    Our goal is to control a robotic swarm without removing its swarm-like nature; in other words, we aim to intrinsically control a robotic swarm’s emergent behavior. Past attempts at governing robotic swarms or their self-coordinating emergent behavior have proven ineffective, largely due to the swarm’s inherent randomness (making it difficult to predict) and utter simplicity (the robots lack a leader, any kind of centralized control, long-range communication, global knowledge, or complex internal models, and operate on only a couple of basic, reactive rules). The main problem is that emergence itself is not fully understood, despite being at the forefront of current research. Research into 1D and 2D Cellular Automata has uncovered a hidden computational layer which bridges the micro-macro gap (i.e., how individual behaviors at the micro-level influence the global behaviors on the macro-level). We hypothesize that similar embedded computational mechanisms lie at the heart of a robotic swarm’s emergent behavior. To test this hypothesis, we simulated robotic swarms (represented as both particles and dynamic networks) and designed local rules to induce various types of intelligent, emergent behavior (as well as genetic algorithms to evolve robotic swarms with emergent behaviors). Finally, we analyzed these robotic swarms and confirmed our hypothesis: analyzing their development and interactions over time revealed various forms of embedded spatio-temporal patterns which store, propagate, and parallel-process information across the swarm according to an internal, collision-based logic, explaining how simple robots are able to self-coordinate and allow global behaviors to emerge across the swarm.
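
    The cellular-automata result that motivates the hypothesis is easy to reproduce in miniature. The sketch below runs a 1D elementary cellular automaton (Rule 110, a standard example of collision-based computation) so that propagating spatio-temporal structures of the kind referred to above can be seen directly; it illustrates the underlying concept and is not the swarm model used in the paper.

        import numpy as np

        def step(cells, rule=110):
            """One update of a 1-D elementary cellular automaton with wrap-around.
            Rule 110's propagating 'particles' and their collisions are a classic
            example of an embedded computational layer."""
            left, right = np.roll(cells, 1), np.roll(cells, -1)
            idx = 4 * left + 2 * cells + right                      # neighbourhood pattern 0..7
            table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
            return table[idx]

        cells = np.zeros(64, dtype=np.uint8)
        cells[32] = 1                                               # single seed cell
        for _ in range(20):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells)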

    Human Swarm Interaction for Radiation Source Search and Localization

    This study shows that appropriate human interaction can help a swarm of robots achieve its goals more efficiently. A set of desirable features for human swarm interaction is identified based on the principles of swarm robotics. A human swarm interaction architecture is then proposed that has all of the desirable features. A swarm simulation environment is created that allows simulating swarm behavior in an indoor environment. The swarm behavior and the results of user interaction are studied by considering a radiation source search and localization application of the swarm. The particle swarm optimization algorithm is slightly modified to enable the swarm to autonomously explore the indoor environment for radiation source search and localization. The emergence of intelligence is observed, enabling the swarm to locate the radiation source completely on its own. The proposed human swarm interaction is then integrated in the simulation environment and user evaluation experiments are conducted. Participants are introduced to the interaction tool and asked to deploy the swarm to complete the missions. The performance comparison of the user-guided swarm to that of the autonomous swarm shows that the interaction interface is fairly easy to learn and that the user-guided swarm is more efficient in achieving the goals. The results clearly indicate that the proposed interaction helped the swarm achieve emergence.
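
    A minimal version of the kind of PSO-driven source search described above can be sketched as follows; the inverse-square intensity model, room size, and PSO coefficients are assumptions made for illustration rather than the study's actual modifications.

        import numpy as np

        rng = np.random.default_rng(1)
        source = np.array([7.0, 3.0])                      # hidden radiation source (illustrative)

        def intensity(p):
            """Simulated count rate: inverse-square falloff plus sensor noise."""
            d2 = np.sum((p - source) ** 2) + 1e-3
            return 100.0 / d2 + rng.normal(0.0, 0.1)

        # PSO-style search: each robot's 'fitness' is its own radiation reading,
        # so the swarm drifts toward regions of higher measured intensity.
        pos = rng.uniform(0.0, 10.0, size=(6, 2))           # 6 robots in a 10 m x 10 m room
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([intensity(p) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()

        for _ in range(60):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.6 * vel + 1.2 * r1 * (pbest - pos) + 1.2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 10.0)             # stay inside the room
            vals = np.array([intensity(p) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()

        print("estimated source location:", np.round(gbest, 2))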

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face a few open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy, and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, a reward function is designed that combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walk to control the communication between a team of robots with swarming behavior for musical creation.
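
    As a sketch of the reward structure named above (flocking maintenance, mutual reward, collision penalty), the function below computes a per-UAV reward from its own position, the leader's position, and its neighbours' positions; the specific terms, distances, and weights are illustrative assumptions, not the thesis's exact reward.

        import numpy as np

        def flocking_reward(pos_i, leader_pos, neighbor_pos, d_safe=1.0, d_flock=5.0):
            """Per-UAV reward (sketch) with the three terms named in the abstract.
            All thresholds and weights here are illustrative."""
            # flocking maintenance: penalize distance to the leader
            maintenance = -np.linalg.norm(pos_i - leader_pos)
            # mutual reward: bonus when every neighbour stays within flocking range
            gaps = np.linalg.norm(neighbor_pos - pos_i, axis=1)
            mutual = float(np.all(gaps < d_flock))
            # collision penalty: strong negative if any neighbour is dangerously close
            collision = -10.0 * float(np.any(gaps < d_safe))
            return 0.5 * maintenance + 1.0 * mutual + collision

        # Toy usage: one follower near the leader with two well-spaced neighbours.
        print(flocking_reward(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                              np.array([[3.0, 0.0], [0.0, 3.0]])))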