
    Swarm Robotics: An Extensive Research Review


    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. These operations, however, face several open challenges, including robust autonomy and adaptive coordination to the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two aspects of the formation control problem. On the one hand, we investigate how formation control can be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed, decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking controller for a UAV swarm based on deep reinforcement learning. We formulate flocking formation as a partially observable Markov decision process (POMDP) in a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. To avoid collisions among UAVs while guaranteeing flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty.
    We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to utilize swarms aesthetically. In particular, we explore particle swarm optimization (PSO) and random walk to control the communication within a team of robots with swarming behavior for musical creation.
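The three-part reward described above can be sketched as a single scalar function of the UAV positions. This is a minimal illustrative sketch, not the thesis's actual implementation: the weights, the reference spacing `d_ref`, and the safety distance `d_safe` are all assumed values, and the exact form of each term in the thesis may differ.

```python
import numpy as np

def flocking_reward(positions, leader_pos, d_ref=1.0, d_safe=0.3,
                    w_flock=1.0, w_mutual=0.5, w_collision=10.0):
    """Hypothetical reward with the three terms named in the abstract:
    global flocking maintenance (stay near the leader), a mutual reward
    (keep a reference spacing to neighbours), and a collision penalty
    below a safety distance. All weights/distances are assumptions."""
    positions = np.asarray(positions, dtype=float)
    leader_pos = np.asarray(leader_pos, dtype=float)
    n = len(positions)
    # Flocking maintenance: penalise mean distance to the leader.
    flock = -w_flock * np.linalg.norm(positions - leader_pos, axis=1).mean()
    # Mutual reward and collision penalty over all UAV pairs.
    mutual, collision = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            mutual -= w_mutual * abs(d - d_ref)  # deviation from spacing
            if d < d_safe:                       # too close: penalise
                collision -= w_collision
    return flock + mutual + collision
```

A well-spaced formation near the leader scores higher than a near-collision, which is the gradient signal the shared DDPG policy would learn from.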

    A Comprehensive Survey of Multiagent Reinforcement Learning


    A Deep Learning-Based Fault Diagnosis of Leader-Following Systems

    This paper develops a multisensor data fusion-based deep learning algorithm to locate and classify faults in a leader-following multiagent system. First, sequences of one-dimensional data collected from multiple sensors on the followers are fused into a two-dimensional image. Then, the image is used to train a convolutional neural network with a batch normalisation layer. The trained network can locate and classify three typical fault types: actuator limitation faults, sensor failures, and communication failures. Moreover, faults can exist in both leaders and followers, and faults in the leaders can be identified from follower data, indicating that the developed deep learning fault diagnosis is distributed. The effectiveness of the algorithm is demonstrated on Quanser Servo 2 rotating inverted pendulums with a leader-follower protocol. In the experiments, the fault classification accuracy reaches 98.9%.
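The fusion step in the abstract, stacking equal-length one-dimensional sensor sequences into a two-dimensional image, can be sketched as follows. The min-max normalisation to [0, 1] is an assumption for CNN input scaling; the paper does not state how (or whether) the image is normalised.

```python
import numpy as np

def fuse_sensors(sequences):
    """Stack equal-length 1-D sensor sequences row-wise into a 2-D
    grey-scale 'image' (one row per sensor, one column per time step)
    and min-max normalise it to [0, 1]. Normalisation is an assumed
    preprocessing choice, not taken from the paper."""
    img = np.vstack([np.asarray(s, dtype=float) for s in sequences])
    lo, hi = img.min(), img.max()
    if hi > lo:
        return (img - lo) / (hi - lo)
    return np.zeros_like(img)  # degenerate case: constant signals
```

The resulting image, with rows indexed by sensor and columns by time, is what the convolutional network would consume for fault classification.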