Adaptive and learning-based formation control of swarm robots
Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination under varying environments and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (e.g., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed, decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking controller for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty.
We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to utilize swarms aesthetically. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication within a team of robots with swarming behavior for musical creation.
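The reward design described in this abstract (flocking maintenance, a navigation term, and a collision penalty) can be sketched as code. The function name, weights, and distance thresholds below are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def flocking_reward(positions, leader_pos, d_safe=0.5, d_flock=2.0,
                    w_flock=1.0, w_nav=0.5, w_coll=10.0):
    """Sketch of a swarm reward: leader tracking, pairwise flocking
    maintenance, and a collision penalty. All constants are assumptions."""
    reward = 0.0
    # Navigation term: penalize mean distance of UAVs to the leader.
    reward -= w_nav * np.mean(np.linalg.norm(positions - leader_pos, axis=1))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < d_safe:
                reward -= w_coll  # collision penalty for unsafe proximity
            else:
                # Flocking maintenance: reward pairs near the desired spacing.
                reward += w_flock * np.exp(-(d - d_flock) ** 2)
    return reward
```

A shared policy trained with DDPG would receive this scalar at each step; the terms mirror the abstract's "flocking maintenance, mutual reward, and collision penalty" decomposition.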
Review of PID Controller Applications for UAVs
Unmanned Aerial Vehicles (UAVs) have gained widespread recognition for their
diverse applications, ranging from surveillance to delivery services. Among the
various control algorithms employed to stabilize and navigate UAVs, the
Proportional-Integral-Derivative (PID) controller stands out as a classical yet
robust solution. This review provides a comprehensive examination of PID
controller applications in the context of UAVs, addressing their fundamental
principles, dynamics modeling, stability control, navigation tasks, parameter
tuning methods, challenges, and future directions.
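As a minimal illustration of the classical PID law this review surveys, a discrete single-axis controller might look like the following sketch; the gains and the altitude-hold scenario are illustrative assumptions, not tuned values from the review:

```python
class PID:
    """Minimal discrete PID controller for one UAV axis (e.g. altitude)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt  # I term removes steady-state error
        derivative = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt  # D term damps overshoot
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Closing the loop on even a crude integrator plant shows the controller driving the state to the setpoint, which is the stabilization behavior the review attributes to PID on real UAV dynamics.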
A Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs
Multirotor UAVs are used for a wide spectrum of civilian and public domain
applications. Navigation controllers endowed with different attributes and
onboard sensor suites enable safe autonomous or semi-autonomous multirotor
flight, operation, and functionality under nominal and detrimental conditions
and external disturbances, even when flying in uncertain and dynamically
changing environments. During the last decade, given the
faster-than-exponential increase of available computational power, different
learning-based algorithms have been derived, implemented, and tested to
navigate and control, among other systems, multirotor UAVs. Learning algorithms
have been, and continue to be, used to derive data-driven models, to identify
parameters, to track objects, to develop navigation controllers, and to learn
the environment in which multirotors operate. Learning algorithms combined with
model-based control techniques have been proven beneficial when applied to
multirotors. This survey summarizes published research since 2015, dividing
algorithms, techniques, and methodologies into offline and online learning
categories, and then, further classifying them into machine learning, deep
learning, and reinforcement learning sub-categories. An integral focus of this
survey is online learning algorithms as applied to multirotors, with the aim of
registering which learning techniques are hard or almost-hard real-time
implementable, and of understanding what information is learned, why, how, and
how fast. The outcome of the survey offers a clear understanding of the recent
state of the art and of the kinds of learning-based algorithms that may be
implemented, tested, and executed in
real-time.
Comment: 26 pages, 6 figures, 4 tables, Survey Paper
Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions
In this paper, we present Neural-Swarm, a nonlinear, decentralized, stable controller for close-proximity flight of multirotor swarms. Close-proximity control is challenging due to the complex aerodynamic interaction effects between multirotors, such as downwash from higher vehicles onto lower ones. Conventional methods often fail to capture these interaction effects properly, resulting in controllers that must maintain large safety distances between vehicles and are therefore incapable of close-proximity flight. Our approach combines a nominal dynamics model with a regularized permutation-invariant deep neural network (DNN) that accurately learns the high-order multi-vehicle interactions. We design a stable nonlinear tracking controller using the learned model. Experimental results demonstrate that the proposed controller significantly outperforms a baseline nonlinear tracking controller, with up to four times smaller worst-case height tracking errors. We also empirically demonstrate the ability of our learned model to generalize to larger swarm sizes.
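The permutation-invariant DNN mentioned in this abstract can be sketched in the deep-sets style: encode each neighbor's relative state with a shared network, pool with a symmetric operation, then decode. The architecture, dimensions, and random placeholder weights below are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights; a real model would be trained on flight data.
W_phi = rng.normal(size=(6, 32))  # shared per-neighbor encoder
W_rho = rng.normal(size=(32, 3))  # decoder to a 3D disturbance force

def interaction_force(neighbor_states):
    """neighbor_states: (k, 6) relative positions/velocities of k neighbors.
    Returns a predicted 3D aerodynamic disturbance force."""
    h = np.tanh(neighbor_states @ W_phi)  # shared encoder applied per neighbor
    pooled = h.sum(axis=0)                # symmetric pooling => permutation invariance
    return pooled @ W_rho
```

Because the pooling is a sum, reordering the neighbors cannot change the output, which is the property that lets one learned model handle any neighbor labeling and, as the abstract notes, generalize across swarm sizes.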