
    Formation control of a group of micro aerial vehicles (MAVs)

    Coordinated motion of Unmanned Aerial Vehicles (UAVs) has attracted growing research interest over the last decade. In this paper we propose a coordination model that uses virtual springs and dampers to generate reference trajectories for a group of quadrotors. The virtual forces exerted on each vehicle are computed from projected distances between the quadrotors. Several coordinated task scenarios are presented, and the performance of the proposed method is verified in simulation.
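The virtual spring-damper idea above can be illustrated with a minimal sketch: each pair of vehicles is linked by a spring (pulling toward a rest length) and a damper (dissipating relative motion along the link). The function name, gains, and rest length here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def spring_damper_forces(pos, vel, rest_len, k=2.0, c=1.5):
    """Virtual force on each vehicle from pairwise spring-damper links.

    pos, vel: (n, 3) arrays of positions and velocities.
    rest_len: desired inter-vehicle distance (spring rest length).
    k, c: spring stiffness and damping gains (illustrative values).
    """
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]                   # vector to the other vehicle
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            u = d / dist                          # unit direction of the link
            rel_speed = np.dot(vel[j] - vel[i], u)  # projected relative speed
            # spring pulls toward the rest length, damper dissipates motion
            forces[i] += (k * (dist - rest_len) + c * rel_speed) * u
    return forces
```

Integrating these forces as accelerations yields reference trajectories that settle the group at the desired spacing.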

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control could be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy, and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty.
We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
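The reward structure described above (flocking maintenance, a navigation term toward the leader, and a collision penalty) can be sketched per UAV as follows. The weights, distance thresholds, and function name are assumptions for illustration, not the thesis's actual values.

```python
import numpy as np

def flocking_reward(pos_i, leader_pos, neighbor_pos,
                    d_safe=0.5, d_flock=2.0,
                    w_nav=1.0, w_flock=0.5, w_col=10.0):
    """Per-UAV reward combining navigation toward the leader, flocking
    maintenance (desired spacing), and a collision penalty.
    All weights and thresholds are illustrative assumptions."""
    # navigation term: negative distance to the leader
    r_nav = -np.linalg.norm(pos_i - leader_pos)
    r_flock = 0.0
    r_col = 0.0
    for p in neighbor_pos:
        d = np.linalg.norm(pos_i - p)
        r_flock -= abs(d - d_flock)   # penalize spacing deviation
        if d < d_safe:                # collision penalty inside safety radius
            r_col -= 1.0
    return w_nav * r_nav + w_flock * r_flock + w_col * r_col
```

In a DDPG setup with centralized training, each agent's critic would see the global state while the actor consumes only the local observations that enter this reward.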

    Comprehensive review on controller for leader-follower robotic system

    This paper presents a comprehensive review of leader-follower robotic systems. The aim of this paper is to identify and elaborate on current trends in swarm robotic systems, leader-follower control, and multi-agent systems. Another part of this review focuses on the trends in controllers utilized by previous researchers in leader-follower systems; the controllers most commonly applied are adaptive and nonlinear controllers. The paper also examines the subjects of study used during the research, which normally employ multi-robot, multi-agent, space-flight, reconfigurable, multi-legged, or unmanned systems. A further aspect of this paper concentrates on the topologies employed by researchers in their simulation and experimental studies.
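A common formulation in the leader-follower literature surveyed above has each follower track a desired offset expressed in the leader's body frame. A minimal proportional sketch of such a law (gains, offset, and function name are assumptions):

```python
import numpy as np

def follower_control(leader_pos, leader_heading, follower_pos,
                     offset=np.array([-1.0, 0.5]), k=1.0):
    """Proportional tracking of a desired offset in the leader's frame:
    a minimal planar leader-follower law. Gains and the offset are
    illustrative assumptions; real designs are adaptive or nonlinear."""
    c, s = np.cos(leader_heading), np.sin(leader_heading)
    R = np.array([[c, -s],
                  [s,  c]])                 # leader body -> world rotation
    target = leader_pos + R @ offset        # desired follower position
    return k * (target - follower_pos)      # velocity command
```

The adaptive and nonlinear controllers the review highlights typically replace the constant gain with state- or estimate-dependent terms.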

    Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions

    In this paper, we present Neural-Swarm, a nonlinear decentralized stable controller for close-proximity flight of multirotor swarms. Close-proximity control is challenging due to the complex aerodynamic interaction effects between multirotors, such as downwash from higher vehicles onto lower ones. Conventional methods often fail to properly capture these interaction effects, resulting in controllers that must maintain large safety distances between vehicles and are thus incapable of close-proximity flight. Our approach combines a nominal dynamics model with a regularized permutation-invariant Deep Neural Network (DNN) that accurately learns the high-order multi-vehicle interactions. We design a stable nonlinear tracking controller using the learned model. Experimental results demonstrate that the proposed controller significantly outperforms a baseline nonlinear tracking controller, achieving worst-case height-tracking errors up to four times smaller. We also empirically demonstrate the ability of our learned model to generalize to larger swarm sizes.
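Permutation invariance over neighbors is typically obtained with a deep-sets style architecture: a shared network embeds each neighbor's relative state, the embeddings are pooled by summation (order-independent), and a second network maps the pooled vector to a residual interaction force. The sketch below uses random placeholder weights and assumed dimensions purely to demonstrate the structure, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights: phi embeds a 6D relative state (position +
# velocity) into 16D; rho maps the pooled embedding to a 3D residual.
W_phi = rng.normal(size=(6, 16))
W_rho = rng.normal(size=(16, 3))

def interaction_force(rel_states):
    """Deep-sets sketch: rel_states is (n_neighbors, 6).
    Summation pooling makes the output invariant to neighbor order."""
    h = np.tanh(rel_states @ W_phi)   # shared per-neighbor embedding phi
    pooled = h.sum(axis=0)            # permutation-invariant pooling
    return np.tanh(pooled) @ W_rho    # rho: pooled vector -> residual force
```

Because the pooled sum ignores row order, the learned residual is the same however the neighbor set is enumerated, which also lets the model be evaluated on swarm sizes unseen in training.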

    Multi-Agent Reinforcement Learning for the Low-Level Control of a Quadrotor UAV

    This paper presents multi-agent reinforcement learning frameworks for the low-level control of a quadrotor UAV. While single-agent reinforcement learning has been successfully applied to quadrotors, training a single monolithic network is often data-intensive and time-consuming. To address this, we decompose the quadrotor dynamics into translational dynamics and yawing dynamics, and assign a reinforcement learning agent to each part for efficient training and improved performance. The proposed multi-agent framework for quadrotor low-level control, which leverages the underlying structure of the quadrotor dynamics, is a unique contribution. Further, we introduce regularization terms to mitigate steady-state errors and to avoid aggressive control inputs. Through benchmark studies with sim-to-sim transfer, it is illustrated that the proposed multi-agent reinforcement learning substantially improves the convergence rate of training and the stability of the controlled dynamics.
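The decomposition and regularization described above can be sketched as two per-agent reward functions: the translational agent is penalized on position error plus an integral-style term for steady-state error, and both agents are penalized on action differences to discourage aggressive inputs. The weights, signature, and exact term shapes are assumptions for illustration.

```python
import numpy as np

def agent_rewards(pos_err, yaw_err, action, prev_action,
                  err_int=0.0, w_int=0.1, w_smooth=0.05):
    """Split rewards for a translational agent and a yaw agent.

    pos_err: 3D position error; yaw_err: scalar yaw error.
    action, prev_action: 4D inputs (first 3 translational, last yaw).
    err_int: accumulated error, curbing steady-state offsets.
    All weights and the decomposition details are assumptions."""
    r_trans = (-np.linalg.norm(pos_err)
               - w_int * abs(err_int)
               - w_smooth * np.linalg.norm(action[:3] - prev_action[:3]))
    r_yaw = (-abs(yaw_err)
             - w_smooth * abs(action[3] - prev_action[3]))
    return r_trans, r_yaw
```

Training two smaller agents against these decoupled signals is what lets each network stay compact and converge faster than one monolithic policy.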

    Quadrotor team modeling and control for DLO transportation

    This thesis proposes a dynamic model for the transportation of deformable linear objects (DLOs) by a team of quadrotors. Three factors are involved in this model: a dynamic model of the linear object to be transported; a dynamic model of the quadrotor that accounts for the passive dynamics and the effects of the DLO; and a control strategy for efficient and robust transport. We distinguish two main tasks: (a) reaching a quasi-stationary configuration with an equivalent load distribution shared among all the robots, and (b) executing the transport of the whole system in a horizontal plane. The transport is carried out in a leader-follower column configuration, but the individual quadrotors must be sufficiently robust to cope with all the nonlinearities caused by the DLO dynamics and by external disturbances such as wind. The quadrotor controllers have been designed to ensure the stability of the system and its fast convergence. Real-time and non-real-time control strategies have been compared and tested to assess their quality and their ability to adapt to the system's changing dynamic conditions. The scalability of the system has also been studied.
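Deformable linear objects like the one transported above are commonly approximated as a lumped spring-mass chain: the cable is discretized into point masses joined by linear springs, and each quadrotor attaches at a node. A minimal sketch of the per-node forces, with illustrative stiffness, mass, and rest-length values (not the thesis's identified parameters):

```python
import numpy as np

def dlo_forces(nodes, k=50.0, rest=0.2, g=9.81, m=0.05):
    """Net force on each node of a lumped spring-mass DLO model.

    nodes: (n, 3) positions of the point masses along the cable.
    k, rest: segment stiffness and rest length; m: node mass.
    Parameter values are illustrative assumptions."""
    f = np.zeros_like(nodes)
    f[:, 2] -= m * g                          # gravity on every node
    for i in range(len(nodes) - 1):
        d = nodes[i + 1] - nodes[i]
        dist = np.linalg.norm(d)
        fs = k * (dist - rest) * d / dist     # spring force along the segment
        f[i] += fs                            # pulls node i toward node i+1
        f[i + 1] -= fs                        # equal and opposite reaction
    return f
```

In the quasi-stationary configuration of task (a), each quadrotor's thrust must cancel its share of these node forces, which is what defines the equivalent load distribution across the team.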