Reinforcement Learning for UAV Attitude Control
Autopilot systems are typically composed of an "inner loop" providing
stability and control, and an "outer loop" responsible for mission-level
objectives, e.g. way-point navigation. Autopilot systems for UAVs are
predominantly implemented using Proportional Integral Derivative (PID)
control systems, which have demonstrated exceptional performance in stable
environments. However, more sophisticated control is required to operate in
unpredictable and harsh environments. Intelligent flight control is an
active area of research addressing the limitations of PID control, most
recently through the use of reinforcement learning (RL), which has had
success in other applications such as robotics. However, previous work has
focused primarily on using RL for the mission-level controller. In this
work, we investigate the performance and accuracy of the inner control loop
providing attitude control when using intelligent flight control systems
trained with the state-of-the-art RL algorithms Deep Deterministic Policy
Gradient (DDPG), Trust Region Policy Optimization (TRPO), and Proximal
Policy Optimization (PPO). To investigate these unknowns we first developed
an open-source high-fidelity simulation environment for training a flight
controller for attitude control of a quadrotor through RL. We then use our
environment to compare the RL controllers' performance to that of a PID
controller to identify whether using RL is appropriate in high-precision,
time-critical flight control.
Comment: 13 pages, 9 figures
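As a concrete point of reference, the PID baseline such a comparison is made against can be sketched in a few lines; the gains, time step, and first-order roll-rate plant below are illustrative assumptions of mine, not the paper's simulation environment.

```python
class PID:
    """Textbook PID: proportional, integral, and derivative terms on the error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(setpoint=1.0, dt=0.01, steps=800, tau=0.2):
    """Track a roll-rate setpoint on a first-order plant: tau * rate' = -rate + u."""
    pid = PID(kp=2.0, ki=2.0, kd=0.05, dt=dt)
    rate = 0.0
    for _ in range(steps):
        u = pid.update(setpoint - rate)
        rate += dt * (-rate + u) / tau
    return rate
```

The integral term removes the steady-state error on this toy plant; an RL policy would replace `PID.update` with a learned mapping from attitude error to actuator commands.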
MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers, in line with the idea of cooperative
positioning for multiple robots [1]. To this end, we introduce a simple
optimization scheme that realizes visual mobile marker odometry via accurate
fixed marker-based camera positioning, and analyse the characteristics of
the errors inherent to the method compared to classical fixed marker-based
navigation and visual odometry. In addition, we provide a specific UAV-UGV
configuration that allows for continuous movement of the UAV without
stopping, and a minimal caterpillar-like configuration that works with a
single UGV. Finally, we present a real-world implementation and evaluation
of the proposed UAV-UGV configuration.
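The chaining at the heart of a marker-based odometry scheme can be illustrated with SE(2) pose composition; this is a hypothetical toy of mine, not the paper's estimator, and it ignores the marker-detection and error-analysis parts entirely.

```python
import math

def compose(a, b):
    """Composition a*b of two SE(2) poses, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def chain(relative_poses, start=(0.0, 0.0, 0.0)):
    """Accumulate relative marker-to-camera poses into a global trajectory."""
    pose = start
    trajectory = [pose]
    for rel in relative_poses:
        pose = compose(pose, rel)
        trajectory.append(pose)
    return trajectory
```

Each detection contributes one relative pose, so errors compound along the chain; that accumulation is why the error characteristics of such a method merit the analysis the abstract describes.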
Model-Free Control of an Unmanned Aircraft Quadcopter Type System
A model-free control algorithm based on the sliding mode control method for unmanned aircraft systems is proposed. The mathematical model of the dynamic system is not required to derive the sliding mode control law for this method. Knowledge of the system's order, state measurements, and the shape and bounds of the control input gain matrix is assumed in deriving the control law to track the required trajectories. Lyapunov's stability criterion is used to ensure closed-loop asymptotic stability, and the error estimate between previous control inputs is used to stabilize the system. A smoothing boundary layer is introduced to eliminate the high-frequency chattering of the control input and the higher-order states. The [B] matrix used in the model-free sliding mode control algorithm is derived for a quadcopter system. A quadcopter simulation is built in Simulink, the model-free sliding mode control algorithm is implemented, and a PID control law is used as a baseline; performance is compared via the RMS (root-mean-square) of the difference between the actual and desired states as well as average power usage. The model-free algorithm outperformed the PID controller in all simulations: with the quadcopter's original parameters, with double the mass, with double the moments of inertia, and with both the mass and the moments of inertia doubled, while keeping both controllers exactly the same for each simulation.
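The boundary-layer idea mentioned above (replacing the discontinuous sign of the sliding variable with a saturation inside a thin layer) can be sketched on a scalar toy system; the plant, gains, and layer width are my own illustrative choices, not the quadcopter law derived in the paper.

```python
import math

def sat(s, phi):
    """Saturation: linear inside the boundary layer |s| <= phi, sign outside."""
    return max(-1.0, min(1.0, s / phi))

def track(x0=2.0, target=0.0, k=5.0, phi=0.1, dt=0.001, steps=5000):
    """Drive x' = f(x) + u toward target; f is unknown to the controller."""
    x = x0
    for _ in range(steps):
        f = 0.5 * math.sin(x)   # unmodelled dynamics, bounded by |f| < k
        s = x - target          # sliding surface
        u = -k * sat(s, phi)    # boundary-layer sliding mode law
        x += dt * (f + u)
    return x
```

Outside the layer the control acts like pure sliding mode; inside it the control is proportional, which trades a small residual error for chatter-free inputs.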
Deep Continuum Deformation Coordination and Optimization with Safety Guarantees
In this paper, we develop and present a novel strategy for safe coordination
of a large-scale multi-agent team with "local deformation" capabilities.
Multi-agent coordination is defined by our proposed method as a multi-layer
deformation problem specified as a Deep Neural Network (DNN) optimization
problem. The proposed DNN consists of hidden layers, each of which contains
artificial neurons representing unique agents. Furthermore, the desired
deformation of the agents of each hidden layer is planned based on the
desired positions of the agents of the preceding hidden layer. In contrast
to available neural network learning problems, our proposed neural network
optimization receives time-invariant reference positions of the boundary
agents as inputs and trains the weights based on the desired trajectory of
the agent team configuration, where the weights are constrained by lower
and upper bounds to ensure inter-agent collision avoidance. We simulate a
large-scale quadcopter team tracking a desired elliptical trajectory and
provide the results to validate the proposed approach.
Comment: 6 pages, accepted at ACC 202
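One layer of such a deformation network can be sketched as agent positions propagated through bounded weights; the convex-combination constraint below is a simplifying assumption of mine standing in for the paper's lower and upper weight bounds.

```python
def layer_positions(prev, weights, lower=0.0, upper=1.0):
    """Desired position of each agent: a bounded-weight combination of the
    previous layer's agent positions, given as (x, y) pairs."""
    for row in weights:
        assert all(lower <= w <= upper for w in row), "weight bound violated"
        assert abs(sum(row) - 1.0) < 1e-9, "rows are convex combinations here"
    return [
        (
            sum(w * p[0] for w, p in zip(row, prev)),
            sum(w * p[1] for w, p in zip(row, prev)),
        )
        for row in weights
    ]
```

With convex weights, every planned position stays inside the hull of the previous layer's positions, which is one simple way bounded weights can keep a deformation from flinging agents outside the formation.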
Flat trajectory design and tracking with saturation guarantees: a nano-drone application
This paper deals with the problem of trajectory planning and tracking of a quadcopter system based on the property of differential flatness. First, B-spline characterisations of the flat output allow for optimal trajectory generation subject to waypoint, thrust, and angle constraints while minimising the trajectory length. Second, the proposed tracking control strategy combines feedback linearisation and nested saturation control via flatness. The control strategy provides bounded inputs (thrust, roll and pitch angles) while ensuring the overall stability of the tracking error dynamics. The control parameters are chosen based on information from the a priori given reference trajectory. Moreover, conditions for the existence of these parameters are presented. The effectiveness of the trajectory planning and tracking control design is analysed and validated through simulation and experimental results on a real nano-quadcopter platform, the Crazyflie 2.0.
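The appeal of a B-spline flat output is that waypoint and derivative constraints become constraints on control points. A minimal sketch, using a single cubic Bezier segment (the simplest clamped B-spline) rather than the paper's full optimisation:

```python
def bezier(p, t):
    """Evaluate a cubic Bezier curve with control points p at t in [0, 1],
    via de Casteljau's algorithm (repeated linear interpolation)."""
    q = list(p)
    for _ in range(3):
        q = [(1 - t) * q[i] + t * q[i + 1] for i in range(len(q) - 1)]
    return q[0]

def bezier_velocity(p, t):
    """Derivative of a cubic Bezier: a quadratic in the difference points."""
    d = [3 * (p[i + 1] - p[i]) for i in range(3)]
    return (1 - t) ** 2 * d[0] + 2 * (1 - t) * t * d[1] + t ** 2 * d[2]
```

With the first two and the last two control points coincident, the segment starts and ends at rest, which is one way endpoint velocity constraints are encoded on control points.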
A Zero-Shot Adaptive Quadcopter Controller
This paper proposes a universal adaptive controller for quadcopters, which
can be deployed zero-shot to quadcopters of very different mass, arm lengths
and motor constants, and also shows rapid adaptation to unknown disturbances
during runtime. The core algorithmic idea is to learn a single policy that can
adapt online at test time not only to the disturbances applied to the drone,
but also to the robot dynamics and hardware in the same framework. We achieve
this by training a neural network to estimate a latent representation of the
robot and environment parameters, which is used to condition the behaviour of
the controller, also represented as a neural network. We train both networks
exclusively in simulation with the goal of flying the quadcopters to goal
positions and avoiding crashes to the ground. We directly deploy the same
controller trained in the simulation without any modifications on two
quadcopters with differences in mass, inertia, and maximum motor speed of up to
4 times. In addition, we show rapid adaptation to sudden and large disturbances
(up to 35.7%) in the mass and inertia of the quadcopters. We perform an
extensive evaluation in both simulation and the physical world, where we
outperform a state-of-the-art learning-based adaptive controller and a
traditional PID controller specifically tuned to each platform individually.
Video results can be found at https://dz298.github.io/universal-drone-controller/.
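The estimate-then-condition loop can be caricatured with scalars: in this hypothetical sketch of mine the "latent" is just an estimated mass and both networks shrink to closed-form functions, standing in for the paper's estimator and controller networks.

```python
def estimate_mass(u_prev, accel, g=9.81):
    """Latent estimate from one observed transition: m = u / (a + g)."""
    return u_prev / (accel + g)

def controller(height_error, vel, m_est, g=9.81, kp=4.0, kd=3.0):
    """Thrust conditioned on the latent: feedback scaled by estimated mass."""
    return m_est * (g + kp * height_error - kd * vel)

def fly(mass, target=1.0, dt=0.01, steps=2000, g=9.81):
    """Hover loop: true mass unknown, latent adapted from observed transitions."""
    z, v, m_est = 0.0, 0.0, 1.0            # initial latent guess
    u = controller(target - z, v, m_est)
    for _ in range(steps):
        a = u / mass - g                   # true dynamics
        m_est = estimate_mass(u, a)        # adapt the latent online
        z += dt * v
        v += dt * a
        u = controller(target - z, v, m_est)
    return z
```

Because the latent is inferred from observed transitions rather than fixed at design time, the same controller code reaches the target on both the light and the heavy platform.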
Optimized Neural Networks-PID Controller with Wind Rejection Strategy for a Quad-Rotor
In this paper, a full approach to modeling and intelligent control of a four-rotor unmanned air vehicle (UAV), known as a quad-rotor aircraft, is presented. A PID online-optimized Neural Networks approach (PID-NN) is developed and applied to the control of the angular trajectories of a quad-rotor, whereas classical PID controllers are dedicated to position, altitude, and speed control. The goal of this work is to design a smart self-tuning PID controller for attitude angle control, based on neural networks, able to supervise the quad-rotor for optimized behavior while tracking a desired trajectory. Many challenges arise if the quad-rotor navigates in hostile environments presenting irregular disturbances, here in the form of wind modeled and applied to the overall system. The quad-rotor has to perform tasks quickly while ensuring stability and accuracy, and must react rapidly when making decisions in the face of disturbances. This technique offers advantages over conventional control methods such as the PID controller. Simulation results are founded on a comparative study between the PID and PID-NN controllers under wind disturbances, applied with several degrees of strength to test the quad-rotor's behavior and stability. The simulation results are satisfactory and demonstrate the effectiveness of the proposed PID-NN approach: the proposed controller has smaller errors than the PID controller, a better capability to reject disturbances, and has proven highly robust and efficient in the face of turbulence in the form of wind disturbances.
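The self-tuning ingredient can be illustrated with a gradient update on a single gain; this MIT-rule-style scalar sketch is my own stand-in for the paper's neural-network tuner, applied to a first-order discrete plant rather than a quad-rotor.

```python
def self_tuning_run(r=1.0, a=0.9, b=0.1, kp=0.5, eta=0.5, steps=20000):
    """Plant y[k+1] = a*y[k] + b*u[k]; adapt kp online to shrink the error."""
    y = 0.0
    e_prev = r - y
    for _ in range(steps):
        u = kp * (r - y)
        y = a * y + b * u
        e = r - y
        # de/dkp = -b * e_prev, so descending the gradient of e^2/2 gives:
        kp += eta * b * e * e_prev
        e_prev = e
    return kp, r - y

kp_final, e_final = self_tuning_run()
```

The gain grows while consecutive errors share a sign, steadily shrinking the steady-state error, which is the same shrink-the-tracking-error objective a learned tuner pursues with a richer parameterisation.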