Learning Unmanned Aerial Vehicle Control for Autonomous Target Following
While deep reinforcement learning (RL) methods have achieved unprecedented
successes in a range of challenging problems, their applicability has been
mainly limited to simulation or game domains due to the high sample complexity
of the trial-and-error learning process. However, real-world robotic
applications often need a data-efficient learning process with safety-critical
constraints. In this paper, we consider the challenging problem of learning
unmanned aerial vehicle (UAV) control for tracking a moving target. To acquire
a strategy that combines perception and control, we represent the policy by a
convolutional neural network. We develop a hierarchical approach that combines
a model-free policy gradient method with a conventional feedback
proportional-integral-derivative (PID) controller to enable stable learning
without catastrophic failure. The neural network is trained by a combination of
supervised learning from raw images and reinforcement learning from games of
self-play. We show that the proposed approach can learn a target following
policy in a simulator efficiently and the learned behavior can be successfully
transferred to the DJI quadrotor platform for real-world UAV control.
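The hierarchical idea above (a conventional PID loop providing stable baseline control while a learned policy adds a correction on top) can be sketched as follows. This is a minimal illustration, not the paper's method: the 1-D double-integrator dynamics, the gains, and the linear stand-in for the policy network are all assumptions.

```python
import numpy as np

class PID:
    """Conventional feedback PID controller (the stable baseline)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def learned_correction(err, vel, theta):
    """Stand-in for the policy network's residual command; in the paper
    this would be a CNN trained by policy gradient, not a linear map."""
    return theta[0] * err + theta[1] * vel

# Simulate a 1-D double integrator chasing a slowly moving target.
dt, pos, vel = 0.02, 0.0, 0.0
pid = PID(kp=4.0, ki=0.5, kd=1.5, dt=dt)
theta = np.array([0.2, -0.1])        # illustrative "learned" parameters
for t in range(1000):
    target = np.sin(0.01 * t)        # moving target to follow
    err = target - pos
    # Total command = stable PID baseline + learned residual correction.
    u = pid.step(err) + learned_correction(err, vel, theta)
    vel += u * dt                    # double-integrator dynamics
    pos += vel * dt

print(abs(target - pos))             # final tracking error stays small
```

Because the PID term alone already stabilizes the system, the learned residual can be explored safely, which is the point of the hierarchical combination: learning never has to start from an unstable controller.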
System Identification of multi-rotor UAVs using echo state networks
Controller design for aircraft with unusual configurations presents unique challenges, particularly in extracting a valid mathematical model of a multi-rotor UAV's (MRUAV's) behaviour. System identification is a collection of techniques for extracting an accurate mathematical model of a dynamic system from experimental input-output data. This can entail parameter identification only (known as grey-box modelling) or, more generally, full parameter and structural identification of the nonlinear mapping (known as black-box modelling). In this paper we propose a new method for black-box identification of the nonlinear dynamic model of a small MRUAV using Echo State Networks (ESNs), a novel approach to training Recurrent Neural Networks (RNNs).
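The key property of an ESN is that the recurrent reservoir weights are fixed and random; only a linear readout is trained, typically by ridge regression. A minimal sketch of black-box identification in this style, on a toy input-output dataset (the reservoir size, scaling, and the synthetic target system are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                     # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, (N, 1))       # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N, N))          # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect the states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy identification target: y[t] = 0.5*u[t-1] + 0.3*u[t-2].
u = rng.uniform(-1, 1, 500)
y = 0.5 * np.roll(u, 1) + 0.3 * np.roll(u, 2)
X = run_reservoir(u)

# Only the linear readout is trained, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

y_hat = X @ W_out
print(np.mean((y_hat[10:] - y[10:]) ** 2))  # small after a short washout
```

Training reduces to one linear solve, which is what makes ESNs attractive for identifying MRUAV dynamics compared with backpropagation through time on a full RNN.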
Deep Drone Racing: From Simulation to Reality with Domain Randomization
Dynamically changing environments, unreliable state estimation, and operation
under severe resource constraints are fundamental challenges that limit the
deployment of small autonomous drones. We address these challenges in the
context of autonomous, vision-based drone racing in dynamic environments. A
racing drone must traverse a track with possibly moving gates at high speed. We
enable this functionality by combining the performance of a state-of-the-art
planning and control system with the perceptual awareness of a convolutional
neural network (CNN). The resulting modular system is both platform- and
domain-independent: it is trained in simulation and deployed on a physical
quadrotor without any fine-tuning. The abundance of simulated data, generated
via domain randomization, makes our system robust to changes of illumination
and gate appearance. To the best of our knowledge, our approach is the first to
demonstrate zero-shot sim-to-real transfer on the task of agile drone flight.
We extensively test the precision and robustness of our system, both in
simulation and on a physical platform, and show significant improvements over
the state of the art.
Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
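The domain randomization described in this abstract amounts to drawing random illumination and appearance parameters for every simulated training sample, so the CNN cannot overfit to a single rendering. A toy sketch of such a data generator (the image size, parameter ranges, and square "gate" are assumptions for demonstration, not the paper's simulator):

```python
import numpy as np

rng = np.random.default_rng(42)

def randomized_sample():
    """Return one synthetic 64x64 RGB image with a randomized 'gate'."""
    illumination = rng.uniform(0.4, 1.6)        # global brightness factor
    gate_color = rng.uniform(0.0, 1.0, size=3)  # random gate appearance
    background = rng.uniform(0.0, 0.3, size=3)  # random background tint

    img = np.ones((64, 64, 3)) * background
    # Place a square "gate" at a random position; the CNN's training label
    # would be the gate's image coordinates.
    cx, cy = rng.integers(8, 56, size=2)
    img[cy - 6:cy + 6, cx - 6:cx + 6] = gate_color
    # Apply illumination and sensor noise, then clip to valid pixel range.
    img = np.clip(img * illumination + rng.normal(0, 0.02, img.shape), 0, 1)
    return img, (cx, cy)

batch = [randomized_sample() for _ in range(8)]
images = np.stack([img for img, _ in batch])
print(images.shape)  # batch of randomized training images
```

Because the randomized parameters span a wider range than any single real-world condition, a network trained on such data can transfer to the physical platform without fine-tuning, which is the zero-shot claim made above.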
Online Deep Learning for Improved Trajectory Tracking of Unmanned Aerial Vehicles Using Expert Knowledge
This work presents an online learning-based control method for improved
trajectory tracking of unmanned aerial vehicles using both deep learning and
expert knowledge. The proposed method does not require the exact model of the
system to be controlled, and it is robust against variations in system dynamics
as well as operational uncertainties. The learning is divided into two phases:
offline (pre-)training and online (post-)training. In the former, a
conventional controller performs a set of trajectories and, based on the
input-output dataset, the deep neural network (DNN)-based controller is
trained. In the latter, the trained DNN, which mimics the conventional
controller, controls the system. Unlike the existing papers in the literature,
the network is still being trained for different sets of trajectories which are
not used in the training phase of DNN. Thanks to the rule-base, which contains
the expert knowledge, the proposed framework learns the system dynamics and
operational uncertainties in real-time. The experimental results show that the
proposed online learning-based approach gives better trajectory tracking
performance when compared to the only offline trained network.Comment: corrected version accepted for ICRA 201
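The two-phase scheme above can be sketched with a linear model standing in for the DNN and a PD law standing in for the conventional controller; the gains, the simulated dynamics change, and the plain-SGD online update (the paper instead gates updates with an expert rule-base) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def conventional_controller(err, derr):
    """PD 'expert' control law the network learns to mimic (assumed gains)."""
    return 3.0 * err + 0.8 * derr

# --- Offline (pre-)training: fit the model to the expert's input-output data.
X = rng.uniform(-1, 1, (500, 2))            # recorded (err, derr) pairs
y = np.array([conventional_controller(e, de) for e, de in X])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # supervised fit mimics the expert
offline_err = np.mean((X @ w - y) ** 2)

# --- Online (post-)training: keep adapting on trajectories not seen offline.
lr = 0.05
for _ in range(200):
    x_t = rng.uniform(-2, 2, 2)             # new operating region
    # Simulated change in system dynamics the offline network never saw:
    target = conventional_controller(*x_t) + 0.5 * x_t[0]
    pred = w @ x_t
    w -= lr * (pred - target) * x_t         # gradient step on squared error

print(offline_err, w)   # w drifts to track the changed dynamics
```

The offline phase gives a controller that is safe to deploy from the first flight, while the online phase is what absorbs the variations in system dynamics and operational uncertainties that the abstract claims robustness against.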