33 research outputs found
Autonomous Drone Racing: Time-Optimal Spatial Iterative Learning Control within a Virtual Tube
It is often necessary for drones to complete delivery, photography, and
rescue in the shortest time to increase efficiency. Many autonomous drone races
provide platforms for developing algorithms that finish races as quickly as
possible. Unfortunately, existing methods often fail to keep training and
racing time short in drone racing competitions. This motivates us to develop a
highly efficient learning method that imitates the training experience of top
human racing pilots. Unlike traditional iterative learning control
methods for accurate tracking, the proposed approach iteratively learns a
trajectory online to finish the race as quickly as possible. Simulations and
experiments using different models show that the proposed approach is
model-free and is able to achieve the optimal result with low computation
requirements. Furthermore, this approach surpasses some state-of-the-art
methods in racing time on a benchmark drone racing platform. An experiment on a
real quadcopter is also performed to demonstrate its effectiveness.
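The traditional iterative learning control this work departs from repeats a task and corrects the input between trials. A minimal sketch of that generic trial-to-trial update (the scalar plant and learning gain below are illustrative assumptions, not the paper's model-free time-optimal scheme):

```python
import numpy as np

def ilc_update(u, error, learning_gain=0.5):
    """Generic ILC update: next trial's input is the current input
    plus a learning gain times this trial's tracking error."""
    return u + learning_gain * error

# Illustrative scalar plant (its dynamics are unknown to the learner).
def plant(u):
    return 0.8 * u

reference = np.ones(50)   # desired trajectory over 50 time steps
u = np.zeros(50)          # initial input
for _ in range(30):       # repeat the task, learning across trials
    error = reference - plant(u)
    u = ilc_update(u, error)

# After enough trials the tracking error shrinks toward zero.
print(float(np.max(np.abs(reference - plant(u)))))
```

Each trial scales the error by a constant factor (here 0.6), so the tracking error decays geometrically across repetitions.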
Autonomous Drone Racing with Deep Reinforcement Learning
In many robotic tasks, such as drone racing, the goal is to travel through a
set of waypoints as fast as possible. A key challenge for this task is planning
the minimum-time trajectory, which is typically solved by assuming perfect
knowledge of the waypoints to pass in advance. The resulting solutions are
either highly specialized for a single-track layout, or suboptimal due to
simplifying assumptions about the platform dynamics. In this work, a new
approach to minimum-time trajectory generation for quadrotors is presented.
Leveraging deep reinforcement learning and relative gate observations, this
approach can adaptively compute near-time-optimal trajectories for random track
layouts. Our method exhibits a significant computational advantage over
approaches based on trajectory optimization for non-trivial track
configurations. The proposed approach is evaluated on a set of race tracks in
simulation and the real world, achieving speeds of up to 17 m/s with a physical
quadrotor.
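The relative gate observation mentioned above amounts to expressing the next gate's position in the drone's body frame. A minimal sketch, assuming a yaw-only rotation and made-up coordinates (a real platform would use the full attitude):

```python
import numpy as np

def relative_gate_observation(drone_pos, drone_yaw, gate_pos):
    """Express the next gate's position in the drone's body frame
    by rotating the world-frame offset through the drone's yaw."""
    delta = np.asarray(gate_pos, dtype=float) - np.asarray(drone_pos, dtype=float)
    c, s = np.cos(-drone_yaw), np.sin(-drone_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ delta

# Drone at the origin yawed 90 degrees left; a gate 5 m north in the
# world frame then appears straight ahead in the body frame.
obs = relative_gate_observation([0, 0, 0], np.pi / 2, [0, 5, 0])
print(np.round(obs, 3))
```

Feeding the policy gate positions relative to the body frame, rather than absolute world coordinates, is what lets one trained policy generalize across random track layouts.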
A3C for drone autonomous driving using Airsim
In this work, we apply artificial intelligence to guide a drone to a given point autonomously. Unreal Engine creates a virtual environment in which the drone can fly, and the algorithm is trained by simulating the drone dynamics through the AirSim plugin. The implemented algorithm is Asynchronous Advantage Actor-Critic (A3C), which trains a neural network with fewer computing resources than standard reinforcement learning algorithms, which normally need costly GPUs. To demonstrate these advantages, several experiments are run using different numbers of parallel simulations (threads). The drone must reach a point randomly generated each episode. The reward, the value, and the advantage function are used to evaluate performance. As expected, these experiments show that a higher number of threads helps the learning process improve and become more stable. These learning results are of interest for optimizing computing resources in future applications.
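At the core of A3C, each worker thread turns its rollout into n-step returns and advantages before pushing gradients to the shared network. A minimal sketch of that computation, with illustrative rewards and value estimates (not figures from this work):

```python
def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Advantage A_t = R_t - V(s_t), where R_t is the discounted n-step
    return bootstrapped from the value estimate of the final state."""
    R = bootstrap_value
    advantages = []
    for r, v in zip(reversed(rewards), reversed(values)):
        R = r + gamma * R        # accumulate the discounted return backwards
        advantages.append(R - v)
    return list(reversed(advantages))

rewards = [0.0, 0.0, 1.0]   # e.g. reward only on reaching the target point
values = [0.2, 0.5, 0.8]    # critic's value estimates along the rollout
adv = n_step_advantages(rewards, values, bootstrap_value=0.0)
print([round(a, 3) for a in adv])
```

The actor's loss weights each action's log-probability by its advantage, while the critic is regressed toward the n-step returns; running many such workers in parallel threads is what replaces a large GPU batch.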
AutoTune: Controller Tuning for High-Speed Flight
Due to noisy actuation and external disturbances, tuning controllers for
high-speed flight is very challenging. In this paper, we ask the following
questions: How sensitive are controllers to tuning when tracking high-speed
maneuvers? What algorithms can we use to automatically tune them? To answer the
first question, we study the relationship between parameters and performance
and find that the faster the maneuver, the more sensitive a controller
becomes to its parameters. To answer the second question, we review existing
methods for controller tuning and discover that prior works often perform
poorly on the task of high-speed flight. Therefore, we propose AutoTune, a
sampling-based tuning algorithm specifically tailored to high-speed flight. In
contrast to previous work, our algorithm does not assume any prior knowledge of
the drone or its optimization function and can deal with the multi-modal
characteristics of the parameters' optimization space. We thoroughly evaluate
AutoTune both in simulation and in the physical world. In our experiments, we
outperform existing tuning algorithms by up to 90% in trajectory completion.
The resulting controllers are tested in the AirSim Game of Drones competition,
where we outperform the winner by up to 25% in lap time. Finally, we show that
AutoTune improves tracking error when flying a physical platform with respect
to parameters tuned by a human expert.
Comment: Video: https://youtu.be/m2q_y7C01So; Code: https://github.com/uzh-rpg/mh_autotun
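The abstract does not specify AutoTune's sampler, so the following is only a generic sketch of sampling-based tuning on a made-up multi-modal objective, where global random sampling avoids the local peak that pure hill-climbing would settle on:

```python
import random

def tune(score_fn, bounds, iterations=2000, seed=0):
    """Sampling-based tuning sketch: draw candidate gain vectors at random
    and keep the best-scoring one. Global sampling copes with a multi-modal
    objective; AutoTune's actual sampler is more sophisticated than this."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = [rng.uniform(lo, hi) for lo, hi in bounds]
        s = score_fn(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Illustrative multi-modal objective standing in for "trajectory
# completion": a local peak at (0, 0) and the global optimum at (2, 2).
def score(params):
    x, y = params
    return max(0.0, 1 - ((x - 2) ** 2 + (y - 2) ** 2)) \
        + 0.5 * max(0.0, 1 - (x ** 2 + y ** 2))

best, best_score = tune(score, bounds=[(-1.0, 3.0), (-1.0, 3.0)])
print([round(p, 2) for p in best], round(best_score, 2))
```

A gradient-free, sampling-based search like this needs no model of the drone or of the objective, which matches the abstract's claim of assuming no prior knowledge of either.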
Close Formation Flight Missions Using Vision-Based Position Detection System
In this thesis, a formation flight architecture is described, along with the implementation and evaluation of a state-of-the-art vision-based algorithm for estimating and tracking a leader vehicle within a close-formation configuration. A vision-based algorithm that uses the Darknet architecture, together with a formation flight control law to track and follow a leader with the desired clearance in the forward and lateral directions, is developed and implemented. The architecture runs on a flight computer that handles the processing in real time while integrating navigation sensors and a stereo camera. Numerical simulations, along with actual indoor and outdoor flight tests, demonstrate the detection and tracking capabilities, providing a low-cost, compact, and lightweight solution to the problem of estimating the location of other cooperative or non-cooperative flying vehicles within a formation architecture.
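Estimating the leader's position from a stereo camera typically reduces to pinhole triangulation from the detector's bounding-box center and the stereo disparity. A minimal sketch with assumed intrinsics and baseline (not the thesis's calibration values):

```python
def leader_position_from_stereo(u, v, disparity, fx=600.0, fy=600.0,
                                cx=320.0, cy=240.0, baseline=0.12):
    """Recover the leader's 3-D position in the camera frame from the
    bounding-box center (u, v) in pixels and the stereo disparity.
    Intrinsics (fx, fy, cx, cy) and baseline here are illustrative."""
    Z = fx * baseline / disparity   # depth from disparity
    X = (u - cx) * Z / fx           # lateral offset
    Y = (v - cy) * Z / fy           # vertical offset
    return X, Y, Z

# A leader detected at the image center with a 9-pixel disparity
# comes out roughly 8 m ahead under these assumed parameters.
X, Y, Z = leader_position_from_stereo(320.0, 240.0, 9.0)
print(round(X, 2), round(Y, 2), round(Z, 2))
```

The relative position recovered this way is what the formation control law consumes to hold the commanded forward and lateral clearance from the leader.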