Beauty and the Beast: Optimal Methods Meet Learning for Drone Racing
Autonomous micro aerial vehicles still struggle with fast and agile
maneuvers, dynamic environments, imperfect sensing, and state estimation drift.
Autonomous drone racing brings these challenges to the fore. Human pilots can
fly a previously unseen track after a handful of practice runs. In contrast,
state-of-the-art autonomous navigation algorithms require either a precise
metric map of the environment or a large amount of training data collected in
the track of interest. To bridge this gap, we propose an approach that can fly
a new track in a previously unseen environment without a precise map or
expensive data collection. Our approach represents the global track layout with
coarse gate locations, which can be easily estimated from a single
demonstration flight. At test time, a convolutional network predicts the poses
of the closest gates along with their uncertainty. These predictions are
incorporated by an extended Kalman filter to maintain optimal
maximum-a-posteriori estimates of gate locations. This allows the framework to
cope with misleading high-variance estimates that could stem from poor
observability or lack of visible gates. Given the estimated gate poses, we use
model predictive control to quickly and accurately navigate through the track.
We conduct extensive experiments in the physical world, demonstrating agile and
robust flight through complex and diverse previously-unseen race tracks. The
presented approach was used to win the IROS 2018 Autonomous Drone Race
Competition, outracing the second-place team by a factor of two.
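The abstract describes fusing the network's gate-pose predictions and their uncertainty in an extended Kalman filter. As a minimal sketch of that idea (not the paper's implementation), the update below maintains a single gate's 2D position; the state layout, dimensions, and variable names are illustrative assumptions:

```python
import numpy as np

def ekf_gate_update(mu, Sigma, z, R):
    """One EKF measurement update for a single gate's 2D position.

    mu:    (2,) current estimate of the gate position
    Sigma: (2,2) covariance of that estimate
    z:     (2,) gate position predicted by the network this frame
    R:     (2,2) measurement covariance from the network's predicted
           uncertainty; a high-variance prediction (poor observability,
           no visible gate) barely moves the estimate
    """
    H = np.eye(2)                       # gate position observed directly
    S = H @ Sigma @ H.T + R             # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu + K @ (z - H @ mu)
    Sigma_new = (np.eye(2) - K @ H) @ Sigma
    return mu_new, Sigma_new

# A confident measurement pulls the estimate; a noisy one is down-weighted.
mu, Sigma = np.array([2.0, 0.0]), np.eye(2)
mu_conf, _ = ekf_gate_update(mu, Sigma, np.array([2.5, 0.2]), 0.1 * np.eye(2))
mu_noisy, _ = ekf_gate_update(mu, Sigma, np.array([2.5, 0.2]), 100.0 * np.eye(2))
```

This is exactly the mechanism the abstract appeals to: the gain K scales inversely with the predicted measurement covariance R, so misleading high-variance predictions leave the maintained gate estimate nearly unchanged.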
Visual-Inertial Teach and Repeat for Aerial Inspection
Industrial facilities often require periodic visual
inspections of key installations. Examining these points of
interest is time-consuming, potentially hazardous, or requires
special equipment to reach. Micro Air Vehicles (MAVs) are ideal
platforms to automate this expensive and tedious task. In this
work, we present a novel system that enables a human operator
to teach a visual inspection task to an autonomous aerial vehicle
by simply demonstrating the task using a handheld device. To
enable robust operation in confined, GPS-denied environments,
the system employs the Google Tango visual-inertial mapping
framework [1] as the only source of pose estimates. In a
first step, the operator records the desired inspection path and
defines the inspection points. The mapping framework then
computes a feature-based localization map, which is shared with
the robot. After take-off, the robot estimates its pose based on
this map and plans a smooth trajectory through the waypoints
defined by the operator. Furthermore, the system can track the
poses of other robots or the operator, localized in the same map,
and follow them in real time while keeping a safe distance.
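The abstract says the robot plans a smooth trajectory through the operator-defined waypoints but does not specify the planner. As one hedged sketch, the taught waypoints can be time-parameterized under an assumed constant cruise speed and joined by a C2-continuous cubic spline; the function and parameter names here are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(waypoints, speed=1.0):
    """Fit a smooth, time-parameterized path through taught waypoints.

    waypoints: (N, 3) positions recorded during the demonstration
    speed:     assumed constant cruise speed [m/s] used to allocate
               time between consecutive waypoints (a simplification;
               the actual system may use a different time allocation)
    Returns a callable spline mapping time t to a (3,) position.
    """
    wp = np.asarray(waypoints, dtype=float)
    seg = np.linalg.norm(np.diff(wp, axis=0), axis=1)    # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg / speed)])  # knot times
    return CubicSpline(t, wp, axis=0)

# Example: a rectangular inspection path taught by the operator.
wps = [[0, 0, 1], [2, 0, 1], [2, 2, 1.5], [0, 2, 1]]
traj = smooth_path(wps)
```

The spline passes exactly through every inspection point while keeping velocity and acceleration continuous, which is the property a quadrotor controller needs from a reference trajectory.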