Beauty and the Beast: Optimal Methods Meet Learning for Drone Racing
Autonomous micro aerial vehicles still struggle with fast and agile
maneuvers, dynamic environments, imperfect sensing, and state estimation drift.
Autonomous drone racing brings these challenges to the fore. Human pilots can
fly a previously unseen track after a handful of practice runs. In contrast,
state-of-the-art autonomous navigation algorithms require either a precise
metric map of the environment or a large amount of training data collected in
the track of interest. To bridge this gap, we propose an approach that can fly
a new track in a previously unseen environment without a precise map or
expensive data collection. Our approach represents the global track layout with
coarse gate locations, which can be easily estimated from a single
demonstration flight. At test time, a convolutional network predicts the poses
of the closest gates along with their uncertainty. These predictions are
incorporated by an extended Kalman filter to maintain optimal
maximum-a-posteriori estimates of gate locations. This allows the framework to
cope with misleading high-variance estimates that could stem from poor
observability or lack of visible gates. Given the estimated gate poses, we use
model predictive control to quickly and accurately navigate through the track.
We conduct extensive experiments in the physical world, demonstrating agile and
robust flight through complex and diverse previously-unseen race tracks. The
presented approach was used to win the IROS 2018 Autonomous Drone Race
Competition, outracing the second-place team by a factor of two.
Comment: 6 pages (+1 references
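The core of the gate-estimation step is a variance-weighted Kalman update: the network's predicted uncertainty scales the gain, so high-variance predictions barely move the estimate. A minimal sketch of that idea, simplified to an independent per-axis update (the function name and the scalar simplification are illustrative, not the paper's implementation):

```python
import numpy as np

def ekf_gate_update(mean, var, z, r):
    """One Kalman update of a gate-position estimate (per axis).

    mean, var : prior gate position and its variance
    z, r      : network-predicted position and predicted variance
    A large r down-weights the measurement, so misleading
    high-variance detections barely change the estimate.
    """
    k = var / (var + r)                 # Kalman gain
    mean_new = mean + k * (z - mean)    # pull toward measurement
    var_new = (1.0 - k) * var           # posterior uncertainty shrinks
    return mean_new, var_new

# A confident measurement (r = 0.01) pulls the estimate strongly...
m, v = ekf_gate_update(np.array([0.0]), np.array([1.0]),
                       np.array([1.0]), np.array([0.01]))
# ...while a high-variance one (r = 100) barely moves it.
m2, v2 = ekf_gate_update(np.array([0.0]), np.array([1.0]),
                         np.array([1.0]), np.array([100.0]))
```

This is the mechanism that lets the filter "cope with misleading high-variance estimates": rejection falls out of the gain, with no hand-tuned gating needed.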
Cerberus: Low-Drift Visual-Inertial-Leg Odometry For Agile Locomotion
We present an open-source Visual-Inertial-Leg Odometry (VILO) state
estimation solution, Cerberus, for legged robots that estimates position
precisely on various terrains in real time using a set of standard sensors,
including stereo cameras, IMU, joint encoders, and contact sensors. In addition
to estimating robot states, we also perform online kinematic parameter
calibration and contact outlier rejection to substantially reduce position
drift. Hardware experiments in various indoor and outdoor environments validate
that calibrating kinematic parameters within Cerberus reduces estimation
drift to below 1% during long-distance, high-speed locomotion. This drift is
lower than that reported in the literature for any other state estimation
method using the same set of sensors. Moreover, our state estimator performs well
even when the robot is experiencing large impacts and camera occlusion. The
implementation of the state estimator, along with the datasets used to compute
our results, is available at https://github.com/ShuoYangRobotics/Cerberus.
Comment: 7 pages, 6 figures, submitted to IEEE ICRA 202
LiDAR Enhanced Structure-from-Motion
Although Structure-from-Motion (SfM) as a maturing technique has been widely
used in many applications, state-of-the-art SfM algorithms are still not robust
enough in certain situations. For example, images for inspection purposes are
often taken at close range to capture detailed textures, which results in
less overlap between images and thus degrades the accuracy of estimated motion.
In this paper, we propose a LiDAR-enhanced SfM pipeline that jointly processes
data from a rotating LiDAR and a stereo camera pair to estimate sensor motions.
We show that incorporating LiDAR helps to effectively reject falsely matched
images and significantly improve the model consistency in large-scale
environments. Experiments are conducted in different environments to test the
performance of the proposed pipeline, and comparison results against
state-of-the-art SfM algorithms are reported.
Comment: 6 pages plus references. Work has been submitted to ICRA 202
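One way LiDAR can reject falsely matched image pairs is a consistency check: the relative pose implied by image feature matching should agree with the relative pose from LiDAR. A minimal sketch of such a check, with the function name, thresholds, and exact test being assumptions rather than the paper's actual criterion:

```python
import numpy as np

def consistent_with_lidar(R_img, t_img, R_lidar, t_lidar,
                          rot_tol_deg=5.0, trans_tol=0.2):
    """Accept an image-pair match only if its relative pose agrees
    with the LiDAR-derived relative pose (illustrative check)."""
    dR = R_img.T @ R_lidar                       # rotation discrepancy
    angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0,
                                         -1.0, 1.0)))
    return angle < rot_tol_deg and np.linalg.norm(t_img - t_lidar) < trans_tol

# Identical poses pass the check.
ok = consistent_with_lidar(np.eye(3), np.zeros(3), np.eye(3), np.zeros(3))
```

A falsely matched pair (e.g. two visually similar but distinct facades) typically implies a relative rotation or translation wildly inconsistent with the LiDAR odometry and is discarded before bundle adjustment.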
Dense RGB-D-Inertial SLAM with Map Deformations
While dense visual SLAM methods are capable of estimating dense
reconstructions of the environment, they suffer from a lack of robustness in
their tracking step, especially when the optimisation is poorly initialised.
Sparse visual SLAM systems have attained high levels of accuracy and robustness
through the inclusion of inertial measurements in a tightly-coupled fusion.
Inspired by this performance, we propose the first tightly-coupled dense
RGB-D-inertial SLAM system.
Our system has real-time capability while running on a GPU. It jointly
optimises for the camera pose, velocity, IMU biases and gravity direction while
building up a globally consistent, fully dense surfel-based 3D reconstruction
of the environment. Through a series of experiments on both synthetic and real
world datasets, we show that our dense visual-inertial SLAM system is more
robust to fast motions and periods of low texture and low geometric variation
than a related RGB-D-only SLAM system.
Comment: Accepted at IROS 2017; supplementary video available at
https://youtu.be/-gUdQ0cxDh
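Tightly-coupled inertial fusion means IMU terms enter the same optimisation as the visual ones. A minimal sketch of one such term, a velocity residual from integrating bias-corrected accelerometer samples (real systems, including the one above, use on-manifold IMU preintegration; the flat-world Euler integration and function name here are illustrative simplifications):

```python
import numpy as np

def imu_velocity_residual(v_i, v_j, R_i, acc_meas, bias, g, dt):
    """Difference between the estimated velocity change (v_j - v_i)
    and the change predicted by integrating accelerometer samples.

    v_i, v_j : world-frame velocities at the two keyframes
    R_i      : world-from-body rotation at the first keyframe
    acc_meas : list of body-frame accelerometer samples
    bias     : accelerometer bias estimate
    g        : gravity vector, e.g. [0, 0, -9.81]
    """
    dv = np.zeros(3)
    for a in acc_meas:
        dv += (R_i @ (a - bias) + g) * dt   # bias-corrected, gravity-added
    return (v_j - v_i) - dv

# A hovering sensor measuring exactly +1g upward yields a zero residual.
g = np.array([0.0, 0.0, -9.81])
acc = [np.array([0.0, 0.0, 9.81])] * 100
r = imu_velocity_residual(np.zeros(3), np.zeros(3), np.eye(3),
                          acc, np.zeros(3), g, 0.01)
```

Jointly minimising such residuals alongside dense photometric and depth terms is what keeps tracking stable through fast motion and texture-poor stretches, and it is also how the biases and gravity direction become observable.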