Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics
Deep Drone Racing: From Simulation to Reality with Domain Randomization
Dynamically changing environments, unreliable state estimation, and operation
under severe resource constraints are fundamental challenges that limit the
deployment of small autonomous drones. We address these challenges in the
context of autonomous, vision-based drone racing in dynamic environments. A
racing drone must traverse a track with possibly moving gates at high speed. We
enable this functionality by combining the performance of a state-of-the-art
planning and control system with the perceptual awareness of a convolutional
neural network (CNN). The resulting modular system is both platform- and
domain-independent: it is trained in simulation and deployed on a physical
quadrotor without any fine-tuning. The abundance of simulated data, generated
via domain randomization, makes our system robust to changes of illumination
and gate appearance. To the best of our knowledge, our approach is the first to
demonstrate zero-shot sim-to-real transfer on the task of agile drone flight.
We extensively test the precision and robustness of our system, both in
simulation and on a physical platform, and show significant improvements over
the state of the art.
Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
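As a rough sketch of the domain-randomization idea described above (not the authors' actual training pipeline; all parameter names and ranges are illustrative assumptions), each simulated training episode can be rendered under independently sampled illumination, gate appearance, and background settings:

import random

def sample_randomized_sim_params():
    """Sample one set of randomized rendering parameters for a simulated episode.

    Randomizing illumination, gate appearance, and background across episodes
    keeps the CNN from overfitting to a single simulated look; the parameter
    names and ranges below are assumptions, not the paper's values.
    """
    return {
        "light_intensity": random.uniform(0.3, 1.5),      # dim to bright scenes
        "light_azimuth_deg": random.uniform(0.0, 360.0),   # light direction
        "gate_texture_id": random.randrange(20),           # swap gate textures
        "gate_rgb": tuple(random.uniform(0.0, 1.0) for _ in range(3)),  # gate tint
        "background_scene_id": random.randrange(10),       # vary the backdrop
    }

# One independently sampled parameter set per simulated rollout.
for episode in range(3):
    print(episode, sample_randomized_sim_params())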
Robust Active Visual Perching with Quadrotors on Inclined Surfaces
Autonomous Micro Aerial Vehicles are deployed for a variety of tasks, including
surveillance and monitoring. Perching and staring allow the vehicle to monitor
targets without flying, saving battery power and increasing the overall mission
time without the need to frequently replace batteries. This paper addresses the
Active Visual Perching (AVP) control problem to autonomously perch on inclined
surfaces up to 90°. Our approach generates dynamically feasible
trajectories to navigate and perch on a desired target location, while taking
into account actuator and Field of View (FoV) constraints. By replanning in
mid-flight, we take advantage of more accurate target localization, increasing
the perching maneuver's robustness to target localization or control errors. We
leverage the Karush-Kuhn-Tucker (KKT) conditions to identify the compatibility
between planning objectives and the visual sensing constraint during the
planned maneuver. Furthermore, we experimentally identify the corresponding
boundary conditions that maximize the spatio-temporal target visibility during
the perching maneuver. The proposed approach works on board in real time under
significant computational constraints, relying exclusively on cameras and an
Inertial Measurement Unit (IMU). Experimental results validate the proposed
approach and show a higher success rate as well as increased target
interception precision and accuracy with respect to a one-shot planning
approach, while still retaining aggressive capabilities, with flight envelopes
that include large excursions from the hover position on inclined surfaces up
to 90°, angular speeds up to 750 deg/s, and accelerations up to
10 m/s².
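For reference, a generic statement of the KKT conditions for such a constrained trajectory optimization (not necessarily the paper's exact formulation): with planning objective \(J(x)\) and an inequality constraint \(g(x) \le 0\) encoding, e.g., the FoV visibility requirement,

\[
\begin{aligned}
&\nabla_x J(x^\star) + \mu^\star \nabla_x g(x^\star) = 0 && \text{(stationarity)}\\
&g(x^\star) \le 0 && \text{(primal feasibility)}\\
&\mu^\star \ge 0 && \text{(dual feasibility)}\\
&\mu^\star\, g(x^\star) = 0 && \text{(complementary slackness)}
\end{aligned}
\]

An active constraint (\(\mu^\star > 0\), \(g(x^\star) = 0\)) signals that the visibility requirement is in tension with the planning objective, which is the kind of compatibility check the abstract refers to.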
DPC-Net: Deep Pose Correction for Visual Localization
We present a novel method to fuse the power of deep networks with the
computational efficiency of geometric and probabilistic localization
algorithms. In contrast to other methods that completely replace a classical
visual estimator with a deep network, we propose an approach that uses a
convolutional neural network to learn difficult-to-model corrections to the
estimator from ground-truth training data. To this end, we derive a novel loss
function for learning SE(3) corrections based on a matrix Lie groups approach,
with a natural formulation for balancing translation and rotation errors. We
use this loss to train a Deep Pose Correction network (DPC-Net) that predicts
corrections for a particular estimator, sensor and environment. Using the KITTI
odometry dataset, we demonstrate significant improvements to the accuracy of a
computationally efficient sparse stereo visual odometry pipeline, rendering
it as accurate as a modern, computationally intensive dense estimator. Further,
we show how DPC-Net can be used to mitigate the effect of poorly calibrated
lens distortion parameters.
Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane,
Australia, May 21-25, 2018
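As a sketch of the kind of matrix Lie-group loss described above (one common formulation; the paper's exact loss may differ), a predicted SE(3) correction \(\mathbf{T}_{\text{pred}}\) can be penalized against the ground-truth correction \(\mathbf{T}_{\text{gt}}\) through the SE(3) logarithmic map:

\[
\mathcal{L} \;=\; \tfrac{1}{2}\,\boldsymbol{\xi}^{\top} \boldsymbol{\Sigma}^{-1} \boldsymbol{\xi},
\qquad
\boldsymbol{\xi} \;=\; \ln\!\big( \mathbf{T}_{\text{pred}}\, \mathbf{T}_{\text{gt}}^{-1} \big)^{\vee} \in \mathbb{R}^{6},
\]

where the \(6 \times 6\) weight \(\boldsymbol{\Sigma}^{-1}\) (e.g., an empirical covariance of the training corrections) provides a natural balance between the translational and rotational components of the error vector \(\boldsymbol{\xi}\).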
High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments
This thesis presents the development of an aerial robotic testbed based on the Robot Operating System (ROS). The purpose of this high-performance testbed is to develop a system capable of performing robust navigation tasks using vision tools such as a stereo camera. While computing the robot's odometry, the system is also capable of sensing the environment using the same stereo camera; hence, all navigation tasks are performed using a stereo camera and an inertial measurement unit (IMU) as the main sensor suite. ROS is used as the software integration framework because it provides efficient communication and sensor interfaces. It also allows us to use C++, which is efficient, especially on embedded platforms. Combining ROS and C++ provides the computational efficiency and tools needed to handle the fast, real-time image processing and planning that are vital to navigation and obstacle avoidance at this scale. The main application of this work is a real-time, efficient approach to vision-based navigation for UAVs. The proposed approach is developed for a quadrotor UAV that is capable of performing defensive maneuvers when obstacles are in its way, while constantly moving towards a user-defined final destination. Stereo depth computation adds a third axis to the two-dimensional image coordinate frame; this can be referred to as the depth image space or depth image coordinate frame. Planning in this frame of reference is combined with certain precomputed action primitives, and the formulation of these action primitives leads to a hybrid control law for feasible trajectory generation. A proof of stability of this system is also presented. The proposed approach accounts for the fact that, when performing fast maneuvers and obstacle avoidance simultaneously, many standard optimization approaches may not run in real time on board due to time and resource limitations, which motivates the development of real-time techniques for vision-based autonomous navigation.
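As a minimal illustration of the depth image coordinate frame mentioned above (standard stereo geometry, not the thesis' specific implementation), a pixel (u, v) with disparity d maps to a depth Z = f·B/d, giving the (u, v, Z) state used for planning:

import numpy as np

def depth_image_coords(u, v, disparity_px, fx_px, baseline_m):
    """Map a stereo pixel and its disparity to depth-image coordinates (u, v, Z).

    Standard stereo relation: Z = fx * baseline / disparity, with fx the focal
    length in pixels and the baseline in meters. Values here are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    z_m = fx_px * baseline_m / disparity_px
    return np.array([u, v, z_m])

# Example: pixel (320, 240) with 16 px disparity, fx = 640 px, 10 cm baseline -> Z = 4 m.
print(depth_image_coords(320, 240, 16.0, fx_px=640.0, baseline_m=0.10))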