Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operations hard to accomplish. This paper proposes a solution that alleviates these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the downward-facing camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are observed from time to time, which allows the robot to keep the drift bounded while additionally providing it with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor through extensive experimental tests.
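As a hedged illustration of the drift-correction idea in this abstract (not the authors' code), the sketch below dead-reckons a planar pose from relative visual-odometry increments and blends it back toward an absolute fix whenever a marker at a known position is observed. The class, method names, and blending weight are all hypothetical.

```python
import numpy as np

class DriftBoundedOdometry:
    """Planar pose tracker: VO dead reckoning plus absolute marker fixes."""

    def __init__(self, initial_pose):
        self.pose = np.asarray(initial_pose, dtype=float)  # [x, y, theta]

    def integrate_vo(self, dx, dy, dtheta):
        # Rotate the body-frame VO increment into the world frame and
        # accumulate it; drift grows with every step taken here.
        c, s = np.cos(self.pose[2]), np.sin(self.pose[2])
        self.pose[0] += c * dx - s * dy
        self.pose[1] += s * dx + c * dy
        self.pose[2] = (self.pose[2] + dtheta + np.pi) % (2 * np.pi) - np.pi

    def correct_with_marker(self, marker_pose, weight=0.8):
        # Blend toward the absolute pose recovered from a marker at a known
        # position. The weight is an illustrative trust factor (not from the
        # paper), and angle wraparound in the blend is ignored for brevity.
        self.pose = (1.0 - weight) * self.pose \
            + weight * np.asarray(marker_pose, dtype=float)

odom = DriftBoundedOdometry([0.0, 0.0, 0.0])
odom.integrate_vo(0.10, 0.00, 0.01)          # one frame-to-frame VO step
odom.correct_with_marker([0.09, 0.01, 0.0])  # absolute fix from a marker
```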
Robust position control of a tilt-wing quadrotor
This paper presents a robust position controller for a tilt-wing quadrotor tracking desired trajectories under external wind and aerodynamic disturbances. Wind effects are modeled using the Dryden model and included in the dynamic model of the vehicle. Robust position control is achieved by introducing a disturbance observer that estimates the total disturbance acting on the system. In the design of the disturbance observer, the nonlinear terms that appear in the dynamics of the aerial vehicle are also treated as disturbances and lumped into the total disturbance. With the disturbance observer in place, the system reduces to a linear model with nominal parameters, so simple PID-type controllers suffice for position and attitude control. Simulation and experimental results show that the performance of the observer-based position control system is quite satisfactory.
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.
Deep Drone Racing: From Simulation to Reality with Domain Randomization
Dynamically changing environments, unreliable state estimation, and operation
under severe resource constraints are fundamental challenges that limit the
deployment of small autonomous drones. We address these challenges in the
context of autonomous, vision-based drone racing in dynamic environments. A
racing drone must traverse a track with possibly moving gates at high speed. We
enable this functionality by combining the performance of a state-of-the-art
planning and control system with the perceptual awareness of a convolutional
neural network (CNN). The resulting modular system is both platform- and
domain-independent: it is trained in simulation and deployed on a physical
quadrotor without any fine-tuning. The abundance of simulated data, generated
via domain randomization, makes our system robust to changes of illumination
and gate appearance. To the best of our knowledge, our approach is the first to
demonstrate zero-shot sim-to-real transfer on the task of agile drone flight.
We extensively test the precision and robustness of our system, both in
simulation and on a physical platform, and show significant improvements over
the state of the art.
Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
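The domain-randomization step named in the abstract can be sketched as follows; the randomized parameters and their ranges here are illustrative guesses, since the paper's exact simulator settings are not reproduced in this listing.

```python
import random

def sample_randomized_scene(rng=random):
    """Draw one randomized rendering configuration for a training image."""
    return {
        "light_intensity": rng.uniform(0.3, 1.5),   # illumination changes
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "gate_hue_shift": rng.uniform(-0.2, 0.2),   # gate appearance changes
        "gate_texture_id": rng.randrange(20),
        "background_id": rng.randrange(50),
    }

# Each simulated frame is rendered under a fresh sample, so the CNN sees
# abundant visual variety and cannot latch onto one lighting or texture.
training_scenes = [sample_randomized_scene() for _ in range(10_000)]
```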
Aggressive Aerial Grasping using a Soft Drone with Onboard Perception
In contrast to the stunning feats observed in birds of prey, aerial manipulation
and grasping with flying robots still lack versatility and agility.
Conventional approaches using rigid manipulators require precise positioning
and are subject to large reaction forces at grasp, which limit performance at
high speeds. The few reported examples of aggressive aerial grasping rely on
motion capture systems, or fail to generalize across environments and grasp
targets. We describe the first example of a soft aerial manipulator equipped
with a fully onboard perception pipeline, capable of robustly localizing and
grasping visually and morphologically varied objects. The proposed system
features a novel passively closing tendon-actuated soft gripper that enables
fast closure at grasp while compensating for position errors, complying with the
target-object morphology, and damping reaction forces. The system includes an
onboard perception pipeline that combines a neural-network-based semantic
keypoint detector with a state-of-the-art robust 3D object pose estimator,
whose estimate is further refined using a fixed-lag smoother. The resulting
pose estimate is passed to a minimum-snap trajectory planner, tracked by an
adaptive controller that fully compensates for the added mass of the grasped
object. Finally, a finite-element-based controller determines optimal gripper
configurations for grasping. Rigorous experiments confirm that our approach
enables dynamic, aggressive, and versatile grasping. We demonstrate fully
onboard vision-based grasps of a variety of objects, in both indoor and outdoor
environments, and up to speeds of 2.0 m/s -- the fastest vision-based grasp
reported in the literature. Finally, we take a major step in expanding the
utility of our platform beyond stationary targets, by demonstrating
motion-capture-based grasps of targets moving at up to 0.3 m/s, with relative
speeds up to 1.5 m/s.
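As a hedged aside on one component named in the abstract, minimum-snap trajectory planning, the sketch below solves the classic single-axis, single-segment case: minimizing the integral of squared snap with position, velocity, acceleration, and jerk pinned at both ends forces the eighth derivative of p(t) to vanish, so the optimum is a 7th-order polynomial whose eight coefficients follow from the eight boundary conditions. This illustrates the general technique, not the authors' planner.

```python
import numpy as np

def min_snap_segment(p0, pT, T):
    """Coefficients c[0..7] of p(t) = sum(c[k] * t**k), rest-to-rest ends."""
    def derivative_row(t, order):
        # Constraint-matrix row for the `order`-th derivative of p at time t.
        row = np.zeros(8)
        for k in range(order, 8):
            row[k] = np.prod(np.arange(k, k - order, -1)) * t ** (k - order)
        return row

    A = np.array([derivative_row(0.0, d) for d in range(4)] +
                 [derivative_row(T, d) for d in range(4)])
    b = np.array([p0, 0, 0, 0, pT, 0, 0, 0], dtype=float)
    return np.linalg.solve(A, b)

# Example: a 2 m rest-to-rest move in 1.5 s; evaluate the midpoint position.
c = min_snap_segment(0.0, 2.0, 1.5)
t = 0.75
print(sum(ck * t**k for k, ck in enumerate(c)))  # ~1.0 m by symmetry
```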