VIO-UWB-Based Collaborative Localization and Dense Scene Reconstruction within Heterogeneous Multi-Robot Systems
Effective collaboration in multi-robot systems requires accurate and robust
relative localization, in applications ranging from cooperative manipulation
and collaborative sensing to cooperative exploration and transportation. This
paper introduces a novel approach to collaborative
localization for dense scene reconstruction in heterogeneous multi-robot
systems comprising ground robots and micro-aerial vehicles (MAVs). We solve the
problem of full relative pose estimation without sliding time windows by
relying on UWB-based ranging and Visual Inertial Odometry (VIO)-based egomotion
estimation for localization, while exploiting lidars onboard the ground robots
for full relative pose estimation in a single reference frame. During
operation, the rigidity eigenvalue provides feedback to the system. To tackle
the challenge of path planning and obstacle avoidance of MAVs in GNSS-denied
environments, we maintain line-of-sight between ground robots and MAVs. Because
lidars capable of dense reconstruction have limited FoV, this introduces new
constraints to the system. Therefore, we propose a novel formulation with a
variant of the Dubins multiple traveling salesman problem with neighborhoods
(DMTSPN) where we include constraints related to the limited FoV of the ground
robots. Our approach is validated through simulations and experiments with
real robots for the different parts of the system.
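The abstract above combines UWB-based ranging with VIO egomotion, but the paper's actual estimator is not detailed here. The core range-only localization step can be illustrated with a generic Gauss-Newton trilateration sketch; the anchor layout, function names, and initial guess below are hypothetical illustrations, not taken from the paper:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def trilaterate(anchors, ranges, guess=(0.0, 0.0, 1.0), iters=20):
    """Estimate a MAV position from UWB ranges to known ground-robot
    anchors via Gauss-Newton on the range residuals r_i = |x - a_i| - d_i."""
    x = list(guess)
    for _ in range(iters):
        JtJ = [[0.0] * 3 for _ in range(3)]
        Jtr = [0.0] * 3
        for a, d in zip(anchors, ranges):
            diff = [x[k] - a[k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in diff)) or 1e-9
            r = dist - d
            j = [c / dist for c in diff]  # Jacobian row: unit vector to anchor
            for m in range(3):
                Jtr[m] += j[m] * r
                for n in range(3):
                    JtJ[m][n] += j[m] * j[n]
        dx = solve3(JtJ, [-v for v in Jtr])  # normal equations J^T J dx = -J^T r
        x = [x[k] + dx[k] for k in range(3)]
    return x
```

In practice, the VIO egomotion would constrain consecutive estimates over time; this sketch shows only the instantaneous range-based solve.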
Keyframe-based Direct Thermal-Inertial Odometry
This paper proposes an approach for fusing direct radiometric data from a
thermal camera with inertial measurements to extend the robotic capabilities of
aerial robots for navigation in GPS-denied and visually degraded environments,
in conditions of darkness and in the presence of airborne obscurants such as
dust, fog, and smoke. An optimization-based approach is developed that
jointly minimizes the re-projection error of 3D landmarks and inertial
measurement errors. The developed solution is extensively verified both
against ground truth in an indoor laboratory setting and inside an underground
mine under severely visually degraded conditions.
Comment: 7 pages, 8 figures, Accepted at International Conference on Robotics
and Automation (ICRA) 201
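As a rough illustration of the joint objective described above (re-projection error of 3D landmarks plus inertial terms), here is a toy cost evaluation. The translation-only pose, pinhole intrinsics, and weight `w_imu` are illustrative assumptions; the paper's actual formulation also optimizes rotation and uses proper IMU residuals:

```python
def joint_cost(pose, landmarks, pixels, imu_pred, fx=400.0, fy=400.0,
               cx=320.0, cy=240.0, w_imu=10.0):
    """Toy joint objective: squared pixel re-projection error of 3D
    landmarks plus a weighted squared difference between the estimated
    pose and the IMU-propagated pose.  Pose is (tx, ty, tz) only --
    rotation is omitted for brevity."""
    cost = 0.0
    for (X, Y, Z), (u, v) in zip(landmarks, pixels):
        # Transform into the (translation-only) camera frame and project.
        xc, yc, zc = X - pose[0], Y - pose[1], Z - pose[2]
        u_hat = fx * xc / zc + cx
        v_hat = fy * yc / zc + cy
        cost += (u - u_hat) ** 2 + (v - v_hat) ** 2
    # Inertial term: penalize deviation from the IMU prediction.
    cost += w_imu * sum((p - q) ** 2 for p, q in zip(pose, imu_pred))
    return cost
```

An optimizer would minimize this cost over the keyframe poses and landmark positions jointly.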
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between these ground robots in GPS-denied
environments. Thus, one MAV can support multiple ground robots on demand.
Furthermore, our system enables different visual detection regimes, enhanced
operability, and return-home functionality. We evaluate our system in
real-world SaR scenarios.
Comment: Accepted for publication in 2018 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR)
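The local visual servoing mentioned above is not specified in the abstract; a minimal image-based proportional law, with hypothetical gains and sign conventions, might look like:

```python
def servo_command(target_px, image_size=(640, 480), k_yaw=0.002, k_z=0.002):
    """Proportional image-based visual servoing sketch: map the tracked
    ground robot's pixel offset from the image center to a yaw-rate and
    a vertical-velocity command that re-center it in the MAV's view."""
    u, v = target_px
    w, h = image_size
    err_u = u - w / 2.0   # horizontal offset -> yaw correction
    err_v = v - h / 2.0   # vertical offset -> climb/descend
    return (-k_yaw * err_u, -k_z * err_v)  # (yaw_rate, v_z), signs assumed
```

Distance keeping (e.g., from the apparent size of the ground robot) would add a third proportional channel in the same spirit.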
Build Your Own Visual-Inertial Drone: A Cost-Effective and Open-Source Autonomous Drone
This paper describes an approach to building a cost-effective and
research-grade visual-inertial-odometry-aided vertical take-off and landing (VTOL)
platform. We utilize an off-the-shelf visual-inertial sensor, an onboard
computer, and a quadrotor platform that are factory-calibrated and
mass-produced, thereby sharing similar hardware and sensor specifications
(e.g., mass, dimensions, intrinsics and extrinsics of the camera-IMU system, and
signal-to-noise ratio). We then perform a system calibration and identification
enabling the use of our visual-inertial odometry, multi-sensor fusion, and
model predictive control frameworks with the off-the-shelf products. This
implies that we can partially avoid tedious parameter tuning procedures for
building a full system. The complete system is extensively evaluated both
indoors using a motion capture system and outdoors using a laser tracker while
performing hover and step responses, and trajectory following tasks in the
presence of external wind disturbances. We achieve a root-mean-square (RMS)
pose error of 0.036 m between the reference and actual trajectories while
hovering. We also conduct relatively long-distance (~180 m) flight experiments
on a farm site and achieve a drift error of 0.82% of the total flight
distance. This paper conveys the insights we acquired about the platform and
sensor module and returns them to the community as open-source code with
tutorial documentation.
Comment: 21 pages, 10 figures, accepted to IEEE Robotics & Automation Magazine
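The reported figures (0.036 m RMS hover error; 0.82% drift over a ~180 m flight, i.e., roughly 1.5 m of end-point drift) correspond to standard metrics that can be sketched as:

```python
import math

def rms_error(est, ref):
    """Root-mean-square position error between paired samples of an
    estimated trajectory and a reference trajectory."""
    se = [sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in zip(est, ref)]
    return math.sqrt(sum(se) / len(se))

def drift_percent(final_error, total_distance):
    """End-point drift expressed as a percentage of total distance flown."""
    return 100.0 * final_error / total_distance
```

For example, a final-position error of about 1.48 m over 180 m gives the reported 0.82% drift.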
Recent Developments in Aerial Robotics: A Survey and Prototypes Overview
In recent years, research and development in aerial robotics (i.e., unmanned
aerial vehicles, UAVs) has been growing at an unprecedented speed, and there is
a need to summarize the background, latest developments, and trends of UAV
research. Along with a general overview on the definition, types, categories,
and topics of UAV, this work describes a systematic way to identify 1,318
high-quality UAV papers from more than thirty thousand that have appeared
in the top journals and conferences. On top of that, we provide a bird's-eye
view of UAV research since 2001 by summarizing various statistical information,
such as the year, type, and topic distribution of the UAV papers. We make our
survey list public and believe that it can not only help researchers
identify, study, and compare their work, but also aid in understanding
research trends in the field. From our survey results, we find there are many
types of UAV, and to the best of our knowledge, no literature has attempted to
summarize all types in one place. With our survey list, we explain the types
within our survey and outline the recent progress of each. We believe this
summary can enhance readers' understanding of UAVs and inspire researchers
to propose new methods and new applications.
Comment: 14 pages, 16 figures, typos corrected
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it further. The numerical simulation results show that the proposed system is able to provide good position and orientation estimates of the aerial vehicles flying in formation.
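One way to see why inter-UAV distance measurements improve observability: monocular SLAM is unobservable in scale, and a single metric relative distance fixes that degree of freedom. A minimal sketch of this idea (not the paper's observability analysis or estimator):

```python
import math

def metric_scale(p1_scaled, p2_scaled, measured_dist):
    """Monocular SLAM recovers geometry only up to scale; a single metric
    inter-UAV distance fixes it.  Return the scale factor that maps the
    up-to-scale map into metric units."""
    d_scaled = math.sqrt(sum((a - b) ** 2
                             for a, b in zip(p1_scaled, p2_scaled)))
    return measured_dist / d_scaled
```

In a filter-based formulation, the same measurement would enter as a residual constraining the joint state rather than as a one-shot correction.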
Estimating Metric Poses of Dynamic Objects Using Monocular Visual-Inertial Fusion
A monocular 3D object tracking system generally produces only up-to-scale pose
estimates without any prior knowledge of the tracked object. In this
paper, we propose a novel idea to recover the metric scale of an arbitrary
dynamic object by optimizing the trajectory of the objects in the world frame,
without motion assumptions. By introducing an additional constraint in the time
domain, our monocular visual-inertial tracking system can obtain continuous
six-degree-of-freedom (6-DoF) pose estimates without scale ambiguity. Our method
requires neither a fixed multi-camera setup nor a depth sensor for scale
observability; instead, the IMU inside the monocular sensing suite provides
scale information for both the camera itself and the tracked object. We build the
proposed system on top of our monocular visual-inertial system (VINS) to obtain
accurate state estimation of the monocular camera in the world frame. The whole
system consists of a 2D object tracker, an object region-based visual bundle
adjustment (BA), VINS and a correlation analysis-based metric scale estimator.
Experimental comparisons with ground truth demonstrate the accuracy of our 3D
tracking, while a mobile augmented reality (AR) demo shows the feasibility of
potential applications.
Comment: IROS 201
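The correlation-analysis-based metric scale estimator is not detailed in the abstract; a simplest-possible stand-in is the closed-form least-squares scale between paired up-to-scale and metric displacement vectors (an illustrative assumption, not the authors' estimator):

```python
def ls_scale(scaled_steps, metric_steps):
    """Closed-form least-squares scale: find s minimizing
    sum ||s*d_i - m_i||^2 over paired displacement vectors, i.e.
    s = (sum d_i . m_i) / (sum d_i . d_i)."""
    num = sum(sum(a * b for a, b in zip(d, m))
              for d, m in zip(scaled_steps, metric_steps))
    den = sum(sum(a * a for a in d) for d in scaled_steps)
    return num / den
```

Here the metric displacements would come from the IMU-aided VINS camera motion, and the scaled ones from the object-region bundle adjustment.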
The Aqualoc Dataset: Towards Real-Time Underwater Localization from a Visual-Inertial-Pressure Acquisition System
This paper presents a new underwater dataset acquired from a
visual-inertial-pressure acquisition system and meant to be used to benchmark
visual odometry, visual SLAM, and multi-sensor SLAM solutions. The dataset is
publicly available and contains ground-truth trajectories for evaluation.
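The pressure channel of such a system gives depth directly through the hydrostatic relation; a minimal conversion, assuming standard seawater density (the dataset's actual calibration may differ), is:

```python
def depth_from_pressure(p_pa, p_surface_pa=101325.0,
                        rho=1025.0, g=9.80665):
    """Convert absolute pressure (Pa) to depth below the surface (m)
    using the hydrostatic relation p = p_surface + rho * g * h, with
    an assumed seawater density rho in kg/m^3."""
    return (p_pa - p_surface_pa) / (rho * g)
```

This depth acts as an absolute vertical constraint, complementing the drift-prone visual and inertial channels.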
Benchmarking Tether-based UAV Motion Primitives
This paper proposes and benchmarks two tether-based motion primitives for
tethered UAVs to execute autonomous flight with proprioception only. Tethered
UAVs have been studied mainly for power and safety considerations. The tether
is either not included in the UAV motion model (the UAV is treated the same as
a free-flying one) or considered only for station-keeping and high-speed
steady flight. However, feedback
from and control over the tether configuration could be utilized as a set of
navigational tools for autonomous flight, especially in GPS-denied environments
and without vision-based exteroception. In this work, two tether-based motion
primitives are proposed, which can enable autonomous flight of a tethered UAV.
The proposed motion primitives are implemented on a physical tethered UAV for
autonomous path execution with motion capture ground truth. The navigational
performance is quantified and compared. The proposed motion primitives make
the tethered UAV a mobile and safe autonomous robot platform. The benchmarking
results suggest appropriate usage of the two motion primitives for tethered
UAVs with different path plans.
Comment: Submitted to 2019 IEEE International Conference on Safety, Security,
and Rescue Robotics (SSRR 2019)
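The abstract does not specify the motion primitives themselves. One plausible proprioceptive building block, under the simplifying assumption of a taut, straight tether (the paper may model the tether configuration differently, e.g., as a catenary), recovers the UAV position from tether length and anchor-side angles:

```python
import math

def taut_tether_position(length, azimuth, elevation):
    """Assuming a taut, straight tether, recover the UAV position relative
    to the ground anchor from tether length and the azimuth/elevation
    angles measured at the anchor -- pure proprioception, no GPS or vision."""
    horiz = length * math.cos(elevation)
    return (horiz * math.cos(azimuth),     # x
            horiz * math.sin(azimuth),     # y
            length * math.sin(elevation))  # z (altitude)
```

Controlling length and angles then amounts to commanding waypoints in this anchor-centered spherical parameterization.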
The Blackbird Dataset: A large-scale dataset for UAV perception in aggressive flight
The Blackbird unmanned aerial vehicle (UAV) dataset is a large-scale,
aggressive indoor flight dataset collected using a custom-built quadrotor
platform for use in evaluation of agile perception. Inspired by the potential of
future high-speed fully-autonomous drone racing, the Blackbird dataset contains
over 10 hours of flight data from 168 flights over 17 flight trajectories and 5
environments at velocities up to . Each flight includes sensor data from
120 Hz stereo and downward-facing photorealistic virtual cameras, a 100 Hz
IMU, motor speed sensors, and 360 Hz millimeter-accurate motion capture ground
truth. Camera images for each flight were photorealistically
rendered using FlightGoggles across a variety of environments to facilitate
easy experimentation with high-performance perception algorithms. The dataset
is available for download at http://blackbird-dataset.mit.edu/
Comment: Accepted to appear at ISER 201