Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between them in GPS-denied environments. In this
way, one MAV can support multiple ground robots on demand. Furthermore, our
system enables different visual detection regimes, enhanced operability, and
return-home functionality. We evaluate our system in
real-world SaR scenarios.

Comment: Accepted for publication in 2018 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR).
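A minimal sketch of the local visual-servoing loop described above: the MAV regulates its velocity so the tracked ground robot stays centered in the camera image at a fixed apparent size, which encodes the standoff distance. The detection interface, gains, and sign conventions below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of image-based visual servoing for a third-person view MAV.
# All gains and conventions are assumed for illustration.
import numpy as np

K_CENTER = 0.8   # gain on normalized image-center error (assumed)
K_RANGE = 0.5    # gain on apparent-size error (assumed)
AREA_REF = 0.02  # desired bounding-box area fraction; encodes standoff

def servo_velocity(bbox_center, bbox_area):
    """bbox_center: target center in normalized image coords [-1, 1]^2
    (x right, y down). Returns a body-frame velocity setpoint [fwd, right, up]."""
    ex, ey = bbox_center
    v_right = K_CENTER * ex                              # strafe to re-center horizontally
    v_up = -K_CENTER * ey                                # image y grows downward
    v_fwd = K_RANGE * (AREA_REF - bbox_area) / AREA_REF  # close or open the range
    return np.array([v_fwd, v_right, v_up])
```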
Autonomous Hybrid Ground/Aerial Mobility in Unknown Environments
Hybrid ground and aerial vehicles can possess distinct advantages over
ground-only or flight-only designs in terms of energy savings and increased
mobility. In this work we outline our unified framework for controls, planning,
and autonomy of hybrid ground/air vehicles. Our contribution is three-fold: 1)
We develop a control scheme for passive two-wheeled hybrid ground/aerial
vehicles. 2) We present a unified planner for both rolling and
flying by leveraging differential flatness mappings. 3) We conduct experiments
leveraging mapping and global planning for hybrid mobility in unknown
environments, showing that hybrid mobility uses up to five times less energy
than flying only.
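The following sketch illustrates the "one flat trajectory, two modes" idea behind such a unified planner: a trajectory in the flat outputs (x, y, z, yaw) is mapped either to flight commands or to rolling commands. The paper's flatness mappings for its passive two-wheeled vehicle differ; this quadrotor-style flight mapping is an illustrative assumption.

```python
# Sketch: map one flat-output trajectory to flying or rolling commands.
# The flight mapping assumes a quadrotor-like vehicle (illustrative only).
import numpy as np

G = 9.81  # gravity [m/s^2]

def flat_to_flight(acc_des, yaw_des):
    """Desired acceleration + yaw -> collective thrust and body z-axis."""
    f = acc_des + np.array([0.0, 0.0, G])  # specific force the rotors must produce
    thrust = np.linalg.norm(f)
    z_body = f / thrust                    # attitude follows the thrust direction
    return thrust, z_body, yaw_des

def flat_to_rolling(vel_des):
    """On the ground z is constrained, so only the planar velocity matters."""
    speed = np.linalg.norm(vel_des[:2])
    heading = np.arctan2(vel_des[1], vel_des[0]) if speed > 1e-9 else 0.0
    return speed, heading
```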
Graph-Optimization based multi-sensor fusion for robust UAV pose estimation
Achieving accurate, high-rate pose estimates from
proprioceptive and/or exteroceptive measurements is the first step in the development of navigation
algorithms for agile mobile robots such as Unmanned Aerial Vehicles (UAVs). In this paper, we
propose a decoupled multi-sensor fusion approach that allows the combination of generic 6D
visual-inertial (VI) odometry poses and 3D globally referenced positions to infer the global 6D
pose of the robot in real-time. Our approach casts the fusion as a real-time alignment problem
between the local base frame of the VI odometry and the global base frame. The quasi-constant
alignment transformation that relates these coordinate systems is continuously updated employing
graph-based optimization with a sliding window. We evaluate the presented pose estimation method
on both simulated data and large outdoor experiments using a small UAV that is capable of running our
system onboard. Results are compared against different state-of-the-art sensor fusion frameworks,
revealing that the proposed approach is substantially more accurate than other decoupled fusion
strategies. We also demonstrate results comparable to a finely tuned Extended
Kalman Filter that fuses visual, inertial, and GPS measurements in a coupled
way, and show that our approach is generic enough to deal with different input
sources in a flexible manner, as well as to run in real-time.
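To make the alignment formulation concrete, the sketch below estimates the quasi-constant transform between the local VI-odometry frame and the global frame from corresponding positions in a sliding window. The paper solves this with graph-based optimization; here a closed-form least-squares (Kabsch) alignment stands in as an illustration, and all names are assumptions rather than the paper's code.

```python
# Sketch of decoupled fusion as sliding-window frame alignment:
# find (R, t) mapping VI-odometry positions into the global frame,
# then lift the latest local pose to a global pose. Closed-form Kabsch
# alignment substitutes for the paper's graph optimization (illustrative).
from collections import deque
import numpy as np

class SlidingWindowAligner:
    def __init__(self, window_size=50):
        self.local_pts = deque(maxlen=window_size)   # positions from VI odometry
        self.global_pts = deque(maxlen=window_size)  # globally referenced positions

    def add_pair(self, p_local, p_global):
        self.local_pts.append(np.asarray(p_local, dtype=float))
        self.global_pts.append(np.asarray(p_global, dtype=float))

    def solve_alignment(self):
        """Least-squares (R, t) with G_i ~ R @ L_i + t; needs >= 3 non-collinear pairs."""
        L = np.stack(self.local_pts)   # (N, 3)
        G = np.stack(self.global_pts)  # (N, 3)
        mu_L, mu_G = L.mean(axis=0), G.mean(axis=0)
        H = (L - mu_L).T @ (G - mu_G)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                             # proper rotation (det = +1)
        t = mu_G - R @ mu_L
        return R, t

    def to_global(self, R_local, p_local):
        """Lift a local 6D pose into the global frame using the current alignment."""
        R, t = self.solve_alignment()
        return R @ R_local, R @ p_local + t
```

Because the alignment transformation is quasi-constant, re-solving it over only the most recent window keeps the fusion cheap while letting it absorb slow drift in the odometry frame.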