Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.

Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics
Metrology Camera System of Prime Focus Spectrograph for Subaru Telescope
The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber
spectrograph designed for the prime focus of the 8.2m Subaru telescope. The
metrology camera system of PFS serves as the optical encoder of the COBRA fiber
motors for configuring the fibers. The 380 mm diameter aperture metrology
camera will be located at the Cassegrain focus of the Subaru telescope to cover the
whole focal plane with a single 50-megapixel Canon CMOS sensor. The metrology camera is
designed to provide the fiber position information with less than 5 μm error over
the 45 cm focal plane. The positions of all fibers can be obtained within 1 s
after an exposure is finished. This enables the overall fiber configuration to
be completed in less than 2 minutes.

Comment: 10 pages, 12 figures, SPIE Astronomical Telescopes and Instrumentation 201
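The abstract does not say how fiber positions are extracted from the camera image; a standard technique for this kind of spot metrology is an intensity-weighted centroid of each fiber image. The sketch below is purely illustrative — `spot_centroid`, its threshold, and the toy 3×3 spot are assumptions, not the paper's actual pipeline:

```python
import numpy as np

def spot_centroid(image, threshold=0.0):
    """Intensity-weighted centroid of a fiber spot (hypothetical helper).

    Pixels at or below `threshold` are zeroed to suppress background noise.
    Returns (x, y) in pixel coordinates.
    """
    img = np.where(image > threshold, image, 0.0)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total

# Example: a symmetric 3x3 spot centred on pixel (1, 1)
spot = np.array([[0, 1, 0],
                 [1, 4, 1],
                 [0, 1, 0]], dtype=float)
x, y = spot_centroid(spot)
# x == 1.0 and y == 1.0 by symmetry
```

Sub-pixel centroiding of this kind is what makes a few-micron error budget plausible over a 45 cm focal plane imaged on a single sensor.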
An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous
Localization and Mapping (SLAM) system with loop-closing and relocalization
capabilities targeted for the underwater domain. Our previous work, SVIn,
augmented the state-of-the-art visual-inertial state estimation package OKVIS
to accommodate acoustic data from sonar in a non-linear optimization-based
framework. This paper addresses drift and loss of localization -- one of the
main problems affecting other packages in the underwater domain -- by providing the
following main contributions: a robust initialization method to refine scale
using depth measurements, a fast preprocessing step to enhance the image
quality, and a real-time loop-closing and relocalization method using bag of
words (BoW). An additional contribution is the addition of depth measurements
from a pressure sensor to the tightly-coupled optimization formulation.
Experimental results on datasets collected with a custom-made underwater sensor
suite and an autonomous underwater vehicle from challenging underwater
environments with poor visibility demonstrate accuracy and robustness beyond
previously achieved results.
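The abstract names bag of words (BoW) for loop closing but gives no detail; a minimal sketch of the general BoW idea is to summarize each keyframe as a visual-word histogram and compare histograms by cosine similarity, excluding recent frames so a query cannot trivially match its neighbours. The helper names, gap, and threshold below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    n1, n2 = np.linalg.norm(h1), np.linalg.norm(h2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return float(np.dot(h1, h2) / (n1 * n2))

def detect_loop(histograms, query_idx, min_gap=2, threshold=0.8):
    """Return the index of the best-matching earlier keyframe, or None.

    Keyframes within `min_gap` of the query are excluded so the query
    does not match its own temporal neighbourhood.
    """
    best_idx, best_score = None, threshold
    for i in range(query_idx - min_gap):
        s = bow_similarity(histograms[i], histograms[query_idx])
        if s > best_score:
            best_idx, best_score = i, s
    return best_idx

# Keyframe 3 revisits the scene first seen at keyframe 0.
hists = [np.array([5, 0, 1, 0]),
         np.array([0, 4, 0, 2]),
         np.array([1, 1, 3, 0]),
         np.array([5, 0, 1, 0])]
# detect_loop(hists, 3) -> 0
```

A detected match would then trigger geometric verification and a relocalization or loop-closure constraint in the optimization.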
Robust Legged Robot State Estimation Using Factor Graph Optimization
Legged robots, specifically quadrupeds, are becoming increasingly attractive
for industrial applications such as inspection. However, to leave the
laboratory and to become useful to an end user requires reliability in harsh
conditions. From the perspective of state estimation, it is essential to be
able to accurately estimate the robot's state despite challenges such as uneven
or slippery terrain, textureless and reflective scenes, as well as dynamic
camera occlusions. We are motivated to reduce the dependency on foot contact
classification, which fails when the robot slips, and to reduce position drift
during dynamic motions such as trotting. To this end, we present a factor graph
optimization method for state estimation which tightly fuses and smooths
inertial navigation, leg odometry and visual odometry. The effectiveness of the
approach is demonstrated using the ANYmal quadruped robot navigating in a
realistic outdoor industrial environment. This experiment included trotting,
walking, crossing obstacles and ascending a staircase. The proposed approach
decreased the relative position error by up to 55% and absolute position error
by 76% compared to kinematic-inertial odometry.

Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 201
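The abstract does not spell out the factor-graph formulation; as an illustration of how tightly fusing two odometry streams with different information weights suppresses drift from slippage, here is a toy 1-D weighted least-squares smoother. Everything below (the helper name, the weights, the 1-D state) is a hypothetical stand-in for a full factor graph over 6-DoF poses:

```python
import numpy as np

def fuse_odometry(leg_deltas, vis_deltas, w_leg=1.0, w_vis=4.0):
    """Fuse two streams of 1-D relative-pose measurements.

    The weights play the role of factor information in a smoother.
    Solves the weighted least-squares problem for x_1..x_N with x_0
    fixed at 0, each step constrained by both odometry sources.
    """
    n = len(leg_deltas)          # number of relative-pose factors
    rows, rhs, wts = [], [], []
    for k in range(n):
        row = np.zeros(n)        # unknowns are x_1..x_n
        row[k] = 1.0
        if k > 0:
            row[k - 1] = -1.0    # factor measures x_{k+1} - x_k
        for w, d in ((w_leg, leg_deltas[k]), (w_vis, vis_deltas[k])):
            rows.append(row)
            rhs.append(d)
            wts.append(w)
    A = np.array(rows) * np.sqrt(np.array(wts))[:, None]
    b = np.array(rhs) * np.sqrt(np.array(wts))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([0.0], x))

# Leg odometry over-reports each step (slip); visual odometry is
# trusted 4x more, so the fused step lands close to the visual value.
traj = fuse_odometry([1.5, 1.5], [1.0, 1.0])
# each fused step = (1*1.5 + 4*1.0) / 5 = 1.1
```

Down-weighting the slipping leg-odometry factors, rather than discarding them, is what lets the estimator stay smooth while still bounding drift.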
Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data
In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV’s absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
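The 25% figure refers to a pose-accuracy comparison on EuRoC; improvements of this kind are typically quantified with the absolute trajectory error (ATE) as an RMSE over aligned positions. The sketch below only illustrates the metric — the data is synthetic, and the abstract does not state which error metric was used:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of position differences.

    `est` and `gt` are (N, 3) arrays of positions, assumed already
    aligned and time-synchronised with each other.
    """
    err = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.zeros((4, 3))                # ground-truth trajectory
baseline = np.full((4, 3), 0.1)      # constant 0.1 m offset per axis
learned = np.full((4, 3), 0.075)     # 25% smaller offset
improvement = 1 - ate_rmse(learned, gt) / ate_rmse(baseline, gt)
# improvement == 0.25
```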