Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data
In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
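The abstract above reports pose accuracy as a percentage improvement over a baseline. As a minimal sketch of how such a comparison is typically scored, assuming a root-mean-square translation error over aligned trajectories (the function names and numbers below are illustrative, not from the paper):

```python
import math

def translation_rmse(pred, gt):
    """Root-mean-square translation error between predicted and
    ground-truth positions (lists of (x, y, z) tuples), the usual
    way absolute pose accuracy is scored on datasets such as EuRoC MAV."""
    sq_errors = [
        sum((p - g) ** 2 for p, g in zip(pp, gg))
        for pp, gg in zip(pred, gt)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

def relative_improvement(rmse_new, rmse_baseline):
    """Fractional error reduction of one estimator over another;
    e.g. 0.25 corresponds to a 25% improvement."""
    return 1.0 - rmse_new / rmse_baseline

# Illustrative only: a learned estimator at 0.75x the baseline error.
print(relative_improvement(0.75, 1.0))  # → 0.25
```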
Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments
Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize plane or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
Comment: International Conference on Intelligent Robots and Systems (IROS) 201
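The "pop-up" plane measurement rests on a standard geometric fact: a pixel known to lie on the ground plane can be back-projected to a metric 3D point given the camera intrinsics and camera height. A minimal sketch under those assumptions (pinhole model, camera frame x right / y down / z forward, flat ground; all parameter values are illustrative, not the paper's):

```python
def pixel_to_ground_point(u, v, fx, fy, cx, cy, cam_height):
    """Intersect the viewing ray of pixel (u, v) with a flat ground
    plane a known height below the camera.

    Camera frame: x right, y down, z forward; the ground plane is
    y = cam_height. Only pixels below the horizon (ray_y > 0) hit it.
    """
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    if ray[1] <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / ray[1]  # ray parameter where the plane is reached
    return (t * ray[0], t * ray[1], t * ray[2])

# A pixel 250 rows below the principal point with f = 500 px, seen by a
# camera 1.5 m above the ground, lands 3 m ahead of the camera.
print(pixel_to_ground_point(320, 490, 500, 500, 320, 240, 1.5))
```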
Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters
Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration.
Comment: ICRA 201
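The time-varying transformation this calibration resolves is a function of the unknown joint angles. As a hedged sketch (a generic pan-tilt parameterization, not the paper's kinematic chain, which also includes fixed link transforms to be estimated), a 2-DOF gimbal's rotation composed from two elementary rotations:

```python
import math

def rot_z(a):
    """Elementary rotation about z (pan), as a 3x3 row-major matrix."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    """Elementary rotation about y (tilt)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def gimbal_rotation(pan, tilt):
    """Dynamic-camera orientation as a function of the two joint angles,
    the quantity an encoderless estimator must recover without encoders."""
    return matmul(rot_z(pan), rot_y(tilt))
```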
Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots
Reliable, real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm.
Comment: submitted to IROS 201
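The abstract combines magnetic and vision-based localization. A minimal sketch of one textbook way to merge two independent position estimates, inverse-variance (minimum-variance) fusion; the readings and variances below are illustrative, and the paper's actual scheme is the non-rigid frame-to-model fusion it describes, not this scalar example:

```python
def fuse_estimates(x_a, var_a, x_b, var_b):
    """Minimum-variance fusion of two independent scalar estimates
    (e.g. one magnetic, one visual). Returns the fused value and its
    variance; the fused variance is below either input's."""
    w_a = var_b / (var_a + var_b)  # weight grows as the other
    w_b = var_a / (var_a + var_b)  # sensor gets noisier
    x = w_a * x_a + w_b * x_b
    var = (var_a * var_b) / (var_a + var_b)
    return x, var

# Equally trusted sensors average, and the fused variance halves.
print(fuse_estimates(0.0, 1.0, 2.0, 1.0))  # → (1.0, 0.5)
```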