155 research outputs found
Accurate Monocular Visual-inertial SLAM using a Map-assisted EKF Approach
This paper presents a novel tightly-coupled monocular visual-inertial
Simultaneous Localization and Mapping (SLAM) algorithm, which provides accurate
and robust localization within a globally consistent map in real time on a
standard CPU. This is achieved by first running a visual-inertial extended
Kalman filter (EKF) to provide motion estimates at a high rate. However, the
filter becomes inconsistent due to well-known linearization issues, so we
perform a keyframe-based visual-inertial bundle adjustment to improve the
consistency and accuracy of the system. In addition, a loop-closure detection
and correction module is added to eliminate the accumulated drift when
revisiting an area. Finally, the optimized motion estimates and map are fed
back to the EKF-based visual-inertial odometry module, reducing the
inconsistency and estimation error of the EKF estimator. In this way, the
system can continuously provide reliable motion estimates for long-term
operation. The performance of the algorithm is validated on public datasets
and in real-world experiments, demonstrating the superiority of the proposed
algorithm.
Comment: 12 pages, 10 figures
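The predict/update cycle at the heart of such a filter can be sketched minimally. The scalar random-walk model below is a toy illustration of the EKF mechanics only, not the paper's actual visual-inertial state (which involves poses, velocities, and IMU biases):

```python
def kf_predict(x, p, q):
    # Propagation: a random-walk motion model leaves the state unchanged
    # and grows its variance by the process noise q.
    return x, p + q

def kf_update(x, p, z, r):
    # Correction: the Kalman gain k weights the prediction against the
    # measurement z, which has variance r.
    k = p / (p + r)
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1.0                       # initial state and variance
for z in [0.11, 0.19, 0.32]:          # simulated position measurements
    x, p = kf_predict(x, p, q=1e-4)
    x, p = kf_update(x, p, z, r=1e-2)
```

The feedback step the abstract describes would correspond to overwriting the filter state with the bundle-adjusted estimate before the next prediction.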
Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both the stereo and monocular versions of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
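The cost structure described above, reprojection terms over landmarks observed in keyframes plus inertial terms linking consecutive keyframes, has the following general shape (the notation here is illustrative, not the paper's exact formulation):

```latex
J(\mathbf{x}) =
  \underbrace{\sum_{k}\ \sum_{j \in \mathcal{J}(k)}
    {\mathbf{e}_r^{j,k}}^{\!\top} \mathbf{W}_r^{j,k}\, \mathbf{e}_r^{j,k}}_{\text{visual reprojection terms}}
  \;+\;
  \underbrace{\sum_{k=1}^{K-1}
    {\mathbf{e}_s^{k}}^{\!\top} \mathbf{W}_s^{k}\, \mathbf{e}_s^{k}}_{\text{inertial terms}}
```

where $\mathbf{e}_r^{j,k}$ is the reprojection error of landmark $j$ in keyframe $k$, $\mathbf{e}_s^{k}$ is the IMU error term between keyframes $k$ and $k+1$, and the $\mathbf{W}$ matrices are the corresponding information (inverse covariance) weights. Marginalizing out states that leave the bounded window is what keeps $K$ small enough for real-time operation.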
Modeling Varying Camera-IMU Time Offset in Optimization-Based Visual-Inertial Odometry
Combining cameras and inertial measurement units (IMUs) has been proven
effective in motion tracking, as these two sensing modalities offer
complementary characteristics that are suitable for fusion. While most works
focus on global-shutter cameras and synchronized sensor measurements,
consumer-grade devices are mostly equipped with rolling-shutter cameras and
suffer from imperfect sensor synchronization. In this work, we propose a
nonlinear optimization-based monocular visual-inertial odometry (VIO) method
with a varying camera-IMU time offset modeled as an unknown variable. Our
approach is able to handle rolling-shutter effects and imperfect sensor
synchronization in a unified way. Additionally, we introduce an efficient
algorithm based on dynamic programming and a red-black tree to speed up IMU
integration over variable-length time intervals during the optimization. An
uncertainty-aware initialization is also presented to launch the VIO robustly.
Comparisons with state-of-the-art methods on the EuRoC dataset and mobile
phone data are shown to validate the effectiveness of our approach.
Comment: European Conference on Computer Vision 201
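The core idea, treating the camera-IMU time offset as one more unknown in the optimization, can be sketched as follows. The trajectory representation and names below are hypothetical, for illustration only:

```python
def interp_position(traj, t):
    # traj: time-sorted list of (timestamp, 1-D position) samples;
    # linearly interpolate the position at query time t.
    for (t0, p0), (t1, p1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return p0 + a * (p1 - p0)
    raise ValueError("query time outside trajectory")

traj = [(0.0, 0.0), (0.1, 1.0), (0.2, 2.0)]    # IMU-rate pose samples
t_d = 0.03           # current estimate of the camera-IMU time offset
t_image = 0.05       # image timestamp reported by the camera
pos = interp_position(traj, t_image + t_d)     # pose used for reprojection
```

During optimization, `t_d` is perturbed like any other state variable; rolling-shutter effects fit the same scheme because each image row can be given its own effective timestamp.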
An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous
Localization and Mapping (SLAM) system with loop-closing and relocalization
capabilities targeted for the underwater domain. Our previous work, SVIn,
augmented the state-of-the-art visual-inertial state estimation package OKVIS
to accommodate acoustic data from sonar in a nonlinear optimization-based
framework. This paper addresses drift and loss of localization -- one of the
main problems affecting other packages in the underwater domain -- by providing
the following main contributions: a robust initialization method that refines
scale using depth measurements, a fast preprocessing step to enhance image
quality, and a real-time loop-closing and relocalization method using a bag of
words (BoW). An additional contribution is the addition of depth measurements
from a pressure sensor to the tightly-coupled optimization formulation.
Experimental results on datasets collected with a custom-made underwater sensor
suite and an autonomous underwater vehicle in challenging underwater
environments with poor visibility demonstrate performance never achieved before
in terms of accuracy and robustness.
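Bag-of-words loop-closure detection, as used above, summarizes each keyframe as a histogram over visual words and flags a revisit when the current histogram closely matches a past one. The sketch below is a hedged illustration; the word names, similarity measure, and threshold are assumptions, not the paper's implementation:

```python
from collections import Counter
import math

def bow_similarity(h1, h2):
    # Cosine similarity between two visual-word histograms.
    words = set(h1) | set(h2)
    dot = sum(h1.get(w, 0) * h2.get(w, 0) for w in words)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2)

past = Counter(["arch", "rock", "coral", "rock"])      # stored keyframe
current = Counter(["rock", "coral", "rock", "fish"])   # incoming keyframe
loop_detected = bow_similarity(past, current) > 0.7    # threshold assumed
```

Practical systems weight words by inverse document frequency and verify candidates geometrically before accepting a loop closure.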
An Open Source, Fiducial Based, Visual-Inertial Motion Capture System
Many robotic tasks rely on the accurate localization of moving objects within
a given workspace. This information about the objects' poses and velocities is
used for control, motion planning, navigation, interaction with the
environment, or verification. Motion capture systems are often used to obtain
such a state estimate. However, these systems are often costly, limited in
workspace size, and not suitable for outdoor use. Therefore, we propose a
lightweight and easy-to-use visual-inertial Simultaneous Localization and
Mapping approach that leverages cost-efficient, paper-printable artificial
landmarks, so-called fiducials. Results show that by fusing visual and inertial
data, the system provides accurate estimates and is robust against fast motions
and changing lighting conditions. Tight integration of the estimation of sensor
and fiducial poses, as well as extrinsics, ensures accuracy and map consistency
and avoids the need for precalibration. By providing an open-source
implementation and various datasets, partially with ground-truth information,
we enable community members to run, test, modify, and extend the system, either
using these datasets or by running the system directly on their own robotic
setups.
Comment: To appear in The International Conference on Information Fusion
(FUSION) 201
Visual-Inertial Odometry of Aerial Robots
Visual-inertial odometry (VIO) is the process of estimating the state (pose
and velocity) of an agent (e.g., an aerial robot) using only the input of one
or more cameras plus one or more inertial measurement units (IMUs) attached to
it. VIO is the only viable alternative to GPS and lidar-based odometry for
achieving accurate state estimation. Since both cameras and IMUs are very
cheap, these sensor types are ubiquitous in today's aerial robots.
Comment: Accepted in the Encyclopedia of Robotics, Springer
Monocular visual-inertial SLAM algorithm combined with wheel speed anomaly detection
To address the weak observability of monocular visual-inertial odometry on
ground-based mobile robots, this paper proposes a monocular visual-inertial
SLAM algorithm combined with wheel speed anomaly detection. The algorithm uses
a wheel speed odometer pre-integration method to add the wheel speed
measurement to the least-squares problem in a tightly coupled manner. For
abnormal motion situations, such as skidding and abduction, this paper adopts
a torque-based control method for the Mecanum mobile chassis. This method uses
the motion constraint error to estimate the reliability of the wheel speed
measurement. At the same time, to prevent incorrect chassis speed measurements
from negatively influencing robot pose estimation, this paper uses three
methods to detect abnormal chassis movement and analyze the chassis movement
status in real time. When the chassis movement is determined to be abnormal,
the wheel odometer pre-integration measurement of the current frame is removed
from the state estimation equation, thereby ensuring the accuracy and
robustness of the state estimation. Experimental results show that the
accuracy and robustness of the proposed method are better than those of a
monocular visual-inertial odometer.
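The rejection logic the abstract describes can be sketched as a simple gate on the motion-constraint error: when the wheel-odometry displacement disagrees with the displacement predicted by the visual-inertial estimate, the wheel term is dropped for that frame. The residual definition and threshold below are assumptions for illustration:

```python
def wheel_measurement_ok(d_wheel, d_pred, sigma, n_sigmas=3.0):
    # Gate the wheel-odometry displacement d_wheel against the displacement
    # d_pred predicted by the visual-inertial estimate; reject it when the
    # discrepancy exceeds n_sigmas standard deviations.
    return abs(d_wheel - d_pred) <= n_sigmas * sigma

# Keep only consistent wheel pre-integration terms in the estimation problem.
frames = [(0.50, 0.48),   # normal rolling: wheel and VIO agree
          (0.52, 0.10)]   # skidding: wheels spin, robot barely moves
kept = [f for f in frames if wheel_measurement_ok(*f, sigma=0.02)]
```

The second frame is rejected, so its wheel pre-integration measurement never enters the least-squares problem, which is the behavior the abstract attributes to the anomaly detector.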
A Survey of Simultaneous Localization and Mapping with an Envision in 6G Wireless Networks
Simultaneous Localization and Mapping (SLAM) achieves simultaneous positioning
and map construction based on self-perception. This paper provides an overview
of SLAM, including Lidar SLAM, visual SLAM, and their fusion. For Lidar and
visual SLAM, the survey covers the basic types and products of sensors,
open-source systems and their history, embedded deep learning, and challenges
and future directions. Additionally, visual-inertial odometry is discussed.
For Lidar-visual fused SLAM, the paper highlights multi-sensor calibration and
fusion at the hardware, data, and task layers. Open questions and forward
thinking, with a vision of 6G wireless networks, conclude the paper. The
contributions of this paper can be summarized as follows: it provides a
high-quality, full-scale overview of SLAM that is accessible to new
researchers seeking to grasp the development of the field, and it can also
serve experienced researchers as a reference for finding new research
directions.
Comment: Comments are welcome and can be sent to the email addresses in the
paper
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
This paper presents ORB-SLAM3, the first system able to perform visual,
visual-inertial, and multi-map SLAM with monocular, stereo, and RGB-D cameras,
using pinhole and fisheye lens models. The first main novelty is a
feature-based, tightly integrated visual-inertial SLAM system that fully relies
on maximum-a-posteriori (MAP) estimation, even during the IMU initialization
phase. The result is a system that operates robustly in real time, in small and
large, indoor and outdoor environments, and is 2 to 5 times more accurate than
previous approaches. The second main novelty is a multiple-map system that
relies on a new place recognition method with improved recall. Thanks to it,
ORB-SLAM3 is able to survive long periods of poor visual information: when it
gets lost, it starts a new map that will be seamlessly merged with previous
maps when revisiting mapped areas. Compared with visual odometry systems that
only use information from the last few seconds, ORB-SLAM3 is the first system
able to reuse all previous information in all algorithm stages. This allows
bundle adjustment to include co-visible keyframes that provide high-parallax
observations, boosting accuracy, even if they are widely separated in time or
come from a previous mapping session. Our experiments show that, in all sensor
configurations, ORB-SLAM3 is as robust as the best systems available in the
literature, and significantly more accurate. Notably, our stereo-inertial SLAM
achieves an average accuracy of 3.6 cm on the EuRoC drone dataset and 9 mm
under quick hand-held motions in the room sequences of the TUM-VI dataset, a
setting representative of AR/VR scenarios. For the benefit of the community,
we make the source code public.
Stereo Visual Inertial LiDAR Simultaneous Localization and Mapping
Simultaneous Localization and Mapping (SLAM) is a fundamental task in mobile
and aerial robotics. LiDAR-based systems have proven superior to vision-based
systems due to their accuracy and robustness. In spite of this superiority,
pure LiDAR-based systems fail in certain degenerate cases, such as traveling
through a tunnel. We propose Stereo Visual Inertial LiDAR (VIL) SLAM, which
performs better in these degenerate cases and has comparable performance in
all other cases. VIL-SLAM accomplishes this by incorporating tightly-coupled
stereo visual-inertial odometry (VIO) with LiDAR mapping and LiDAR-enhanced
visual loop closure. The system generates loop-closure-corrected 6-DOF LiDAR
poses in real time and 1 cm voxel dense maps in near real time. VIL-SLAM
demonstrates improved accuracy and robustness compared to state-of-the-art
LiDAR methods.
Comment: Submitted to RA-L with IROS 2019 option