Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, yet remain related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
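The probabilistic cost described in this abstract, a sum of Mahalanobis-weighted reprojection and inertial error terms, can be sketched as follows. This is an illustrative sketch only: the function name, the flat lists of residuals, and the information matrices are assumptions, not the paper's actual API.

```python
import numpy as np

def combined_cost(reproj_residuals, reproj_infos, inertial_residuals, inertial_infos):
    """Illustrative visual-inertial cost: 0.5 * sum of squared Mahalanobis
    norms over landmark reprojection terms and (pre-linearized) IMU terms."""
    cost = 0.0
    for e, W in zip(reproj_residuals, reproj_infos):
        cost += float(e.T @ W @ e)   # landmark reprojection term, info matrix W
    for e, W in zip(inertial_residuals, inertial_infos):
        cost += float(e.T @ W @ e)   # inertial term between consecutive keyframes
    return 0.5 * cost
```

In a real system such a cost would be minimized over the keyframe states with a sparse nonlinear least-squares solver, with older keyframes marginalized to bound the window size.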
Multi-Antenna Vision-and-Inertial-Aided CDGNSS for Micro Aerial Vehicle Pose Estimation
A system is presented for multi-antenna carrier phase differential GNSS (CDGNSS)-based pose (position and orientation) estimation aided by monocular visual measurements and a smartphone-grade inertial sensor. The system is designed for micro aerial vehicles, but can be applied generally for low-cost, lightweight, high-accuracy, geo-referenced pose estimation. Visual and inertial measurements enable robust operation despite GNSS degradation by constraining uncertainty in the dynamics propagation, which improves fixed-integer CDGNSS availability and reliability in areas with limited sky visibility. No prior work has demonstrated an increased CDGNSS integer fixing rate when incorporating visual measurements with smartphone-grade inertial sensing. A central pose estimation filter receives measurements from separate CDGNSS position and attitude estimators, visual feature measurements based on the ROVIO measurement model, and inertial measurements. The filter's pose estimates are fed back as a prior for CDGNSS integer fixing. A performance analysis under both simulated and real-world GNSS degradation shows that visual measurements greatly increase the availability and accuracy of low-cost inertial-aided CDGNSS pose estimation.
Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints.
This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently occur in outdoor environments due to the short detection range and sunlight interference. In depth-dropout conditions, only partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale-ambiguous position is cast into a directional constraint on the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay, even under small-parallax motion. If a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with inertial outputs in an extended Kalman filter framework. Flight results from indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach.
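The directional constraint mentioned above can be sketched as a residual that penalizes any component of the inertially predicted translation orthogonal to the scale-free direction measured by the camera. This is a minimal sketch under assumed conventions; the function and variable names are illustrative, not the paper's formulation.

```python
import numpy as np

def directional_residual(t_pred, d_meas):
    """Directional (epipolar-style) constraint residual.

    t_pred : inertially predicted translation (3-vector, metric scale).
    d_meas : scale-ambiguous translation direction from the RGB-D pose.
    Returns the component of t_pred orthogonal to d_meas, which is zero
    exactly when the predicted motion is parallel to the measured direction.
    """
    d = d_meas / np.linalg.norm(d_meas)   # unit direction of measured motion
    P = np.eye(3) - np.outer(d, d)        # projector onto d's orthogonal complement
    return P @ t_pred
```

Because the residual constrains only direction, not scale, it remains usable when depth (and hence metric scale) drops out, which is the point of the paper's fusion scheme.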
SPINS: Structure Priors aided Inertial Navigation System
Although Simultaneous Localization and Mapping (SLAM) has been an active research topic for decades, current state-of-the-art methods still suffer from instability or inaccuracy in many civilian environments, due to feature insufficiency or inherent estimation drift. To resolve these issues, we propose a navigation system combining SLAM and prior-map-based localization. Specifically, we additionally integrate line and plane features, which are ubiquitous and more structurally salient in civilian environments, into the SLAM to ensure feature sufficiency and localization robustness. More importantly, we incorporate general prior-map information into the SLAM to restrain its drift and improve its accuracy. To avoid rigorous association between prior information and local observations, we parameterize the prior knowledge as low-dimensional structural priors defined as relative distances/angles between different geometric primitives. The localization is formulated as a graph-based optimization problem that contains sliding-window-based variables and factors, including IMU, heterogeneous features, and structure priors. We also derive the analytical expressions of the Jacobians of the different factors to avoid automatic-differentiation overhead. To further alleviate the computational burden of incorporating structural prior factors, a selection mechanism based on the so-called information gain is adopted to incorporate only the most effective structure priors in the graph optimization. Finally, the proposed framework is extensively tested on synthetic data, public datasets, and, more importantly, on real UAV flight data obtained from a building inspection task. The results show that the proposed scheme can effectively improve the accuracy and robustness of localization for autonomous robots in civilian applications.
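The information-gain selection mechanism described above can be sketched as ranking candidate prior factors by how much each would shrink the state uncertainty, then keeping only the top few. The sketch below uses the reduction in log-determinant of the covariance after a linear (Kalman-style) update as the gain proxy; the function name, the candidate tuple layout, and this particular gain measure are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def select_priors(candidates, cov, top_k=2):
    """Rank candidate prior factors by information gain and keep the best.

    candidates : list of (name, H, R) with measurement Jacobian H and noise R.
    cov        : current state covariance.
    Gain proxy : decrease in log det(cov) after fusing the candidate alone.
    """
    base = np.linalg.slogdet(cov)[1]
    scored = []
    for name, H, R in candidates:
        S = H @ cov @ H.T + R                          # innovation covariance
        K = cov @ H.T @ np.linalg.inv(S)               # Kalman gain
        post = (np.eye(cov.shape[0]) - K @ H) @ cov    # posterior covariance
        gain = base - np.linalg.slogdet(post)[1]       # entropy reduction
        scored.append((gain, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

A prior that constrains a poorly observed state direction scores a large gain and is kept, while redundant priors score low and are dropped, which is how the selection bounds the cost of adding structural factors.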