Towards consistent visual-inertial navigation
Visual-inertial navigation systems (VINS) have prevailed in various applications, owing in part to their complementary sensing capabilities as well as their decreasing cost and size. While many current VINS algorithms suffer from inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach that yields consistent estimates. To this end, we impose both state-transition and observability constraints in computing the EKF Jacobians so that the resulting linearized system best approximates the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte Carlo simulations and real-world experimental tests.
United States. Office of Naval Research (N00014-12-1-0093, N00014-10-1-0936, N00014-11-1-0688, and N00014-13-1-0588); National Science Foundation (U.S.) (Grant IIS-1318392)
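The observability-constrained update described in this abstract can be illustrated with a small sketch. This is not the paper's implementation; the function names, matrix shapes, and the least-squares projector are assumptions, but the idea is the one stated above: remove from the measurement Jacobian any component lying along a known unobservable subspace, so the filter gains no spurious information in those directions.

```python
import numpy as np

def project_onto_observable(H, N):
    """Project a measurement Jacobian H onto the observable subspace.

    N (n x k) is assumed to span the unobservable subspace (e.g. global
    position and yaw in VINS). Subtracting H's component along span(N)
    ensures the EKF update gains no information in unobservable directions.
    Illustrative sketch only; names and shapes are assumptions.
    """
    # Orthogonal projector onto span(N), then remove that component from H.
    P = N @ np.linalg.solve(N.T @ N, N.T)
    return H - H @ P
```

After projection, `H @ N` is (numerically) zero, which is exactly the "no spurious information" condition the abstract refers to.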
Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.
Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications Based on Location Clues 201
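The match-filtering step described in this abstract can be sketched as follows. The exact statistic the paper uses is not given here, so this sketch assumes a simple mean/standard-deviation gate on pair distances; the function name and the threshold parameter `k` are illustrative.

```python
import numpy as np

def filter_matches(src, dst, k=2.0):
    """Reject matched 3D point pairs whose separation is a distance outlier.

    A hedged sketch of the idea above: estimate the distribution of the
    per-pair distances (here via mean and standard deviation, which may
    differ from the paper's exact statistic) and keep only pairs within
    k standard deviations of the mean.
    """
    d = np.linalg.norm(src - dst, axis=1)       # per-pair distances
    mu, sigma = d.mean(), d.std()
    keep = np.abs(d - mu) <= k * sigma          # inlier mask
    return src[keep], dst[keep], keep
```

On a GPU the same per-pair reduction and mask are embarrassingly parallel, which is consistent with the abstract's claim of an efficient GPU implementation.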
GPS-VIO Fusion with Online Rotational Calibration
Accurate global localization is crucial for autonomous navigation and
planning. To this end, various GPS-aided Visual-Inertial Odometry (GPS-VIO)
fusion algorithms are proposed in the literature. This paper presents a novel
GPS-VIO system that is able to significantly benefit from the online
calibration of the rotational extrinsic parameter between the GPS reference
frame and the VIO reference frame. The underlying reason is that this parameter
is observable, for which this paper provides a novel proof through nonlinear
observability analysis. We also evaluate the proposed algorithm extensively on
diverse platforms, including a flying UAV and a driving vehicle. The
experimental results support the observability analysis and show increased
localization accuracy in comparison to state-of-the-art (SOTA) tightly-coupled algorithms.
Comment: Accepted by ICRA 202
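The quantity being calibrated above, the rotation between the GPS and VIO reference frames, can be illustrated offline with a batch alignment, even though the paper estimates it online inside the filter. The sketch below solves the orthogonal Procrustes (Wahba/Kabsch) problem between corresponding position tracks; the function name and frame conventions are assumptions.

```python
import numpy as np

def align_frames(p_vio, p_gps):
    """Estimate the rotation R mapping VIO-frame positions to the GPS frame.

    Offline illustration of the rotational extrinsic discussed above:
    given corresponding (n x 3) position tracks in both frames, the
    optimal rotation follows from an SVD of the cross-covariance.
    """
    a = p_vio - p_vio.mean(axis=0)              # center both tracks
    b = p_gps - p_gps.mean(axis=0)
    U, _, Vt = np.linalg.svd(b.T @ a)           # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    return U @ D @ Vt
```

The fact that this batch problem has a unique solution for non-degenerate trajectories is the intuition behind the observability claim; the paper's contribution is proving it for the online, filter-based formulation.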
Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
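The marginalization step this abstract relies on to bound the optimization window is, at its core, a Schur complement on the Gauss–Newton system. The sketch below shows that step in isolation; the function name and the dense linear-algebra formulation are assumptions (a real implementation exploits sparsity), but the block algebra is standard.

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m states out of a Gauss-Newton system H x = b.

    Sketch of the Schur-complement step used to bound a keyframe window:
    states leaving the window are folded into a dense prior on the
    remaining states instead of being discarded, preserving their
    information. Assumes H is symmetric positive definite.
    """
    H_mm, H_mr = H[:m, :m], H[:m, m:]
    H_rm, H_rr = H[m:, :m], H[m:, m:]
    b_m, b_r = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(H_mm)
    H_new = H_rr - H_rm @ Hmm_inv @ H_mr   # Schur complement
    b_new = b_r - H_rm @ Hmm_inv @ b_m
    return H_new, b_new
```

Solving the reduced system gives exactly the same estimate for the remaining states as solving the full system, which is why marginalization (unlike simply dropping old keyframes) does not lose information at the linearization point.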
InGVIO: A Consistent Invariant Filter for Fast and High-Accuracy GNSS-Visual-Inertial Odometry
Combining Global Navigation Satellite System (GNSS) with visual and inertial
sensors can give smooth pose estimation without drifting in geographical
coordinates. The fusion system gradually degrades to Visual-Inertial Odometry
(VIO) as the number of satellites decreases, which guarantees robust global
navigation in GNSS-unfriendly environments. In this letter, we propose an
open-sourced invariant filter-based platform, InGVIO, to tightly fuse
monocular/stereo visual-inertial measurements, along with raw data from GNSS,
i.e. pseudo ranges and Doppler shifts. InGVIO gives highly competitive results
in terms of accuracy and computational load compared to current graph-based and
`naive' EKF-based algorithms. Thanks to our proposed key-frame marginalization
strategies, the baseline for triangulation is large although only a few cloned
poses are kept. Besides, landmarks are anchored to a single cloned pose to fit
the nonlinear log-error form of the invariant filter while achieving decoupled
propagation with IMU states. Moreover, we exploit the infinitesimal symmetries
of the system, which gives equivalent results for the pattern of degenerate
motions and the structure of unobservable subspaces compared to our previous
work using observability analysis. We show that the properly-chosen invariant
error captures such symmetries and has intrinsic consistency properties. InGVIO
is tested on both open datasets and our proposed fixed-wing datasets with
variable levels of difficulty. The latter, to the best of our knowledge, are
the first datasets open-sourced to the community on a fixed-wing aircraft with
raw GNSS.
Comment: 8 pages, 8 figures; manuscript will be submitted to IEEE RA-L for possible publication
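The raw GNSS measurements the abstract mentions, pseudo-ranges, enter a fusion filter through a residual of roughly the following form. This is the textbook pseudorange model, not InGVIO's exact formulation (which works in the invariant-error parametrization); the function name and the single clock-bias state are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pseudorange_residual(p_rx, clk_bias_s, p_sat, rho_meas):
    """Residual of one GNSS pseudorange measurement.

    Textbook model: the measured pseudorange is the geometric range to
    the satellite plus the receiver clock bias expressed in meters.
    Atmospheric and satellite-clock terms are omitted for brevity.
    """
    rho_pred = np.linalg.norm(p_sat - p_rx) + C * clk_bias_s
    return rho_meas - rho_pred
```

Doppler shifts contribute an analogous residual on the range rate, which is what lets the fusion remain well constrained as the satellite count varies.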
Cooperative Visual-Inertial Sensor Fusion: Fundamental Equations
This paper provides a new fundamental theoretical result in the framework of cooperative visual-inertial sensor fusion. Specifically, the case of two aerial vehicles is investigated. Each vehicle is equipped with inertial sensors (accelerometer and gyroscope) and with a monocular camera. By using the monocular camera, each vehicle can observe the other vehicle. No additional camera observations (e.g., of external point features in the environment) are considered. First, the entire observable state is analytically derived. This state includes the relative position between the two aerial vehicles (which includes the absolute scale), the relative velocity and the three Euler angles that express the rotation between the two vehicle frames. Then, the basic equations that describe this system are analytically obtained. In other words, both the dynamics of the observable state and all the camera observations are expressed only in terms of the components of the observable state and in terms of the inertial measurements. These are the fundamental equations that fully characterize the problem of fusing visual and inertial data in the cooperative case. The last part of the paper describes the use of these equations to achieve the state estimation through an EKF. In particular, a simple manner to limit communication among the vehicles is discussed. Results obtained through simulations show the performance of the proposed solution, and in particular how it is affected by limiting the communication between the two vehicles.
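The relative-state dynamics this abstract derives have one intuitive consequence worth making concrete: when both accelerometer readings are rotated into a common frame, gravity cancels in the difference, so the relative translational state can be propagated from inertial data alone. The sketch below is illustrative only; the function name, the Euler integration, and the frame conventions are assumptions, not the paper's fundamental equations.

```python
import numpy as np

def propagate_relative_state(p_rel, v_rel, R1, a1, R2, a2, dt):
    """Euler-integrate the relative position/velocity between two vehicles.

    R1, R2 rotate each body's accelerometer reading a1, a2 into a common
    frame; the gravity term appears in both and cancels in the difference.
    Illustrative sketch under simplifying assumptions (no noise, no bias).
    """
    acc_rel = R1 @ a1 - R2 @ a2                    # gravity cancels here
    p_next = p_rel + v_rel * dt + 0.5 * acc_rel * dt**2
    v_next = v_rel + acc_rel * dt
    return p_next, v_next
```

A propagation step of this kind would form the prediction stage of the EKF the paper describes, with the mutual camera observations providing the update.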