2 research outputs found
A Lightweight and Accurate Localization Algorithm Using Multiple Inertial Measurement Units
This paper proposes a novel inertial-aided localization approach by fusing
information from multiple inertial measurement units (IMUs) and exteroceptive
sensors. An IMU is a low-cost motion sensor that provides measurements of the
angular velocity and gravity-compensated linear acceleration of a moving
platform, and it is widely used in modern localization systems. To date, most
existing inertial-aided localization methods exploit only a single IMU. While
single-IMU localization yields acceptable accuracy and robustness in many use
cases, the overall performance can be further improved by using multiple
IMUs. To this end, we propose a lightweight and accurate algorithm for fusing
measurements from multiple IMUs and exteroceptive sensors, which is able to
obtain noticeable performance gain without incurring additional computational
cost. To achieve this, we first probabilistically map measurements from all
IMUs onto a virtual IMU. This step is performed by stochastic estimation with
least-squares estimators and probabilistic marginalization of inter-IMU
rotational accelerations. Subsequently, the propagation model for both the
state and the error state of the virtual IMU is derived, which enables the use
of classical filter-based or optimization-based sensor-fusion algorithms for
localization. Finally, results from both simulation and real-world tests are
provided, which demonstrate that the proposed algorithm outperforms competing
algorithms by noticeable margins.
Comment: Accepted to IEEE Robotics and Automation Letters (RA-L), to appear.
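The virtual-IMU construction described in this abstract lends itself to a compact worked example. The sketch below is only a simplified illustration of the idea, not the authors' algorithm: it assumes all IMUs are already expressed in a common body frame, that the lever arm r_i of each IMU relative to the virtual frame is known, and that simple per-IMU weights stand in for the full stochastic model. It jointly solves for the virtual acceleration and the angular acceleration by weighted least squares and then discards the latter, loosely mirroring the marginalization of inter-IMU rotational accelerations.

```python
# Minimal sketch (not the paper's implementation): fuse N accelerometer/gyroscope
# triads into one "virtual IMU" sample by weighted least squares.
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fuse_virtual_imu(accels, gyros, lever_arms, weights=None):
    """accels, gyros, lever_arms: (N, 3) arrays; lever_arms are the offsets r_i
    of each IMU from the virtual frame origin, expressed in the body frame.
    Returns (a_virtual, w_virtual) for the virtual IMU."""
    accels = np.asarray(accels, dtype=float)
    gyros = np.asarray(gyros, dtype=float)
    lever_arms = np.asarray(lever_arms, dtype=float)
    n = accels.shape[0]
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)

    # All gyroscopes observe the same body angular velocity: weighted average.
    w_virtual = (w[:, None] * gyros).sum(axis=0) / w.sum()

    # Rigid-body kinematics: a_i = a_v + omega_dot x r_i + omega x (omega x r_i).
    # Stack one 3-row block per IMU and solve for x = [a_v, omega_dot].
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i in range(n):
        centripetal = np.cross(w_virtual, np.cross(w_virtual, lever_arms[i]))
        A[3 * i:3 * i + 3, :3] = np.eye(3)
        A[3 * i:3 * i + 3, 3:] = -skew(lever_arms[i])  # omega_dot x r_i term
        b[3 * i:3 * i + 3] = accels[i] - centripetal
        A[3 * i:3 * i + 3] *= np.sqrt(w[i])            # apply per-IMU weight
        b[3 * i:3 * i + 3] *= np.sqrt(w[i])

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    a_virtual = x[:3]                                  # omega_dot (x[3:]) discarded
    return a_virtual, w_virtual
```

In a full pipeline, the virtual measurements produced this way would then be fed to the derived propagation model exactly like an ordinary single-IMU stream.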
MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System
As cameras and inertial sensors become ubiquitous in mobile devices and
robots, there is great potential in designing visual-inertial navigation
systems (VINS) for efficient, versatile 3D motion tracking that utilize any
(multiple)
available cameras and inertial measurement units (IMUs) and are resilient to
sensor failures or measurement depletion. To this end, rather than following
the standard VINS paradigm of a minimal sensing suite with a single camera and
a single IMU, in this
paper we design a real-time consistent multi-IMU multi-camera (MIMC)-VINS
estimator that is able to seamlessly fuse multi-modal information from an
arbitrary number of uncalibrated cameras and IMUs. Within an efficient
multi-state constraint Kalman filter (MSCKF) framework, the proposed MIMC-VINS
algorithm optimally fuses asynchronous measurements from all sensors, while
providing smooth, uninterrupted, and accurate 3D motion tracking even if some
sensors fail. The key idea of the proposed MIMC-VINS is to perform high-order
on-manifold state interpolation to efficiently process all available visual
measurements without the added computational burden of estimating additional
sensor poses at asynchronous imaging times. In order to fuse the
information from multiple IMUs, we propagate a joint system consisting of all
IMU states while enforcing rigid-body constraints between the IMUs during the
filter update stage. Lastly, we estimate online both spatiotemporal extrinsic
and visual intrinsic parameters to make our system robust to errors in prior
sensor calibration. The proposed system is extensively validated in both
Monte-Carlo simulations and real-world experiments.
Comment: 20 pages, 10 figures, 13 tables.
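The abstract's central device, relating asynchronous camera measurements to existing state clones through on-manifold interpolation, can be pictured with a short sketch. The snippet below is a simplified first-order version (the paper describes a higher-order scheme) and is not the MIMC-VINS implementation; the pose names and the use of SciPy's rotation utilities are illustrative assumptions.

```python
# Minimal first-order sketch of on-manifold pose interpolation: given two cloned
# poses (R0, p0) at time t0 and (R1, p1) at time t1, produce the pose at an
# asynchronous image time t in (t0, t1) without adding a new state to the filter.
import numpy as np
from scipy.spatial.transform import Rotation as R

def interpolate_pose(t, t0, R0, p0, t1, R1, p1):
    s = (t - t0) / (t1 - t0)
    # Geodesic interpolation on SO(3): R(t) = R0 * Exp(s * Log(R0^T R1)).
    d_rotvec = R.from_matrix(R0.T @ R1).as_rotvec()    # Log map of relative rotation
    Rt = R0 @ R.from_rotvec(s * d_rotvec).as_matrix()  # Exp map back onto SO(3)
    pt = (1.0 - s) * p0 + s * p1                       # linear position interpolation
    return Rt, pt
```

In a filter such as the MSCKF, the visual measurement taken at time t would be linearized through this interpolation, so its Jacobians chain back onto the two bounding clone states rather than onto a newly instantiated pose.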