Gaussian Process Preintegration for Inertial-Aided Navigation Systems

Abstract

University of Technology Sydney. Faculty of Engineering and Information Technology.

To perform any degree of autonomy, a system needs to localise itself, which generally requires knowledge of its environment. While satellite technologies such as GPS or Galileo allow individuals to navigate throughout the world, the accuracy of such systems, and the need for a direct view of the sky, do not match the precision and robustness requirements for deploying robots in the real world. To overcome these limitations, roboticists developed localisation and mapping algorithms, traditionally based on camera images or radar/LiDAR data. Over the last two decades, Inertial Measurement Units (IMUs) have become ubiquitous, and LiDAR-inertial and visual-inertial pose estimation algorithms now represent the majority of the state estimation literature. Preintegration has become a standard method to aggregate IMU readings into pseudo-measurements for navigation systems. This thesis presents a novel preintegration theory that leverages data-driven continuous representations of the inertial data to perform analytical inference of the signal integrals. The proposed method probabilistically infers the pseudo-measurements, called Gaussian Preintegrated Measurements (GPMs), over any time interval, using Gaussian Process (GP) regression to model the IMU measurements and leveraging the application of linear operators to the GP covariance kernels. Thus, the GPMs do not rely on any explicit motion model. This thesis presents two inertial-aided systems that leverage the GPMs in offline batch-optimisation algorithms. The first is a framework called IN2LAAMA. It thoroughly addresses the motion-distortion issue present in most of today's LiDAR data by computing GPMs for each of the LiDAR points. The second GPM application is an event-based visual-inertial odometry method that uses lines to represent the environment.
Event cameras generate highly asynchronous streams of events, each triggered individually by a camera pixel upon an illumination change. Our framework, called IDOL, estimates the system's pose as well as the position of 3D lines in the environment by considering each camera event individually in the framework's cost function (no aggregation into image-like data). The GPMs allow for a continuous characterisation of the system's trajectory, therefore accommodating the asynchronous nature of event-camera data. Extensive benchmarking of the GPMs is performed on simulated data. The performance of IN2LAAMA is thoroughly demonstrated through simulated and real-world experiments, both indoors and outdoors. Evaluations on public datasets show that IDOL performs in the same order of magnitude as current frame-based state-of-the-art visual-inertial odometry frameworks.
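The core idea behind the GPMs, applying a linear operator (here, integration) to a GP covariance kernel so that integrals of the regressed signal are inferred analytically, can be illustrated with a minimal sketch. The example below is not the thesis' implementation: it assumes a one-dimensional angular-rate signal, a squared-exponential kernel, and hypothetical hyperparameters, and shows how the integral of the GP posterior mean over an interval reduces to an inner product with an analytically integrated kernel row (no numerical quadrature).

```python
import numpy as np
from scipy.special import erf

def rbf(t1, t2, sig=1.0, ell=0.5):
    """Squared-exponential kernel k(t1, t2) evaluated pairwise."""
    return sig**2 * np.exp(-(t1[:, None] - t2[None, :])**2 / (2 * ell**2))

def int_rbf(T, t, sig=1.0, ell=0.5):
    """Analytical integral of k(., t_i) over [0, T]: the integration
    operator applied to the kernel, in closed form via erf."""
    c = sig**2 * ell * np.sqrt(np.pi / 2)
    return c * (erf((T - t) / (np.sqrt(2) * ell)) + erf(t / (np.sqrt(2) * ell)))

# Noisy samples of a toy angular-rate signal omega(t) = sin(t)
t = np.linspace(0.0, 2.0, 40)
rng = np.random.default_rng(0)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)

# Standard GP regression weights: alpha = (K + sigma_n^2 I)^{-1} y
K = rbf(t, t) + 1e-4 * np.eye(t.size)
alpha = np.linalg.solve(K, y)

# Integrated pseudo-measurement over [0, 2]: because integration is a
# linear operator, it moves onto the kernel, leaving an inner product.
delta_theta = int_rbf(2.0, t) @ alpha
print(delta_theta)  # close to the analytic value 1 - cos(2)
```

The same mechanism extends to the full preintegration setting, where rotation, velocity, and position pseudo-measurements require nested integrals of the IMU signals; the GP formulation keeps those integrals analytical rather than relying on an explicit motion model.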
