1,093 research outputs found
Invariant EKF Design for Scan Matching-aided Localization
Localization in indoor environments is a technique which estimates the
robot's pose by fusing data from onboard motion sensors with readings of the
environment, in our case obtained by scan matching point clouds captured by a
low-cost Kinect depth camera. We develop both an Invariant Extended Kalman
Filter (IEKF)-based and a Multiplicative Extended Kalman Filter (MEKF)-based
solution to this problem. The two designs are successfully validated in
experiments and demonstrate the advantage of the IEKF design.
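The advantage of the invariant formulation comes from how the estimation error is defined. A minimal sketch on SE(2) (a hypothetical numeric example, not the paper's implementation) shows that the left-invariant error is unaffected when both the true and estimated poses are moved by the same rigid transform:

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Homogeneous 3x3 matrix for an SE(2) pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical true and estimated poses.
X_true = se2_matrix(1.0, 2.0, 0.30)
X_est = se2_matrix(1.1, 1.9, 0.25)

# Left-invariant error used by the IEKF: eta = X_est^{-1} X_true.
eta = np.linalg.inv(X_est) @ X_true

# Applying the same rigid transform G to both poses leaves eta unchanged;
# this invariance is what gives the IEKF its favorable error dynamics.
G = se2_matrix(0.5, -0.4, 1.0)
eta_shifted = np.linalg.inv(G @ X_est) @ (G @ X_true)
```

In contrast, the additive error of a conventional EKF changes under such a transform, which is why the invariant design tends to behave better under large orientation uncertainty.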
A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN)
is presented. Visual and inertial perception are two complementary sensing
modalities. Cameras and inertial measurement units (IMU) are the corresponding
sensors for these two modalities. The low cost and light weight of camera-IMU
sensor combinations make them ubiquitous in robotic navigation. Visual-inertial
Navigation is a state estimation problem that estimates the ego-motion and
local environment of the sensor platform. This paper presents visual-inertial
navigation in the classical state estimation framework, first illustrating the
estimation problem in terms of state variables and system models, including
the parameterization of the related quantities, the IMU dynamic and camera
measurement models, and the corresponding probabilistic graphical model
(factor graph). Secondly, we investigate the existing model-based estimation
methodologies, which involve filter-based and optimization-based frameworks
and the related on-manifold operations. We also discuss the calibration of the
relevant parameters and the initialization of the states of interest in
optimization-based frameworks. Then the evaluation and improvement of VIN in
terms of accuracy, efficiency, and robustness are discussed. Finally, we
briefly mention the recent development of learning-based methods that may
become alternatives to traditional model-based methods.
Comment: 35 pages, 10 figures
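As a concrete illustration of the IMU dynamic model discussed in the article, the standard kinematic propagation (rotation, velocity, and position under gravity) can be sketched as follows; the simple Euler discretization and all names here are our own simplification, not the article's formulation:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(a) @ b == cross(a, b)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def imu_propagate(R, v, p, omega, acc, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of the usual IMU kinematics:
       p <- p + v dt + 0.5 (R a + g) dt^2,  v <- v + (R a + g) dt,
       R <- R Exp(omega dt)."""
    p_new = p + v * dt + 0.5 * (R @ acc + g) * dt ** 2
    v_new = v + (R @ acc + g) * dt
    R_new = R @ so3_exp(omega * dt)
    return R_new, v_new, p_new

# A stationary IMU senses -g in the body frame, so one propagation step
# leaves the state unchanged.
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
R, v, p = imu_propagate(R, v, p, np.zeros(3), np.array([0.0, 0.0, 9.81]), 0.01)
```

A real VIN system would additionally model gyroscope and accelerometer biases and inject process noise, which is where the on-manifold operations surveyed in the article come in.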
Computationally-efficient visual inertial odometry for autonomous vehicle
This thesis presents the design, implementation, and validation of a novel
nonlinear-filtering-based Visual Inertial Odometry (VIO) framework for robotic
navigation in GPS-denied environments. The system attempts to track the
vehicle’s ego-motion at each time
instant while capturing the benefits of both the camera information and the Inertial Measurement
Unit (IMU). VIO demands considerable computational resources and processing
time, and this makes the hardware implementation quite challenging for micro- and nanorobotic
systems. In many cases, the VIO process selects a small subset of tracked features
to reduce the computational cost. VIO estimation also suffers from the inevitable accumulation
of error. This limitation makes the estimation gradually diverge and even fail to
track the vehicle trajectory over long-term operation. Deploying optimization for the entire
trajectory helps to minimize the accumulative errors, but increases the computational cost
significantly. The VIO hardware implementation can utilize a more powerful processor
and specialized hardware computing platforms, such as Field Programmable Gate Arrays,
Graphics Processing Units and Application-Specific Integrated Circuits, to accelerate the
execution. However, the hardware still has to perform the same computational
steps with similar complexity. Processing data at a higher frequency increases
energy consumption
significantly. The development of advanced hardware systems is also expensive and
time-consuming. Consequently, the approach of developing an efficient algorithm will be
beneficial with or without hardware acceleration. The research described in this thesis
proposes multiple solutions to accelerate the visual inertial odometry computation while
maintaining estimation accuracy over long-term operation comparable to
state-of-the-art algorithms.
This research has resulted in three significant contributions. First, this research involved
the design and validation of a novel nonlinear filtering sensor-fusion algorithm using trifocal
tensor geometry and a cubature Kalman filter. The combination handles the
system nonlinearity effectively while significantly reducing the computational
cost and system complexity.
Second, this research develops two solutions to address the error accumulation
issue. For standalone self-localization projects, the first solution applies a local optimization
procedure for the measurement update, which performs multiple corrections on a single
measurement to optimize the latest filter state and covariance. For larger navigation
projects, the second solution integrates VIO with additional pseudo-ranging measurements
between the vehicle and multiple beacons in order to bound the accumulative errors. Third,
this research develops a novel parallel-processing VIO algorithm to speed up the execution
using a multi-core CPU. This allows the distribution of the filtering computation on each
core to process and optimize each feature measurement update independently.
The performance of the proposed visual inertial odometry framework is evaluated using
publicly-available self-localization datasets, for comparison with some other open-source
algorithms. The results illustrate that the proposed VIO framework is able to improve the
VIO’s computational efficiency without the installation of specialized hardware computing
platforms and advanced software libraries.
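The cubature Kalman filter at the heart of the first contribution propagates the state through a deterministic set of 2n equally weighted points. The point generation of the third-degree spherical-radial rule can be sketched like this (illustrative code, not the thesis implementation):

```python
import numpy as np

def cubature_points(x, P):
    """2n cubature points of the third-degree spherical-radial rule:
       x_i = x +/- sqrt(n) * S e_i, with P = S S^T and weight 1/(2n) each."""
    n = x.size
    S = np.linalg.cholesky(P)
    pts = np.empty((2 * n, n))
    for i in range(n):
        pts[i] = x + np.sqrt(n) * S[:, i]
        pts[n + i] = x - np.sqrt(n) * S[:, i]
    return pts

x = np.array([1.0, 2.0])
P = np.diag([0.04, 0.09])
pts = cubature_points(x, P)

# The equally weighted point set reproduces the mean and covariance exactly.
mean = pts.mean(axis=0)
cov = (pts - x).T @ (pts - x) / len(pts)
```

Unlike the unscented transform, this rule has no tunable spread parameters, which contributes to the reduced system complexity the thesis reports.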
On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However,
real-time optimization quickly becomes infeasible as the trajectory grows over
time; this problem is further exacerbated by the fact that inertial
measurements arrive at a high rate, leading to fast growth of the number of
variables in the optimization. In this paper, we address this issue by
preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a \emph{preintegration theory} that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the \emph{maximum a posteriori} state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a \emph{structureless} model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
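The core idea of preintegration, folding many high-rate IMU samples into a single relative-motion constraint between keyframes, can be sketched for the rotation part (our own minimal illustration; the paper additionally handles velocity, position, noise propagation, and bias correction on the manifold):

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate_rotation(gyro_samples, dt):
    """Fold a window of gyroscope readings into one relative-rotation
    constraint between keyframes: Delta_R = prod_k Exp(omega_k * dt)."""
    dR = np.eye(3)
    for omega in gyro_samples:
        dR = dR @ so3_exp(omega * dt)
    return dR

# 100 samples of a constant 0.1 rad/s yaw rate over 1 s integrate to a
# single 0.1 rad relative rotation, regardless of the sample rate.
gyro = [np.array([0.0, 0.0, 0.1])] * 100
dR = preintegrate_rotation(gyro, 0.01)
```

The optimizer then sees one factor per keyframe pair instead of hundreds of raw IMU variables, which is what keeps the problem size bounded.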
Joint Localization Based on Split Covariance Intersection on the Lie Group
This paper presents a pose fusion method that
accounts for the possible correlations among measurements.
The proposed method can handle data fusion problems whose
uncertainty has both an independent part and a dependent part.
Different from existing methods, the uncertainties of the
various states or measurements are modeled on the Lie algebra
and projected to the manifold through the exponential map,
which is more precise than modeling in the vector space. The
treatment of the correlation is based on the theory of covariance
intersection, where the independent and dependent parts are split
to yield a more consistent result. In this paper, we provide a novel
correlated pose fusion algorithm on the manifold.
Theoretical derivation and analysis are detailed first, and then
the experimental results are presented to support the proposed
theory. The main contributions are threefold: (1) We provide a
theoretical foundation for the split covariance intersection filter
performed on the manifold, where the uncertainty is associated
with the Lie algebra. (2) The proposed method gives an explicit
fusion formalism on SE(3) and SE(2), which covers most
use cases in the field of robotics. (3) We present a localization
framework that works for both single-robot and multi-robot
systems, where not only is the fusion with possible correlation
derived on the manifold, but the state evolution and relative pose
computation are also performed on the manifold. Experimental
results validate its advantage over state-of-the-art methods.
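The paper builds on covariance intersection, which fuses two estimates consistently when their cross-correlation is unknown. A plain vector-space sketch of the underlying rule follows (the paper's split variant separates independent and dependent parts and works on the Lie algebra of SE(3)/SE(2) instead):

```python
import numpy as np

def covariance_intersection(a, Pa, b, Pb, w):
    """Classic covariance intersection for w in [0, 1]:
         P^-1 = w * Pa^-1 + (1 - w) * Pb^-1
         x    = P (w * Pa^-1 a + (1 - w) * Pb^-1 b)
       The fused (x, P) is consistent for any w, whatever the true
       correlation between the two estimates."""
    Ia, Ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(w * Ia + (1.0 - w) * Ib)
    x = P @ (w * Ia @ a + (1.0 - w) * Ib @ b)
    return x, P

# Two correlated estimates of the same 2D quantity, fused with w = 0.5.
a, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
b, Pb = np.array([1.2, 0.2]), np.diag([4.0, 1.0])
x, P = covariance_intersection(a, Pa, b, Pb, 0.5)
```

In practice w is chosen to minimize, e.g., the trace of P; the split variant applies this blending only to the dependent part of the uncertainty, which is what yields the tighter yet still consistent result described above.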
Robust state estimation methods for robotics applications
State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks like mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by using data obtained from multiple sensors and fusing them in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy, erroneous, or even missing informative data. Common examples of such situations include wrong feature matching between images or point clouds, false loop-closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), multipath errors for GNSS (signals from satellites bouncing off tall structures like buildings before reaching the receiver), and more. This work studies existing and new ways in which standard estimation algorithms like the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters in wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
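One standard way to make least-squares estimators robust, as studied in the last part of this work, is to down-weight large residuals inside iteratively reweighted least squares. A minimal sketch with the Huber weight (the threshold and names here are illustrative, not the thesis's choices):

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight of the Huber robust cost: residuals inside the inlier
    band keep full weight; larger ones are down-weighted as delta / |r|."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

# An outlier residual of 10 contributes only a tenth of its normal weight,
# so a single bad feature match or multipath-corrupted range cannot drag
# the whole solution away.
weights = huber_weight(np.array([0.3, -0.8, 10.0]))
```

Estimating the inlier noise threshold, as proposed here, amounts to choosing `delta` from the data rather than fixing it by hand.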