On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However, real-time
optimization quickly becomes infeasible as the trajectory grows over time. This
problem is exacerbated by the high rate of inertial measurements, which leads
to fast growth in the number of variables in the optimization. In this paper,
we address this issue by preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a \emph{preintegration theory} that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the \emph{maximum a posteriori} state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a \emph{structureless} model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.
Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
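As an illustration of the preintegration idea above, the following numpy sketch composes bias-corrected gyroscope increments on SO(3) into a single relative-rotation constraint between two keyframes. The function names, the constant-bias assumption, and the simple Euler composition are illustrative only, not the paper's implementation (which also preintegrates velocity and position and propagates the measurement noise):

```python
import numpy as np

def so3_exp(phi):
    """Exponential map from so(3) to SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate_rotation(gyro, dt, bias):
    """Compose bias-corrected gyro increments into one relative rotation,
    so the optimizer sees a single constraint instead of many raw samples."""
    dR = np.eye(3)
    for omega in gyro:
        dR = dR @ so3_exp((omega - bias) * dt)
    return dR
```

Because the result lives on the manifold, the composed increment stays a valid rotation matrix regardless of how many raw samples are folded in.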
A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN)
is presented. Visual and inertial perception are two complementary sensing
modalities. Cameras and inertial measurement units (IMU) are the corresponding
sensors for these two modalities. The low cost and light weight of camera-IMU
sensor combinations make them ubiquitous in robotic navigation. Visual-inertial
navigation is a state estimation problem that estimates the ego-motion of the
sensor platform and its local environment. This paper presents visual-inertial
navigation in the classical state estimation framework, first illustrating the
estimation problem in terms of state variables and system models, including the
parameterizations of the relevant quantities, the IMU dynamic and camera
measurement models, and the corresponding probabilistic graphical models
(factor graphs). Secondly, we survey existing model-based estimation
methodologies, covering filter-based and optimization-based frameworks and the
related on-manifold operations. We also discuss the calibration of relevant
parameters and the initialization of the states of interest in
optimization-based frameworks. We then discuss the evaluation and improvement
of VIN in terms of accuracy, efficiency, and robustness. Finally, we briefly
mention the recent development of learning-based methods that may become
alternatives to traditional model-based methods.
Comment: 35 pages, 10 figures
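The IMU dynamic model at the heart of any VIN system can be illustrated with a minimal kinematic propagation step. This is a textbook-style sketch under assumed conventions (world-frame gravity constant, first-order rotation update, no bias or noise terms), not code from the article:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world-frame gravity vector

def imu_propagate(p, v, R, acc, gyro, dt):
    """One Euler step of a standard IMU kinematic model: rotate the
    accelerometer reading into the world frame, add gravity, then
    integrate position, velocity, and orientation."""
    a_world = R @ acc + GRAVITY
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    # first-order rotation update; a full implementation uses the SO(3) exp map
    wx, wy, wz = gyro * dt
    dR = np.array([[1.0, -wz, wy],
                   [wz, 1.0, -wx],
                   [-wy, wx, 1.0]])
    R_new = R @ dR
    return p_new, v_new, R_new
```

A stationary, level IMU measures the reaction to gravity on its z-axis, so propagating it leaves the state unchanged.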
Computationally-efficient visual inertial odometry for autonomous vehicle
This thesis presents the design, implementation, and validation of a novel
nonlinear-filtering-based Visual Inertial Odometry (VIO) framework for robotic
navigation in GPS-denied environments. The system attempts to track the
vehicle's ego-motion at each time instant while capturing the benefits of both
the camera information and the Inertial Measurement Unit (IMU). VIO demands
considerable computational resources and processing time, which makes the
hardware implementation quite challenging for micro- and nano-robotic systems.
In many cases, the VIO process selects a small subset of tracked features to
reduce the computational cost. VIO estimation also suffers from the inevitable
accumulation of error. This limitation makes the estimation gradually diverge
and even fail to track the vehicle trajectory over long-term operation.
Deploying optimization over the entire trajectory helps to minimize the
accumulated errors, but increases the computational cost significantly. The
VIO hardware implementation can utilize a more powerful processor and
specialized hardware computing platforms, such as Field Programmable Gate
Arrays, Graphics Processing Units, and Application-Specific Integrated
Circuits, to accelerate the execution. However, the computation still needs to
perform identical computational steps with similar complexity, and processing
data at a higher frequency increases energy consumption significantly. The
development of advanced hardware systems is also expensive and time-consuming.
Consequently, developing an efficient algorithm is beneficial with or without
hardware acceleration. The research described in this thesis proposes multiple
solutions to accelerate the visual inertial odometry computation while
maintaining an estimation accuracy over long-term operation comparable to
state-of-the-art algorithms.
This research has resulted in three significant contributions. First, this research involved
the design and validation of a novel nonlinear filtering sensor-fusion algorithm using trifocal
tensor geometry and a cubature Kalman filter. This combination handles the
system nonlinearity effectively while significantly reducing the computational
cost and system complexity.
Second, this research develops two solutions to address the error accumulation
issue. For standalone self-localization projects, the first solution applies a local optimization
procedure for the measurement update, which performs multiple corrections on a single
measurement to optimize the latest filter state and covariance. For larger
navigation projects, the second solution integrates VIO with additional
pseudo-ranging measurements between the vehicle and multiple beacons in order
to bound the accumulated errors. Third,
this research develops a novel parallel-processing VIO algorithm to speed up the execution
using a multi-core CPU. This allows the distribution of the filtering computation on each
core to process and optimize each feature measurement update independently.
The performance of the proposed visual inertial odometry framework is evaluated
using publicly available self-localization datasets, for comparison with other
open-source algorithms. The results illustrate that the proposed VIO framework
is able to improve the VIO's computational efficiency without the installation
of specialized hardware computing platforms or advanced software libraries.
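The cubature Kalman filter used in the first contribution propagates the state distribution through a deterministic set of 2n equally weighted points. A minimal sketch of cubature point generation, assuming the standard third-degree spherical-radial rule (function name and layout are illustrative, not the thesis code):

```python
import numpy as np

def cubature_points(mu, P):
    """Generate the 2n cubature points of a CKF for mean mu and
    covariance P; every point carries the same weight 1/(2n)."""
    n = mu.size
    S = np.linalg.cholesky(P)                              # P = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # n x 2n directions
    return mu[:, None] + S @ xi                            # one column per point
```

By construction the point set reproduces the mean and covariance exactly, which is what lets the CKF avoid the Jacobians an EKF would need.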
Joint Localization Based on Split Covariance Intersection on the Lie Group
This paper presents a pose fusion method that accounts for possible
correlations among measurements. The proposed method can handle data fusion
problems whose uncertainty has both an independent part and a dependent part.
Unlike existing methods, the uncertainties of the various states or
measurements are modeled on the Lie algebra and projected to the manifold
through the exponential map, which is more precise than modeling them in a
vector space. The treatment of the correlation is based on the theory of
covariance intersection, where the independent and dependent parts are split
to yield a more consistent result. In this paper, we provide a novel
correlated pose fusion algorithm on the manifold. Theoretical derivation and
analysis are detailed first, and then experimental results are presented to
support the proposed theory. The main contributions are threefold: (1) We
provide a theoretical foundation for the split covariance intersection filter
performed on the manifold, where the uncertainty is associated with the Lie
algebra. (2) The proposed method gives an explicit fusion formalism on SE(3)
and SE(2), which covers most use cases in the field of robotics. (3) We
present a localization framework that works for both single-robot and
multi-robot systems, where not only is the fusion with possible correlations
derived on the manifold, but the state evolution and relative pose computation
are also performed on the manifold. Experimental results validate its
advantage over state-of-the-art methods.
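The split covariance intersection rule underlying this work can be sketched in vector space: each covariance is split into an independent part and a dependent part, the dependent parts are inflated by a weight omega in (0, 1), and the two inflated estimates are fused as if independent. This Euclidean sketch deliberately omits the on-manifold treatment that is the paper's actual contribution, and all names are illustrative:

```python
import numpy as np

def split_ci(x1, P1i, P1d, x2, P2i, P2d, omega):
    """Split covariance intersection fusion of two correlated estimates.
    Pi* = independent covariance part, Pd* = dependent (correlated) part;
    inflating the dependent parts by omega keeps the result consistent
    for any unknown cross-correlation."""
    P1 = P1d / omega + P1i
    P2 = P2d / (1.0 - omega) + P2i
    K = P1 @ np.linalg.inv(P1 + P2)   # Kalman-style gain on inflated covariances
    x = x1 + K @ (x2 - x1)
    P = P1 - K @ P1                   # fused total covariance
    return x, P
```

With zero dependent parts the rule reduces to the standard independent fusion; in practice omega is chosen to minimize, e.g., the trace of the fused covariance.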
Investigations of a compartmental model for leucine kinetics using nonlinear mixed effects models with ordinary and stochastic differential equations
Nonlinear mixed effects models represent a powerful tool to simultaneously
analyze data from several individuals. In this study a compartmental model of
leucine kinetics is examined and extended with a stochastic differential
equation to model non-steady state concentrations of free leucine in the
plasma. Data obtained from tracer/tracee experiments for a group of healthy
control individuals and a group of individuals suffering from diabetes mellitus
type 2 are analyzed. We find that the interindividual variation of the model
parameters is much smaller for the nonlinear mixed effects models, compared to
traditional estimates obtained from each individual separately. Using the mixed
effects approach, the population parameters are estimated well also when only
half of the data are used for each individual. For a typical individual the
amount of free leucine is predicted to vary with a standard deviation of 8.9%
around a mean value during the experiment. Moreover, leucine degradation and
protein uptake of leucine are smaller, proteolysis is larger, and the amount of
free leucine in the body is much larger for the diabetic individuals than for
the control individuals. In conclusion, nonlinear mixed effects models offer
improved estimates for model parameters in complex models based on
tracer/tracee data and may be a suitable tool to reduce data sampling in
clinical studies.
Lie Group Modelling for an EKF-Based Monocular SLAM Algorithm
This paper addresses the problem of monocular Simultaneous Localization And Mapping (SLAM) on Lie groups using fiducial patterns. For that purpose, we propose a reformulation of the classical camera model as a model on matrix Lie groups. We define a state vector containing the camera pose and the set of transformations from the world frame to each pattern, which constitutes the map's state. Each element of the map's state, as well as the camera pose, is intrinsically constrained to evolve on the matrix Lie group SE(3). Filtering is then performed by an extended Kalman filter dedicated to matrix Lie groups to solve the visual SLAM process (LG-EKF-VSLAM). This algorithm has been evaluated in different scenarios based on simulated as well as real data. The results show that the LG-EKF-VSLAM can improve the absolute position and orientation accuracy compared to a classical EKF visual SLAM (EKF-VSLAM).
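Constraining poses to evolve on SE(3), as the LG-EKF does, relies on the exponential map from the Lie algebra: a 6-vector increment (translation part rho, rotation part phi) is mapped to a valid rigid-body transform. A closed-form numpy sketch under standard conventions (illustrative, not the authors' code):

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_exp(xi):
    """Closed-form exponential map from se(3) (xi = [rho, phi]) to SE(3)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    K = skew(phi)
    if theta < 1e-10:
        R, V = np.eye(3) + K, np.eye(3)   # small-angle limit
    else:
        R = (np.eye(3) + np.sin(theta) / theta * K
             + (1 - np.cos(theta)) / theta**2 * K @ K)
        # left Jacobian V maps the algebra translation to the group translation
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T
```

A filter update then becomes a retraction T <- T @ se3_exp(delta), so the estimate never leaves the manifold.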
Robust state estimation methods for robotics applications
State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to do other tasks like mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by using data obtained from multiple sensors and fusing them in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy or erroneous data, and even a lack of informative data. Common examples of such situations include wrong feature matching between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), multipath errors for GNSS (signals from satellites bouncing off tall structures like buildings before reaching the receiver), and more. This work studies existing and new ways in which standard estimation algorithms like the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of nonlinear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
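Robust cost functions of the kind studied here down-weight large residuals so outliers cannot dominate the estimate. A minimal sketch of Huber-weighted iteratively reweighted least squares for a scalar location, with illustrative names and the conventional tuning constant; the actual thesis applies such costs inside Kalman filters and factor graphs:

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """IRLS weights for the Huber cost: weight 1 for small residuals,
    delta/|r| (down-weighted) beyond the threshold delta."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))

def robust_mean(z, delta=1.345, iters=20):
    """Robust location estimate via iteratively reweighted least squares,
    initialized at the median for a good starting point."""
    mu = np.median(z)
    for _ in range(iters):
        w = huber_weights(z - mu, delta)
        mu = np.sum(w * z) / np.sum(w)
    return mu
```

On data with a gross outlier the robust estimate stays near the inlier cluster, while the ordinary mean is pulled far away.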
Reliable GNSS Joint Position and Attitude Estimation in Harsh Environments through Robust Statistics
Next-generation navigation systems require precise and robust solutions that provide information about both the system's position and its attitude, which is of particular interest in intelligent transportation systems and robotics applications. Within this context, Global Navigation Satellite Systems (GNSS) are the main source of positioning data and, in multiple-antenna setups, can also provide attitude information. Notice that the use of phase observables is mandatory to obtain a precise solution. In this contribution, we leverage the recently introduced recursive GNSS joint position and attitude (JPA) estimation framework, which has been shown to provide good performance under nominal conditions. The main goal is to further elaborate the JPA problem and to propose a new robust filtering solution able to mitigate the impact of possible outliers, which may otherwise cause a performance breakdown of standard JPA because of the sensitivity of carrier phase measurements. Illustrative results are provided to support the discussion and show the performance improvement of the proposed approach.