Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation
This paper derives a contact-aided inertial navigation observer for a 3D
bipedal robot using the theory of invariant observer design. Aided inertial
navigation is fundamentally a nonlinear observer design problem; thus, current
solutions are based on approximations of the system dynamics, such as an
Extended Kalman Filter (EKF), which uses a system's Jacobian linearization
along the current best estimate of its trajectory. On the basis of the theory
of invariant observer design by Barrau and Bonnabel, and in particular, the
Invariant EKF (InEKF), we show that the error dynamics of the point
contact-inertial system follows a log-linear autonomous differential equation;
hence, the observable state variables can be rendered convergent with a domain
of attraction that is independent of the system's trajectory. Due to the
log-linear form of the error dynamics, it is not necessary to perform a
nonlinear observability analysis to show that when using an Inertial
Measurement Unit (IMU) and contact sensors, the absolute position of the robot
and a rotation about the gravity vector (yaw) are unobservable. We further
augment the state of the developed InEKF with IMU biases, as the online
estimation of these parameters has a crucial impact on system performance. We
compare the convergence of the proposed system against that of the commonly used
quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our
experimental evaluation using a Cassie-series bipedal robot shows that the
contact-aided InEKF provides better performance in comparison with the
quaternion-based EKF as a result of exploiting symmetries present in the system
dynamics.
Comment: Published in the proceedings of Robotics: Science and Systems 201
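The trajectory-independent, log-linear error dynamics the abstract describes can be illustrated with a minimal sketch (our own notation and toy values, not the paper's code): for a right-invariant attitude error eta = R_hat R^T on SO(3), propagating both the true and the estimated attitude with the same body angular velocity leaves the error unchanged.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (the so(3) 'hat' map)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Propagate a true and an estimated attitude with the same body rate:
# R_{k+1} = R_k Exp(omega * dt). The right-invariant error R_hat R^T is
# unaffected, which is the trajectory-independence the InEKF exploits.
dt, omega = 0.01, np.array([0.1, -0.2, 0.3])
R_true = exp_so3(np.array([0.40, 0.10, -0.30]))
R_hat = exp_so3(np.array([0.38, 0.12, -0.28]))
err_before = R_hat @ R_true.T
for _ in range(100):
    step = exp_so3(omega * dt)
    R_true = R_true @ step
    R_hat = R_hat @ step
err_after = R_hat @ R_true.T
print(np.allclose(err_before, err_after))  # True: the error did not change
```

In a conventional EKF the error dynamics depend on the linearization point along the estimated trajectory; here they do not, which is what yields the trajectory-independent domain of attraction.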
Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
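The marginalization step mentioned above can be sketched as a Schur complement on the information form of the problem (a toy linear example with assumed names, not the authors' implementation): eliminating old keyframe states leaves the solution for the retained states unchanged.

```python
import numpy as np

def marginalize(H, b, keep, drop):
    """Marginalize the 'drop' block out of the information system H x = b
    via the Schur complement, returning the reduced system for 'keep'."""
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdd = H[np.ix_(drop, drop)]
    Hdd_inv = np.linalg.inv(Hdd)
    H_marg = Hkk - Hkd @ Hdd_inv @ Hkd.T
    b_marg = b[keep] - Hkd @ Hdd_inv @ b[drop]
    return H_marg, b_marg

# Toy 3-state chain: marginalizing x0 must leave the same answer for x1, x2.
H = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
x_full = np.linalg.solve(H, b)
H_m, b_m = marginalize(H, b, keep=[1, 2], drop=[0])
x_marg = np.linalg.solve(H_m, b_m)
print(np.allclose(x_full[1:], x_marg))  # True: marginals are preserved
```

In the sliding-window setting, the reduced system acts as a prior on the remaining keyframes, which is how the window stays bounded without discarding past information.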
On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However, real-time
optimization quickly becomes infeasible as the trajectory grows over time; this
problem is further emphasized by the fact that inertial measurements come at
high rate, hence leading to fast growth of the number of variables in the
optimization. In this paper, we address this issue by preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a preintegration theory that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the maximum a posteriori state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a structureless model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
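The preintegration idea can be sketched in a few lines (our simplified, noise-free and gravity-free version with assumed names, not the authors' implementation): gyroscope and accelerometer samples between two keyframes are folded into a single relative rotation, velocity, and position, with the rotation handled on the SO(3) manifold.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (the so(3) 'hat' map)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, bg, ba):
    """Fold IMU samples into a relative motion (dR, dv, dp) expressed in
    the frame of the first keyframe; biases bg, ba are subtracted, and
    gravity compensation is omitted for brevity."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_c = a - ba
        dp = dp + dv * dt + 0.5 * (dR @ a_c) * dt**2
        dv = dv + (dR @ a_c) * dt
        dR = dR @ exp_so3((w - bg) * dt)
    return dR, dv, dp

# Constant 1 m/s^2 forward acceleration, no rotation, for 1 s at 100 Hz:
n, dt = 100, 0.01
gyro = [np.zeros(3)] * n
accel = [np.array([1.0, 0.0, 0.0])] * n
dR, dv, dp = preintegrate(gyro, accel, dt, np.zeros(3), np.zeros(3))
print(np.allclose(dv, [1.0, 0.0, 0.0]))  # True: dv = a * T
print(np.allclose(dp, [0.5, 0.0, 0.0]))  # True: dp = a * T^2 / 2
```

The point of the paper's theory is that this compound measurement, plus its Jacobians with respect to the biases, can be computed once per keyframe interval instead of keeping every IMU sample in the optimization.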
A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN)
is presented. Visual and inertial perception are two complementary sensing
modalities. Cameras and inertial measurement units (IMUs) are the corresponding
sensors for these two modalities. The low cost and light weight of camera-IMU
sensor combinations make them ubiquitous in robotic navigation. Visual-inertial
navigation is a state estimation problem that estimates the ego-motion and
local environment of the sensor platform. This paper presents visual-inertial
navigation in the classical state estimation framework. It first illustrates the
estimation problem in terms of state variables and system models, including the
parameterizations of the relevant quantities, the IMU dynamics and camera
measurement models, and the corresponding probabilistic graphical models
(factor graphs). Secondly, we investigate the existing model-based estimation
methodologies, which involve filter-based and optimization-based frameworks and
the related on-manifold operations. We also discuss the calibration of the
relevant parameters and the initialization of the states of interest in
optimization-based frameworks. Then the evaluation and improvement of VIN in
terms of accuracy, efficiency, and robustness are discussed. Finally, we
briefly mention recent developments in learning-based methods that may
become alternatives to traditional model-based methods.
Comment: 35 pages, 10 figures
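The camera measurement model that such tutorials start from is, in its simplest form, pinhole projection; a minimal sketch (illustrative intrinsics and names, not taken from the article):

```python
import numpy as np

def project(K, R_cw, t_cw, p_w):
    """Project a 3D world point into pixel coordinates with a pinhole
    camera: transform into the camera frame, divide by depth, apply K."""
    p_c = R_cw @ p_w + t_cw      # world -> camera frame
    uv = K @ (p_c / p_c[2])      # perspective division, then intrinsics
    return uv[:2]

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
p_w = np.array([0.0, 0.0, 2.0])  # a point 2 m ahead on the optical axis
uv = project(K, np.eye(3), np.zeros(3), p_w)
print(uv)  # [320. 240.]: the point lands at the principal point
```

The reprojection error, i.e. the difference between this prediction and the detected feature location, is the visual residual that both filter-based and optimization-based VIN frameworks minimize.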
PIEKF-VIWO: Visual-Inertial-Wheel Odometry using Partial Invariant Extended Kalman Filter
The Invariant Extended Kalman Filter (IEKF) has been successfully applied to
visual-inertial odometry (VIO) as an advanced variant of the Kalman filter,
showing great potential in sensor fusion. In this paper, we propose the partial
IEKF (PIEKF), which incorporates only the rotation-velocity state into the Lie
group structure, and apply it to Visual-Inertial-Wheel Odometry (VIWO) to
improve positioning accuracy and consistency. Specifically, we derive a
rotation-velocity measurement model that combines wheel measurements with
kinematic constraints. The model circumvents the wheel odometer's 3D
integration and covariance propagation, which is essential for filter
consistency. A plane constraint is also introduced to enhance the position
accuracy, and a dynamic outlier detection method is adopted that leverages the
velocity state output. Through simulations and real-world tests, we validate
the effectiveness of our approach, which outperforms the standard Multi-State
Constraint Kalman Filter (MSCKF) based VIWO in consistency and accuracy.
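The rotation-velocity measurement idea can be sketched as follows (our notation and assumptions, not the paper's exact model): the wheel speed enters as a body-frame velocity pseudo-measurement under the nonholonomic kinematic assumption that lateral and vertical body velocity are zero, avoiding any integration of the wheel odometry.

```python
import numpy as np

def wheel_velocity_residual(R_wb, v_w, wheel_speed):
    """Residual between the body-frame velocity predicted by the filter
    state (R^T v_w) and the wheel measurement [v_wheel, 0, 0] under the
    nonholonomic (forward-only) motion assumption."""
    v_body_pred = R_wb.T @ v_w
    v_body_meas = np.array([wheel_speed, 0.0, 0.0])
    return v_body_pred - v_body_meas

# A robot driving straight along world x at 1.2 m/s with identity attitude:
r = wheel_velocity_residual(np.eye(3), np.array([1.2, 0.0, 0.0]), 1.2)
print(np.allclose(r, 0))  # True: prediction matches the wheel measurement
```

Because the residual is a function of the instantaneous velocity state rather than integrated wheel odometry, no separate covariance propagation for the odometer is needed, which is the consistency benefit the abstract highlights.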