
    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
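    The probabilistic cost function described above is a weighted nonlinear least-squares objective: each residual (a landmark reprojection error or a linearized inertial term) contributes its squared Mahalanobis norm, weighted by its information matrix. A minimal sketch of that combination, with hypothetical names and residuals supplied by the caller:

    ```python
    import numpy as np

    def vio_cost(reproj_residuals, reproj_infos, imu_residuals, imu_infos):
        """Combined visual-inertial cost (sketch, not the paper's implementation).

        Each residual e is weighted by its information matrix W (inverse
        covariance), contributing the Mahalanobis term e^T W e. The optimizer
        would minimize this sum over the states in the keyframe window.
        """
        cost = 0.0
        for e, W in zip(reproj_residuals, reproj_infos):
            cost += float(e.T @ W @ e)   # visual (reprojection) terms
        for e, W in zip(imu_residuals, imu_infos):
            cost += float(e.T @ W @ e)   # inertial terms
        return cost
    ```

    In the actual system this sum would be minimized with a sparse Gauss–Newton or Levenberg–Marquardt solver, with marginalization keeping the window bounded.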

    Convergence and Consistency Analysis for A 3D Invariant-EKF SLAM

    In this paper, we investigate the convergence and consistency properties of a Right-Invariant Extended Kalman Filter (RI-EKF) based Simultaneous Localization and Mapping (SLAM) algorithm. Basic convergence properties of this algorithm are proven. These proofs do not require the restrictive assumption that the Jacobians of the motion and observation models be evaluated at the ground truth. It is also shown that the output of RI-EKF is invariant under any stochastic rigid body transformation, in contrast to the SO(3)-based EKF SLAM algorithm (SO(3)-EKF), which is only invariant under deterministic rigid body transformation. Implications of these invariance properties on the consistency of the estimator are also discussed. Monte Carlo simulation results demonstrate that RI-EKF outperforms SO(3)-EKF, Robocentric-EKF and the "First Estimates Jacobian" EKF for 3D point-feature-based SLAM.
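    The invariance property at the heart of this line of work can be checked numerically. For a right-invariant error defined as eta = X_est * X_true^{-1} on SE(3), applying the same rigid body transformation G on the right of both the estimate and the ground truth leaves the error unchanged. A minimal sketch (all names hypothetical, homogeneous 4x4 matrices):

    ```python
    import numpy as np

    def se3(R, p):
        """Build a 4x4 homogeneous transform from rotation R and translation p."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        return T

    def rot_z(th):
        c, s = np.cos(th), np.sin(th)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def ri_error(X_est, X_true):
        """Right-invariant error between estimate and ground truth."""
        return X_est @ np.linalg.inv(X_true)

    X_true = se3(rot_z(0.30), np.array([1.0, 2.0, 3.0]))
    X_est = se3(rot_z(0.35), np.array([1.1, 2.0, 2.9]))
    G = se3(rot_z(1.0), np.array([5.0, -1.0, 0.0]))  # arbitrary rigid transform

    e0 = ri_error(X_est, X_true)
    e1 = ri_error(X_est @ G, X_true @ G)  # transform both states by G
    assert np.allclose(e0, e1)  # error is unaffected by the transformation
    ```

    A filter whose error dynamics depend on the state only through such an invariant error inherits this insensitivity to the choice of global frame, which is what the consistency analysis exploits.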

    State Estimation for a Humanoid Robot

    This paper introduces a framework for state estimation on a humanoid robot platform using only common proprioceptive sensors and knowledge of leg kinematics. The presented approach extends that detailed in [1] on a quadruped platform by incorporating the rotational constraints imposed by the humanoid's flat feet. As in previous work, the proposed Extended Kalman Filter (EKF) accommodates contact switching and makes no assumptions about gait or terrain, making it applicable on any humanoid platform for use in any task. The filter employs a sensor-based prediction model which uses inertial data from an IMU and corrects for integrated error using a kinematics-based measurement model which relies on joint encoders and a kinematic model to determine the relative position and orientation of the feet. A nonlinear observability analysis is performed on both the original and updated filters and it is concluded that the new filter significantly simplifies singular cases and improves the observability characteristics of the system. Results on simulated walking and squatting datasets demonstrate the performance gain of the flat-foot filter as well as confirm the results of the presented observability analysis.
    Comment: IROS 2014 Submission, IEEE/RSJ International Conference on Intelligent Robots and Systems (2014) 952-95
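    The structure the abstract describes, an IMU-driven prediction step corrected by a kinematics-based measurement, is the standard EKF predict/update cycle. A generic sketch of that cycle (function names and models are hypothetical placeholders, not the paper's filter):

    ```python
    import numpy as np

    def ekf_predict(x, P, f, F, Q):
        """Prediction: propagate the state with the process model f
        (here, IMU-driven strapdown integration) and grow the covariance."""
        x_pred = f(x)
        P_pred = F @ P @ F.T + Q
        return x_pred, P_pred

    def ekf_update(x, P, z, h, H, R):
        """Correction: fuse a measurement z (here, foot position/orientation
        from joint encoders and the kinematic model) via the Kalman gain."""
        y = z - h(x)                       # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```

    The flat-foot extension enters through the measurement model h: observing the feet's full orientation, not just their position, adds rotational constraints that shrink the unobservable subspace analyzed in the paper.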

    Navigational Drift Analysis for Visual Odometry

    Visual odometry estimates a robot's ego-motion with cameras installed on the robot itself. With the advantages the camera brings as a sensor, visual odometry has been widely adopted in robotics and navigation. Drift (error accumulation) from relative-motion concatenation is an intrinsic problem of visual odometry in long-range navigation, since visual odometry relies on relative measurements. General error analysis using the "mean" and "covariance" of positional error in each axis is not fully capable of describing the behavior of drift. Moreover, no theoretical drift analysis has been available for performance evaluation and algorithm comparison. This paper establishes the drift distribution as a function of the covariance matrix from a positional-error propagation model. To validate the drift model, an experiment with a specific setting is conducted.
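    The core mechanism, error growth from concatenating independent relative measurements, can be illustrated with a deliberately simplified pure-translation model (an assumption of this sketch, not the paper's full propagation model): if each step's positional error is independent with covariance Σ, the accumulated error after n steps has covariance nΣ.

    ```python
    import numpy as np

    def propagate_drift(step_cov, n_steps):
        """Covariance of accumulated position error after concatenating
        n independent relative-motion estimates (pure-translation sketch;
        rotational error coupling is ignored for simplicity)."""
        return n_steps * step_cov

    # Monte Carlo check: 0.01 m noise per axis per step, 100 steps
    rng = np.random.default_rng(0)
    endpoints = rng.normal(0.0, 0.01, size=(10000, 100, 3)).sum(axis=1)
    emp_cov = np.cov(endpoints.T)                      # empirical drift covariance
    pred = propagate_drift(np.eye(3) * 0.01**2, 100)   # predicted: 100 * sigma^2 * I
    ```

    In a full visual odometry error model, each step's pose error is also rotated and coupled through the previous orientation error, so the propagation involves Jacobians of the concatenation rather than a plain sum; the paper's drift distribution is built on that covariance propagation.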