
    State Estimation for a Humanoid Robot

    This paper introduces a framework for state estimation on a humanoid robot platform using only common proprioceptive sensors and knowledge of leg kinematics. The presented approach extends that detailed in [1] on a quadruped platform by incorporating the rotational constraints imposed by the humanoid's flat feet. As in previous work, the proposed Extended Kalman Filter (EKF) accommodates contact switching and makes no assumptions about gait or terrain, making it applicable on any humanoid platform for use in any task. The filter employs a sensor-based prediction model, which uses inertial data from an IMU, and corrects for integrated error using a kinematics-based measurement model, which relies on joint encoders and a kinematic model to determine the relative position and orientation of the feet. A nonlinear observability analysis is performed on both the original and updated filters, and it is concluded that the new filter significantly simplifies singular cases and improves the observability characteristics of the system. Results on simulated walking and squatting datasets demonstrate the performance gain of the flat-foot filter as well as confirm the results of the presented observability analysis.
    Comment: IROS 2014 Submission, IEEE/RSJ International Conference on Intelligent Robots and Systems (2014) 952-95
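
    To make the filter structure concrete, here is a minimal sketch of such a sensor-based-prediction / kinematics-correction EKF, reduced to base position and velocity with known orientation and a single stationary stance foot. The state, noise levels, and fixed-contact assumption are illustrative simplifications, not the paper's full filter (which also estimates base orientation and foot pose):

```python
# Minimal sketch, assuming a simplified state: base position p and
# velocity v in the world frame, known orientation (so the accelerometer
# reading is already rotated into the world frame), and one stationary
# stance foot. Noise levels are placeholders.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

class LeggedEKF:
    def __init__(self, p0, v0, p_foot):
        self.x = np.concatenate([p0, v0])   # state: [p (3), v (3)]
        self.P = np.eye(6) * 0.01           # state covariance
        self.p_foot = p_foot                # assumed-fixed contact point

    def predict(self, acc_world, dt, q_acc=1e-3):
        # Sensor-based prediction: integrate the world-frame specific
        # force from the IMU instead of using a dynamics model.
        a = acc_world + GRAVITY
        self.x[0:3] += dt * self.x[3:6] + 0.5 * dt**2 * a
        self.x[3:6] += dt * a
        F = np.eye(6)
        F[0:3, 3:6] = dt * np.eye(3)        # Jacobian of the motion model
        Q = q_acc * np.diag([dt**3] * 3 + [dt] * 3)
        self.P = F @ self.P @ F.T + Q

    def correct_kinematics(self, foot_rel_meas, r_kin=1e-4):
        # Kinematics-based measurement: joint encoders plus the leg's
        # kinematic model give the foot position relative to the base,
        # z = p_foot - p, which corrects the integrated drift.
        H = np.zeros((3, 6))
        H[:, 0:3] = -np.eye(3)
        y = foot_rel_meas - (self.p_foot - self.x[0:3])  # innovation
        S = H @ self.P @ H.T + r_kin * np.eye(3)
        K = self.P @ H.T @ np.linalg.inv(S)              # Kalman gain
        self.x += K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

# Example: a stationary robot (accelerometer reads +9.81 m/s^2 upward).
ekf = LeggedEKF(np.zeros(3), np.zeros(3), p_foot=np.array([0.1, 0.0, 0.0]))
ekf.predict(acc_world=np.array([0.0, 0.0, 9.81]), dt=0.01)
ekf.correct_kinematics(foot_rel_meas=np.array([0.1, 0.0, 0.0]))
```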

    Observability, Identifiability and Sensitivity of Vision-Aided Navigation

    We analyze the observability of motion estimates from the fusion of visual and inertial sensors. Because the model contains unknown parameters, such as sensor biases, the problem is usually cast as a mixed identification/filtering problem, and the resulting observability analysis provides a necessary condition for any algorithm to converge to a unique point estimate. Unfortunately, most models treat sensor bias rates as noise, independent of other states including the biases themselves, an assumption that is patently violated in practice. When this assumption is lifted, the resulting model is not observable, and therefore past analyses cannot be used to conclude that the set of states indistinguishable from the measurements is a singleton. We therefore re-cast the analysis as one of sensitivity: rather than attempting to prove that the indistinguishable set is a singleton, which is not the case, we derive bounds on its volume as a function of the characteristics of the input and its sufficient excitation. This provides an explicit characterization of the indistinguishable set that can be used for analysis and validation purposes.
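
    The sensitivity view lends itself to a simple numerical illustration: build an empirical observability Gramian by differentiating a simulated output sequence with respect to the initial state; its smallest singular value collapses when the input lacks sufficient excitation, i.e., the indistinguishable set acquires volume along weakly excited directions. The 1-D scale-plus-bias model below is a stand-in chosen for brevity, not the paper's vision-aided system:

```python
# Empirical observability Gramian for a toy vision+inertial analogue:
# 1-D position p is driven by a known input u minus an unknown constant
# bias b, and observed through a vision-like measurement y = p / s with
# unknown scale s. Model and inputs are illustrative assumptions.
import numpy as np

def simulate(x0, us, dt=0.01):
    p, v, b, s = x0
    ys = []
    for u in us:
        ys.append(p / s)        # scaled position measurement
        p += dt * v             # kinematics
        v += dt * (u - b)       # biased acceleration
    return np.array(ys)

def empirical_gramian(x0, us, eps=1e-6):
    # G = J^T J, where J is the sensitivity of the output sequence to
    # the initial state, estimated by central differences.
    cols = []
    for i in range(len(x0)):
        d = np.zeros(len(x0)); d[i] = eps
        cols.append((simulate(x0 + d, us) - simulate(x0 - d, us)) / (2 * eps))
    J = np.stack(cols, axis=1)
    return J.T @ J

x0 = np.array([1.0, 0.0, 0.05, 2.0])    # [p, v, bias, scale]
t = np.linspace(0.0, 5.0, 500)
for label, us in [("constant input ", np.ones(500)),
                  ("exciting input ", np.sin(2.0 * t))]:
    sv = np.linalg.svd(empirical_gramian(x0, us), compute_uv=False)
    # A (near-)zero smallest singular value means some state direction
    # is indistinguishable from the measurements; with constant input a
    # joint scale/bias perturbation is unobservable in this toy model.
    print(label, "smallest singular value:", sv[-1])
```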

    Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere

    Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static scene by embedded brightness and depth sensors. We propose here a Lyapunov-based observer that takes advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting asymptotic observer is governed by a nonlinear integro/partial-differential system in which the two independent scalar variables indexing the pixels live on the unit sphere of 3D Euclidean space. The observer design and analysis are strongly simplified by coordinate-free differential calculus on the unit sphere equipped with its natural Riemannian structure. The observer's convergence is investigated under C^1 regularity assumptions on the object's motion and the scene. It relies on the Ascoli-Arzelà theorem and pre-compactness of the observer trajectories. It is proved that the estimated biases converge towards the true ones if and only if the scene admits no cylindrical symmetry. The observer design can be adapted to realistic sensors where brightness and depth data are available only on a subset of the unit sphere. Preliminary simulations with synthetic brightness and depth images (corrupted by around 10% noise) indicate that such Lyapunov-based observers should be robust and convergent under much weaker regularity assumptions.
    Comment: 30 pages, 6 figures, submitted
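
    The observer above is an integro/partial-differential system on the sphere and is not reproduced here. As a drastically simplified analogue of the same idea (Lyapunov-based bias estimation from measurements that live on the unit sphere), the sketch below implements the well-known explicit complementary filter of Mahony et al., which recovers a constant gyroscope bias from known reference directions. Note the parallel convergence condition: two non-collinear directions are required, much as the scene above must admit no cylindrical symmetry. Gains and signals are illustrative:

```python
# Explicit complementary filter (Mahony et al.): a Lyapunov-designed
# observer that recovers a constant gyro bias from direction (unit-
# sphere) measurements. An illustrative analogue, not the paper's
# brightness/depth observer.
import numpy as np

def hat(w):
    # Skew-symmetric matrix such that hat(w) @ x == np.cross(w, x).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    # SO(3) exponential map via Rodrigues' formula.
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

dt, kP, kI = 0.002, 2.0, 1.0
b_true = np.array([0.02, -0.01, 0.03])          # constant gyro bias
refs = [np.array([0.0, 0.0, 1.0]),              # two non-collinear
        np.array([1.0, 0.0, 0.0])]              # reference directions
R, R_hat, b_hat = np.eye(3), np.eye(3), np.zeros(3)

for k in range(20000):
    t = k * dt
    w = np.array([np.sin(0.5 * t), np.cos(0.3 * t), 0.2])  # true body rate
    w_meas = w + b_true                                    # biased gyro
    # Innovation from the direction measurements v_i = R^T r_i:
    e = sum(np.cross(R.T @ r, R_hat.T @ r) for r in refs)
    R_hat = R_hat @ expm_so3(dt * (w_meas - b_hat + kP * e))
    b_hat += dt * (-kI * e)                                # bias update
    R = R @ expm_so3(dt * w)                               # true attitude

print("bias estimation error:", np.linalg.norm(b_hat - b_true))
```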

    On-Manifold Preintegration for Real-Time Visual-Inertial Odometry

    Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements arrive at high rate, leading to fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise, and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3D points and further accelerates the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modelling effort leads to accurate state estimation in real time, outperforming state-of-the-art approaches.
    Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions on Robotics (TRO) 201
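
    The core preintegrated quantities are compact enough to sketch. The snippet below accumulates the relative rotation, velocity, and position increments on SO(3) for a fixed bias estimate; it is a minimal sketch that omits the noise propagation and analytic bias-correction Jacobians which are central contributions of the paper:

```python
# Minimal IMU preintegration sketch: accumulate relative-motion
# increments between two keyframes, independent of the (unknown)
# global state at the first keyframe. Bias estimates are held fixed.
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    # SO(3) exponential map via Rodrigues' formula.
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

class Preintegrator:
    def __init__(self, bg=np.zeros(3), ba=np.zeros(3)):
        self.bg, self.ba = bg, ba       # fixed gyro/accel bias estimates
        self.dR = np.eye(3)             # relative rotation increment
        self.dv = np.zeros(3)           # velocity increment
        self.dp = np.zeros(3)           # position increment
        self.dt = 0.0                   # total integration time

    def add(self, gyro, acc, dt):
        # Order matters: dp and dv use the increments *before* this
        # measurement, matching the discrete-time recursion.
        a = acc - self.ba
        self.dp += self.dv * dt + 0.5 * (self.dR @ a) * dt**2
        self.dv += (self.dR @ a) * dt
        self.dR = self.dR @ expm_so3((gyro - self.bg) * dt)
        self.dt += dt

# Between keyframes i and j the increments act as one relative-motion
# constraint:  R_j ~ R_i @ dR,
#              v_j ~ v_i + g*dt + R_i @ dv,
#              p_j ~ p_i + v_i*dt + 0.5*g*dt**2 + R_i @ dp.
pre = Preintegrator()
for _ in range(200):                    # e.g. 1 s of synthetic 200 Hz data
    pre.add(gyro=np.array([0.0, 0.0, 0.1]),
            acc=np.array([0.0, 0.0, 9.81]), dt=0.005)
print(pre.dt, pre.dR.round(3), pre.dv.round(3), sep="\n")
```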