State Estimation for a Humanoid Robot
This paper introduces a framework for state estimation on a humanoid robot
platform using only common proprioceptive sensors and knowledge of leg
kinematics. The presented approach extends that detailed in [1] on a quadruped
platform by incorporating the rotational constraints imposed by the humanoid's
flat feet. As in previous work, the proposed Extended Kalman Filter (EKF)
accommodates contact switching and makes no assumptions about gait or terrain,
making it applicable on any humanoid platform for use in any task. The filter
employs a sensor-based prediction model which uses inertial data from an IMU
and corrects for integrated error using a kinematics-based measurement model
which relies on joint encoders and a kinematic model to determine the relative
position and orientation of the feet. A nonlinear observability analysis is
performed on both the original and updated filters, and it is concluded that the
new filter significantly simplifies singular cases and improves the
observability characteristics of the system. Results on simulated walking and
squatting datasets demonstrate the performance gain of the flat-foot filter as
well as confirm the results of the presented observability analysis.
Comment: IROS 2014 submission, IEEE/RSJ International Conference on
Intelligent Robots and Systems (2014) 952-95
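The predict/correct structure described in this abstract can be sketched in a much-reduced form. This is a minimal illustration, not the paper's filter: the state here is only base position and velocity (orientation, IMU biases, and foot poses are omitted), and the noise values, the pinned stance-foot location, and the function names are all assumptions.

```python
import numpy as np

DT = 0.005  # illustrative IMU sample period

def predict(x, P, accel, q_accel=1e-3):
    # Sensor-based prediction: integrate the IMU acceleration for a state
    # x = [base position (3), base velocity (3)].
    F = np.eye(6)
    F[:3, 3:] = np.eye(3) * DT
    x_new = F @ x
    x_new[3:] += accel * DT
    Q = np.eye(6) * q_accel * DT
    return x_new, F @ P @ F.T + Q

def correct(x, P, foot_pos_meas, r_meas=1e-4):
    # Kinematics-based measurement: joint encoders and the leg kinematic
    # model give the stance-foot position relative to the base,
    # h(x) = p_foot - p_base. Here the stance foot is assumed pinned at
    # the origin, so the predicted measurement is -x[:3].
    H = np.zeros((3, 6))
    H[:, :3] = -np.eye(3)
    innovation = foot_pos_meas - (-x[:3])
    S = H @ P @ H.T + np.eye(3) * r_meas
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ innovation, (np.eye(6) - K @ H) @ P
```

In the full filter, contact switching changes which foot frames enter the correction, and the flat-foot extension adds an orientation constraint per stance foot.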
Motion Planning for Optimal Information Gathering in Opportunistic Navigation Systems
Motion planning for optimal information gathering in an opportunistic navigation (OpNav)
environment is considered. An OpNav environment can be thought of as a radio
frequency signal landscape within which a receiver locates itself in space and time by extracting
information from ambient signals of opportunity (SOPs). The receiver is assumed
to draw only pseudorange-type observations from the SOPs, and such observations are
fused through an estimator to produce an estimate of the receiver’s own states. Since
not all SOP states in the OpNav environment may be known a priori, the receiver must
estimate the unknown SOP states of interest simultaneously with its own states. In this
work, the following problem is studied. A receiver with no a priori knowledge about its
own states is dropped in an unknown, yet observable, OpNav environment. Assuming that
the receiver can prescribe its own trajectory, what motion planning strategy should the
receiver adopt in order to build a high-fidelity map of the OpNav signal landscape, while
simultaneously localizing itself within this map in space and time? To answer this question,
first, the minimum conditions under which the OpNav environment is fully observable are
established, and the need for receiver maneuvering to achieve full observability is highlighted.
Then, motivated by the fact that not all trajectories a receiver may take in the
environment are equally beneficial from an information gathering point of view, a strategy
for planning the motion of the receiver is proposed. The strategy is formulated in a
coupled estimation and optimal control framework of a gradually identified system, where
optimality is defined through various information-theoretic measures. Simulation results
are presented to illustrate the improvements gained from adopting the proposed strategy
over random and pre-defined receiver trajectories.
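The information-theoretic objective can be illustrated with a stripped-down sketch: a planar receiver, known SOP positions, pseudorange measurements whose position information is a rank-one outer product along the line of sight, and one-step greedy waypoint selection by D-optimality (log-determinant of the Fisher information). All of this, including the function names and noise model, is an illustrative assumption; the paper plans full trajectories in a coupled estimation and optimal control framework with unknown SOP states.

```python
import numpy as np

def fim(receiver_pos, sop_positions, sigma=1.0):
    # Fisher information of the 2D receiver position from pseudorange-type
    # measurements: each SOP contributes a rank-one term along the
    # line-of-sight direction.
    J = np.zeros((2, 2))
    for sop in sop_positions:
        d = receiver_pos - sop
        u = d / np.linalg.norm(d)
        J += np.outer(u, u) / sigma**2
    return J

def greedy_waypoint(sop_positions, candidates, J_prior):
    # One-step-ahead D-optimal planning: choose the candidate waypoint
    # whose measurements maximize log det of the accumulated information.
    def score(p):
        return np.linalg.slogdet(J_prior + fim(p, sop_positions))[1]
    return max(candidates, key=score)
```

With two SOPs, a waypoint nearly collinear with them yields almost-parallel lines of sight and a near-singular FIM, while an off-axis waypoint gives well-conditioned geometry; this is the geometric intuition behind maneuvering for observability.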
Observability, Identifiability and Sensitivity of Vision-Aided Navigation
We analyze the observability of motion estimates from the fusion of visual
and inertial sensors. Because the model contains unknown parameters, such as
sensor biases, the problem is usually cast as a mixed identification/filtering,
and the resulting observability analysis provides a necessary condition for any
algorithm to converge to a unique point estimate. Unfortunately, most models
treat sensor bias rates as noise, independent of other states including biases
themselves, an assumption that is patently violated in practice. When this
assumption is lifted, the resulting model is not observable, and therefore past
analyses cannot be used to conclude that the set of states that are
indistinguishable from the measurements is a singleton. We therefore re-cast
the analysis as one of
sensitivity: Rather than attempting to prove that the indistinguishable set is
a singleton, which is not the case, we derive bounds on its volume, as a
function of characteristics of the input and its sufficient excitation. This
provides an explicit characterization of the indistinguishable set that can be
used for analysis and validation purposes.
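The role of bias modelling in observability can be seen in a toy linear analog (not the paper's nonlinear analysis): a scalar position x with an additive constant measurement bias b, so only the sum x + b is ever seen and the pair (x, b) is indistinguishable. The rank test below is the standard linear observability-matrix check, used here purely as an illustration.

```python
import numpy as np

def observability_rank(A, C):
    # Rank of the linear observability matrix [C; CA; ...; CA^(n-1)].
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.linalg.matrix_rank(np.vstack(blocks))

# State (x, b): x_dot = u, b_dot = 0, measurement y = x + b.
A = np.zeros((2, 2))
C = np.array([[1., 1.]])  # only the sum x + b is observed
```

Here the rank is 1 for a 2-state system, confirming that no input can separate x from b. The paper's point is that once bias rates are no longer modelled as independent noise, one must quantify the size of this indistinguishable set rather than assert it is a singleton.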
Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere
Constant biases associated to measured linear and angular velocities of a
moving object can be estimated from measurements of a static scene by embedded
brightness and depth sensors. We propose here a Lyapunov-based observer taking
advantage of the SO(3)-invariance of the partial differential equations
satisfied by the measured brightness and depth fields. The resulting asymptotic
observer is governed by a non-linear integro/partial differential system where
the two independent scalar variables indexing the pixels live on the unit
sphere of the 3D Euclidean space. The observer design and analysis are strongly
simplified by coordinate-free differential calculus on the unit sphere equipped
with its natural Riemannian structure. The observer convergence is investigated
under C^1 regularity assumptions on the object motion and its scene. It relies
on the Ascoli-Arzela theorem and pre-compactness of the observer trajectories.
It is proved that the estimated biases converge towards the true ones if, and
only if, the scene admits no cylindrical symmetry. The observer design can be
adapted to realistic sensors where brightness and depth data are only available
on a subset of the unit sphere. Preliminary simulations with synthetic
brightness and depth images (corrupted by around 10% noise) indicate that such
Lyapunov-based observers should be robust and convergent under much weaker
regularity assumptions.
Comment: 30 pages, 6 figures, submitted
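A drastically simplified planar analog can convey the observer's structure. To be clear about the assumptions: this is not the paper's SO(3) integro/partial-differential observer; the static scene is reduced to a single measured heading direction, the bias is on a scalar gyro rate, and the gains and names are illustrative. The Lyapunov-motivated correction drives both the heading estimate and the bias estimate using the planar cross product of the estimated and measured directions.

```python
import numpy as np

def run_observer(omega=0.5, b=0.2, dt=0.01, steps=4000, k_p=2.0, k_b=1.0):
    # True heading th rotates at rate omega; the gyro reads omega + b with
    # constant bias b. The static scene provides the measured direction
    # d = (cos th, sin th), standing in for the brightness/depth fields.
    th = 0.0
    th_hat, b_hat = 0.3, 0.0
    for _ in range(steps):
        omega_meas = omega + b
        d = np.array([np.cos(th), np.sin(th)])
        d_hat = np.array([np.cos(th_hat), np.sin(th_hat)])
        # Planar cross product of estimated and measured directions:
        # e = sin(th - th_hat), the Lyapunov-motivated correction term.
        e = d_hat[0] * d[1] - d_hat[1] * d[0]
        th_hat += (omega_meas - b_hat + k_p * e) * dt
        b_hat += -k_b * e * dt
        th += omega * dt
    return b_hat  # converges to the true bias b
```

In the paper, the correction term is instead built from the SO(3)-invariant brightness/depth field equations on the sphere, and convergence requires the scene to admit no cylindrical symmetry, the analog here being that the direction measurement is informative.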
On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However, real-time
optimization quickly becomes infeasible as the trajectory grows over time; this
problem is exacerbated by the fact that inertial measurements arrive at a high
rate, leading to fast growth of the number of variables in the
optimization. In this paper, we address this issue by preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a preintegration theory that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the maximum a posteriori state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a structureless model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.
Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
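The rotational part of preintegration can be sketched directly. This is a minimal illustration of the idea only, not the paper's full derivation, which also preintegrates velocity and position and propagates noise and bias Jacobians on the manifold: gyroscope samples between two keyframes are composed through the SO(3) exponential map into a single relative rotation.

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v).
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def exp_so3(w):
    # Rodrigues' formula: exponential map from so(3) to SO(3).
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate_rotation(gyro_samples, dt, bias=None):
    # Delta R_ij = prod_k Exp((omega_k - b) dt): the relative rotation
    # between keyframes i and j, independent of the (unknown) absolute
    # state at keyframe i.
    b = np.zeros(3) if bias is None else bias
    dR = np.eye(3)
    for w in gyro_samples:
        dR = dR @ exp_so3((np.asarray(w) - b) * dt)
    return dR
```

Because Delta R_ij does not depend on the state at keyframe i, the hundreds of IMU samples between two keyframes collapse into one relative-motion factor, which is what keeps the factor-graph optimization tractable as the trajectory grows.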