Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation
This paper derives a contact-aided inertial navigation observer for a 3D
bipedal robot using the theory of invariant observer design. Aided inertial
navigation is fundamentally a nonlinear observer design problem; thus, current
solutions are based on approximations of the system dynamics, such as an
Extended Kalman Filter (EKF), which uses a system's Jacobian linearization
along the current best estimate of its trajectory. Building on the theory of
invariant observer design by Barrau and Bonnabel, in particular the Invariant
EKF (InEKF), we show that the error dynamics of the point
contact-inertial system follows a log-linear autonomous differential equation;
hence, the observable state variables can be rendered convergent with a domain
of attraction that is independent of the system's trajectory. Due to the
log-linear form of the error dynamics, it is not necessary to perform a
nonlinear observability analysis to show that when using an Inertial
Measurement Unit (IMU) and contact sensors, the absolute position of the robot
and a rotation about the gravity vector (yaw) are unobservable. We further
augment the state of the developed InEKF with IMU biases, as the online
estimation of these parameters has a crucial impact on system performance. We
evaluate the convergence of the proposed system against that of the commonly
used quaternion-based EKF observer using Monte Carlo simulations. In addition, our
experimental evaluation using a Cassie-series bipedal robot shows that the
contact-aided InEKF provides better performance in comparison with the
quaternion-based EKF as a result of exploiting symmetries present in the system
dynamics.

Comment: Published in the proceedings of Robotics: Science and Systems 2018
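To make the trajectory-independence claim concrete, here is a minimal numerical
sketch (illustrative only, not the paper's implementation; all variable names are
assumptions) of the invariance property underlying the InEKF: for attitude
kinematics Rdot = R * hat(omega) driven by a shared body-frame gyro input, the
right-invariant error between two rotation trajectories, and hence its matrix
logarithm, is unaffected by the trajectory itself.

    # Minimal sketch (illustrative, not the paper's code): for Rdot = R*hat(omega)
    # driven by a shared gyro input, the right-invariant error R_est @ R_true.T is
    # constant, so its log evolves trivially regardless of the trajectory.
    import numpy as np
    from scipy.linalg import expm, logm

    def hat(w):
        """Map a 3-vector to the corresponding so(3) skew-symmetric matrix."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    rng = np.random.default_rng(0)
    R_true = expm(hat(rng.standard_normal(3)))  # true attitude
    R_est = expm(hat(rng.standard_normal(3)))   # estimate with initial error

    print(logm(R_est @ R_true.T).round(6))      # log of right-invariant error

    dt = 0.01
    for _ in range(500):
        step = expm(hat(rng.standard_normal(3) * dt))  # shared gyro increment
        R_true = R_true @ step                  # integrate Rdot = R*hat(omega)
        R_est = R_est @ step

    print(logm(R_est @ R_true.T).round(6))      # unchanged by the trajectory

Both prints show the same skew-symmetric log-error: the degenerate (zero-dynamics)
case of the log-linear error equation whose domain of attraction the abstract
states is independent of the system's trajectory.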
Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track the robot's motion during aggressive maneuvers.

Comment: 20 pages, 7 figures, 2 tables
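As a concrete illustration of the greedy, submodularity-backed selection the
abstract describes, the sketch below fills a feature budget by repeatedly adding
the feature with the largest marginal gain of a log-determinant utility. The
utility, the per-feature information matrices, and all names are stand-in
assumptions rather than the paper's task-driven VIN metric; the classic
(1 - 1/e) near-optimality guarantee applies because log-det over sums of
positive-semidefinite matrices is monotone submodular.

    # Hedged sketch of greedy submodular feature selection (stand-in utility,
    # not the paper's metric): pick features one at a time, each time taking
    # the one with the largest marginal gain in log-det information.
    import numpy as np

    def logdet_utility(selected, infos, prior):
        """Utility of a feature set: log-det of prior + summed feature information."""
        M = prior + sum(infos[i] for i in selected)
        return np.linalg.slogdet(M)[1]

    def greedy_select(infos, prior, budget):
        selected, remaining = [], set(range(len(infos)))
        for _ in range(budget):
            # Maximizing the full utility of selected + [i] is equivalent to
            # maximizing the marginal gain, since utility(selected) is fixed.
            best = max(remaining,
                       key=lambda i: logdet_utility(selected + [i], infos, prior))
            selected.append(best)
            remaining.remove(best)
        return selected

    rng = np.random.default_rng(1)
    # Hypothetical per-feature rank-1 information matrices (e.g., from bearing
    # Jacobians); the prior keeps the log-det well-defined.
    infos = [np.outer(v, v) for v in rng.standard_normal((50, 6))]
    prior = 1e-3 * np.eye(6)
    print(greedy_select(infos, prior, budget=10))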