Robust Legged Robot State Estimation Using Factor Graph Optimization
Legged robots, specifically quadrupeds, are becoming increasingly attractive
for industrial applications such as inspection. However, leaving the
laboratory and becoming useful to an end user requires reliability in harsh
conditions. From the perspective of state estimation, it is essential to be
able to accurately estimate the robot's state despite challenges such as uneven
or slippery terrain, textureless and reflective scenes, as well as dynamic
camera occlusions. We are motivated to reduce the dependency on foot contact
classifications, which fail when slipping, and to reduce position drift during
dynamic motions such as trotting. To this end, we present a factor graph
optimization method for state estimation which tightly fuses and smooths
inertial navigation, leg odometry and visual odometry. The effectiveness of the
approach is demonstrated using the ANYmal quadruped robot navigating in a
realistic outdoor industrial environment. This experiment included trotting,
walking, crossing obstacles and ascending a staircase. The proposed approach
decreased the relative position error by up to 55% and absolute position error
by 76% compared to kinematic-inertial odometry.
Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 201
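The core fusion idea can be illustrated in one dimension: when two odometry sources each measure the same per-step displacement, the maximum-a-posteriori estimate for a chain-structured factor graph is the inverse-variance weighted average of the two measurements. This is a toy sketch under that simplification, not the paper's ANYmal pipeline, which fuses preintegrated IMU, leg, and visual factors over full SE(3) poses; all names and numbers below are illustrative.

```python
# Minimal 1-D illustration of fusing two odometry sources in a factor graph.
# Each trajectory edge carries two relative-motion measurements (leg and
# visual odometry); for a chain graph the MAP estimate of each increment is
# the inverse-variance weighted average of the two measurements.

def fuse_increments(leg_deltas, vis_deltas, sigma_leg, sigma_vis):
    """Inverse-variance fusion of per-step displacement measurements."""
    w_leg = 1.0 / sigma_leg ** 2
    w_vis = 1.0 / sigma_vis ** 2
    return [(w_leg * dl + w_vis * dv) / (w_leg + w_vis)
            for dl, dv in zip(leg_deltas, vis_deltas)]

def integrate(x0, deltas):
    """Integrate fused increments into an absolute trajectory."""
    traj = [x0]
    for d in deltas:
        traj.append(traj[-1] + d)
    return traj

# Hypothetical data: leg odometry over-reports during a slip on step 2,
# while visual odometry stays close to the true motion.
leg = [0.10, 0.30, 0.10]
vis = [0.10, 0.12, 0.10]
fused = fuse_increments(leg, vis, sigma_leg=0.05, sigma_vis=0.02)
print(integrate(0.0, fused))
```

Because the visual measurement is trusted more (smaller sigma), the fused slip-step increment lands much closer to 0.12 than to 0.30, which is the mechanism by which tight fusion suppresses contact-slip drift.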
Neural Volumetric Memory for Visual Locomotion Control
Legged robots have the potential to expand the reach of autonomy beyond paved
roads. In this work, we consider the difficult problem of locomotion on
challenging terrains using a single forward-facing depth camera. Due to the
partial observability of the problem, the robot has to rely on past
observations to infer the terrain currently beneath it. To solve this problem,
we follow the paradigm in computer vision that explicitly models the 3D
geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric
memory architecture that explicitly accounts for the SE(3) equivariance of the
3D world. NVM aggregates feature volumes from multiple camera views by first
bringing them back to the ego-centric frame of the robot. We test the learned
visual-locomotion policy on a physical robot and show that our approach, which
explicitly introduces geometric priors during training, outperforms more naïve
methods. We also include ablation studies and
show that the representations stored in the neural volumetric memory capture
sufficient geometric information to reconstruct the scene. Our project page
with videos is https://rchalyang.github.io/NVM .
Comment: CVPR 2023 Highlight.
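The aggregation step described above can be sketched in the plane: observations made at past robot poses are re-expressed in the current body frame before being fused, so a static obstacle looks the same regardless of where it was seen from. This is a planar SE(2) toy of the idea only; the paper warps full 3-D feature volumes under SE(3) motion, and all names below are illustrative.

```python
import math

# Toy sketch of the ego-centric aggregation behind Neural Volumetric Memory:
# points observed from several past poses are mapped into the current robot
# body frame before fusion. Planar SE(2) version for clarity.

def to_ego_frame(point_world, pose):
    """Map a world-frame point into the ego frame of pose = (x, y, yaw)."""
    x, y, yaw = pose
    dx, dy = point_world[0] - x, point_world[1] - y
    c, s = math.cos(yaw), math.sin(yaw)
    # Apply the inverse rotation: world frame -> body frame.
    return (c * dx + s * dy, -s * dx + c * dy)

def aggregate(points_with_poses, current_pose):
    """Bring points observed from several past poses into the current frame."""
    ego_points = []
    for point_world, _observed_from in points_with_poses:
        ego_points.append(to_ego_frame(point_world, current_pose))
    return ego_points

# An obstacle fixed in the world yields consistent coordinates across
# viewpoints once every observation is expressed in the current ego frame.
obstacle = (2.0, 0.0)
past = [(obstacle, (0.0, 0.0, 0.0)), (obstacle, (1.0, 0.0, 0.0))]
print(aggregate(past, current_pose=(1.0, 0.0, 0.0)))
```

Both observations collapse to the same ego-frame coordinate, which is the consistency property the memory exploits when inferring terrain that has scrolled out of the camera's view.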
Humanoid Robots
For many years, humans have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with growing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, many of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology. It brings together studies related not only to Robotics but also to Biomechanics, Biomimetics, and Cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, who analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.
Rethinking Sim2Real: Lower Fidelity Simulation Leads to Higher Sim2Real Transfer in Navigation
If we want to train robots in simulation before deploying them in reality, it
seems natural and almost self-evident to presume that reducing the sim2real gap
involves creating simulators of increasing fidelity (since reality is what it
is). We challenge this assumption and present a contrary hypothesis -- sim2real
transfer of robots may be improved with lower (not higher) fidelity simulation.
We conduct a systematic large-scale evaluation of this hypothesis on the
problem of visual navigation -- in the real world, and on 2 different
simulators (Habitat and iGibson) using 3 different robots (A1, AlienGo, Spot).
Our results show that, contrary to expectation, adding fidelity does not help
with learning; performance is poor due to slow simulation speed (preventing
large-scale learning) and overfitting to inaccuracies in simulation physics.
Instead, building simple models of the robot motion using real-world data can
improve learning and generalization.
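The "simple model from real-world data" idea can be made concrete with the smallest possible case: fit a scalar gain mapping commanded forward velocity to the velocity the hardware actually achieves, via closed-form least squares. The paper's learned models are richer than this; the data and names below are purely illustrative.

```python
# Sketch of fitting a simple robot-motion model from logged real-world data:
# a scalar gain k such that achieved_velocity ~= k * commanded_velocity,
# estimated by least squares through the origin.

def fit_gain(commanded, achieved):
    """Least-squares slope through the origin: k = sum(c*a) / sum(c*c)."""
    num = sum(c * a for c, a in zip(commanded, achieved))
    den = sum(c * c for c in commanded)
    return num / den

def predict(k, commanded):
    """Predicted achieved velocities under the fitted linear model."""
    return [k * c for c in commanded]

# Hypothetical logged data: the real robot consistently undershoots commands.
cmd = [0.2, 0.4, 0.6, 0.8]
ach = [0.16, 0.33, 0.48, 0.65]
k = fit_gain(cmd, ach)
print(round(k, 3))
```

A policy trained against such a data-calibrated response model sees the robot's true (imperfect) command tracking without paying for a high-fidelity physics simulation, which is the trade-off the abstract argues for.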
Coupling Vision and Proprioception for Navigation of Legged Robots
We exploit the complementary strengths of vision and proprioception to
develop a point-goal navigation system for legged robots, called VP-Nav. Legged
systems are capable of traversing more complex terrain than wheeled robots, but
to fully utilize this capability, we need a high-level path planner in the
navigation system to be aware of the walking capabilities of the low-level
locomotion policy in varying environments. We achieve this by using
proprioceptive feedback to ensure the safety of the planned path by sensing
unexpected obstacles like glass walls, terrain properties like slipperiness or
softness of the ground and robot properties like extra payload that are likely
missed by vision. The navigation system uses onboard cameras to generate an
occupancy map and a corresponding cost map to reach the goal. A fast marching
planner then generates a target path. A velocity command generator takes this
as input to generate the desired velocity for the walking policy. A safety
advisor module adds sensed unexpected obstacles to the occupancy map and
environment-determined speed limits to the velocity command generator. We show
superior performance compared to wheeled robot baselines and to ablations with
disjoint high-level planning and low-level control. We also show the
real-world deployment of VP-Nav on a quadruped robot with onboard sensors and
computation. Videos at https://navigation-locomotion.github.io
Comment: CVPR 2022 final version.