3D Position Tracking in Challenging Terrain
The intent of this paper is to show how the accuracy of 3D position tracking can be improved by considering rover locomotion in rough terrain as a holistic problem. An appropriate locomotion concept endowed with a controller minimizing slip improves the climbing performance, the accuracy of odometry, and the signal-to-noise ratio of the onboard sensors. Sensor fusion involving an inertial measurement unit, 3D-Odometry, and visual motion estimation is presented. The experimental results show clearly how each sensor contributes to increasing the accuracy of the 3D pose estimation in rough terrain.
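The fusion step described above can be illustrated with a minimal sketch. Assuming each of the three sources (3D-Odometry, the IMU, and visual motion estimation) yields an independent position estimate with a known variance, an inverse-variance weighted average is one standard way to combine them; the paper's actual filter is more elaborate, and the numbers below are made up for illustration:

```python
def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent scalar estimates.

    Returns the fused estimate and its (reduced) variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * z for w, z in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Hypothetical x-position estimates (m) from odometry, IMU integration,
# and visual motion estimation, with assumed variances (m^2).
x, var = fuse_estimates([1.02, 0.95, 1.00], [0.04, 0.09, 0.01])
```

The fused variance is always smaller than the best individual variance, which mirrors the paper's observation that each sensor contributes to the accuracy of the final pose estimate.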
Keep Rollin' - Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots
We show dynamic locomotion strategies for wheeled quadrupedal robots, which
combine the advantages of both walking and driving. The developed optimization
framework tightly integrates the additional degrees of freedom introduced by
the wheels. Our approach relies on a zero-moment point based motion
optimization which continuously updates reference trajectories. The reference
motions are tracked by a hierarchical whole-body controller which computes
optimal generalized accelerations and contact forces by solving a sequence of
prioritized tasks including the nonholonomic rolling constraints. Our approach
has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled
including the non-steerable wheels attached to its legs. We conducted
experiments on flat and inclined terrains as well as over steps, whereby we
show that integrating the wheels into the motion control and planning framework
results in intuitive motion trajectories, which enable more robust and dynamic
locomotion compared to other wheeled-legged robots. Moreover, at a speed of 4
m/s and with an 83% reduction in the cost of transport, we demonstrate the
superiority of wheeled-legged robots over their legged counterparts.
Comment: IEEE Robotics and Automation Letters
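The zero-moment point at the core of such motion optimization can be sketched with the common cart-table model, in which the ZMP is derived from the centre-of-mass position and acceleration. This is a generic textbook formulation, not the authors' exact implementation:

```python
def zmp(com_pos, com_acc, g=9.81):
    """Zero-moment point under the cart-table model:
    p = x - (z / g) * x_ddot, applied per horizontal axis."""
    x, y, z = com_pos
    ax, ay, _ = com_acc
    return (x - z / g * ax, y - z / g * ay)

# CoM at 0.5 m height accelerating forward at 1 m/s^2:
# the ZMP shifts behind the CoM's ground projection.
px, py = zmp((0.0, 0.0, 0.5), (1.0, 0.0, 0.0))
```

Keeping this point inside the support region spanned by the wheel contacts is the kind of constraint a ZMP-based optimizer enforces while it continuously updates the reference trajectories.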
Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs
Legged robots can outperform wheeled machines for most navigation tasks
across unknown and rough terrains. For such tasks, visual feedback is a
fundamental asset to provide robots with terrain-awareness. However, robust
dynamic locomotion on difficult terrains with real-time performance guarantees
remains a challenge. We present here a real-time, dynamic foothold adaptation
strategy based on visual feedback. Our method adjusts the landing position of
the feet in a fully reactive manner, using only on-board computers and sensors.
The correction is computed and executed continuously along the swing phase
trajectory of each leg. To efficiently adapt the landing position, we implement
a self-supervised foothold classifier based on a Convolutional Neural Network
(CNN). Our method computes corrections up to 200 times faster than the
full heuristic evaluation. Our goal is to react to visual stimuli from the
environment, bridging the gap between blind reactive locomotion and purely
vision-based planning strategies. We assess the performance of our method on
the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds
up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe
foothold adaptation is clearly demonstrated by the overall robot behavior.
Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 2019
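A foothold-adaptation step of the kind such a CNN learns to approximate can be sketched as a search over a local heightmap: reject cells whose local roughness exceeds a threshold, then pick the admissible cell nearest the nominal landing position. The function name, the roughness measure, and the threshold below are illustrative assumptions, not the paper's heuristic:

```python
def adjust_foothold(heightmap, nominal, max_rough=0.2):
    """Return the grid cell nearest `nominal` whose local roughness
    (max height difference to its 8-neighbourhood) is acceptable.
    Hypothetical stand-in for the learned foothold classifier."""
    rows, cols = len(heightmap), len(heightmap[0])

    def roughness(r, c):
        h = heightmap[r][c]
        neigh = [heightmap[r + dr][c + dc]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if 0 <= r + dr < rows and 0 <= c + dc < cols]
        return max(abs(h - n) for n in neigh)

    candidates = [(abs(r - nominal[0]) + abs(c - nominal[1]), (r, c))
                  for r in range(rows) for c in range(cols)
                  if roughness(r, c) <= max_rough]
    return min(candidates)[1] if candidates else nominal

# A 5x5 patch that is flat except for an obstacle under the nominal foothold:
# the correction moves the landing position off the obstacle.
patch = [[0.0] * 5 for _ in range(5)]
patch[2][2] = 1.0
adjusted = adjust_foothold(patch, (2, 2))
```

Evaluating a heuristic like this exhaustively over every candidate cell is what makes the baseline slow; the CNN in the paper amortizes that cost by learning the mapping from terrain patch to foothold score, which is where the reported speed-up comes from.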