Gaussian-Process-based Robot Learning from Demonstration
Endowed with higher levels of autonomy, robots are required to perform
increasingly complex manipulation tasks. Learning from demonstration is
emerging as a promising paradigm for transferring skills to robots. It makes it
possible to learn task constraints implicitly by observing the motion executed
by a human teacher, which can enable adaptive behavior. We present a novel
Gaussian-Process-based learning from demonstration approach. This probabilistic
representation allows generalization over multiple demonstrations and encodes
variability along the different phases of the task. In this paper, we address
how Gaussian Processes can be used to effectively learn a policy from
trajectories in task space. We also present a method to efficiently adapt the
policy to fulfill new requirements, and to modulate the robot behavior as a
function of task variability. This approach is illustrated through a real-world
application using the TIAGo robot.
Comment: 8 pages, 10 figures
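As an illustrative sketch only (not the paper's implementation), closed-form GP regression over phase-indexed demonstrations yields a mean trajectory together with a predictive variance that encodes per-phase variability. The squared-exponential kernel, its hyperparameters, and the synthetic demonstrations below are all assumptions:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.1, variance=1.0):
    """Squared-exponential kernel between two vectors of scalar inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_fit_predict(t_train, y_train, t_query, noise=1e-2):
    """Closed-form GP regression: predictive mean and std at t_query."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_query, t_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks.T)
    var = rbf_kernel(t_query, t_query).diagonal() - np.sum(Ks * v.T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Three noisy demonstrations of a 1-D reaching motion, indexed by phase.
phase = np.linspace(0.0, 1.0, 20)
demos = [np.sin(np.pi * phase) + 0.02 * np.random.RandomState(s).randn(20)
         for s in range(3)]
t_train = np.tile(phase, 3)
y_train = np.concatenate(demos)

mean, std = gp_fit_predict(t_train, y_train, phase)
```

The predictive std is what makes "variability along the phases" usable: phases where the demonstrations agree get low variance, and a downstream controller can stiffen tracking there.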
Keep Rollin' - Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots
We present dynamic locomotion strategies for wheeled quadrupedal robots, which
combine the advantages of both walking and driving. The developed optimization
framework tightly integrates the additional degrees of freedom introduced by
the wheels. Our approach relies on a zero-moment point based motion
optimization which continuously updates reference trajectories. The reference
motions are tracked by a hierarchical whole-body controller which computes
optimal generalized accelerations and contact forces by solving a sequence of
prioritized tasks including the nonholonomic rolling constraints. Our approach
has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled
including the non-steerable wheels attached to its legs. We conducted
experiments on flat and inclined terrains as well as over steps, whereby we
show that integrating the wheels into the motion control and planning framework
results in intuitive motion trajectories, which enable more robust and dynamic
locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4
m/s and an 83% reduction in the cost of transport, we demonstrate the
superiority of wheeled-legged robots over their legged counterparts.
Comment: IEEE Robotics and Automation Letters
Robust Legged Robot State Estimation Using Factor Graph Optimization
Legged robots, specifically quadrupeds, are becoming increasingly attractive
for industrial applications such as inspection. However, leaving the laboratory
and becoming useful to an end user requires reliability in harsh
conditions. From the perspective of state estimation, it is essential to be
able to accurately estimate the robot's state despite challenges such as uneven
or slippery terrain, textureless and reflective scenes, as well as dynamic
camera occlusions. We are motivated to reduce the dependency on foot contact
classifications, which fail when slipping, and to reduce position drift during
dynamic motions such as trotting. To this end, we present a factor graph
optimization method for state estimation which tightly fuses and smooths
inertial navigation, leg odometry and visual odometry. The effectiveness of the
approach is demonstrated using the ANYmal quadruped robot navigating in a
realistic outdoor industrial environment. This experiment included trotting,
walking, crossing obstacles and ascending a staircase. The proposed approach
decreased the relative position error by up to 55% and absolute position error
by 76% compared to kinematic-inertial odometry.
Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 2019
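In the linear special case, fusing two odometry streams in a factor graph reduces to stacked weighted least squares on a pose chain. The 1-D chain, the weights, and the measurements below are illustrative assumptions, not the paper's formulation (which fuses inertial, leg, and visual factors over full 3-D poses):

```python
import numpy as np

def solve_pose_chain(prior, leg_odom, vis_odom, w_leg=1.0, w_vis=4.0):
    """Fuse two relative-odometry streams over a 1-D pose chain.

    Each factor contributes a weighted linear residual; stacking them
    and solving in the least-squares sense is the linear special case
    of factor-graph optimization.
    """
    n = len(leg_odom) + 1
    rows, rhs = [], []
    # prior factor anchors the first pose
    e0 = np.zeros(n); e0[0] = 1.0
    rows.append(e0); rhs.append(prior)
    # relative factors: x[i+1] - x[i] should match each measurement
    for i, (dl, dv) in enumerate(zip(leg_odom, vis_odom)):
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(np.sqrt(w_leg) * r); rhs.append(np.sqrt(w_leg) * dl)
        rows.append(np.sqrt(w_vis) * r); rhs.append(np.sqrt(w_vis) * dv)
    A = np.vstack(rows)
    x, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return x

# Leg odometry over-counts (foot slip) while visual odometry stays accurate;
# the higher visual weight pulls the estimate toward it.
x = solve_pose_chain(0.0, leg_odom=[1.2, 1.3], vis_odom=[1.0, 1.0])
```

Each fused increment is the precision-weighted mean of the two measurements, which is exactly how down-weighting slip-prone leg odometry reduces drift in the smoothed estimate.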
AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming
The combination of aerial survey capabilities of Unmanned Aerial Vehicles
with targeted intervention abilities of agricultural Unmanned Ground Vehicles
can significantly improve the effectiveness of robotic systems applied to
precision agriculture. In this context, building and updating a common map of
the field is an essential but challenging task. The maps built by robots of
different types differ in size, resolution, and scale; the associated
geolocation data may be inaccurate and biased; and the repetitiveness of both
visual appearance and geometric structure found in agricultural settings
renders classical map-merging techniques ineffective. In this paper, we propose
AgriColMap, a novel map registration pipeline that leverages a grid-based
multimodal environment representation which includes a vegetation index map and
a Digital Surface Model. We cast the data association problem between maps
built from UAVs and UGVs as a multimodal, large displacement dense optical flow
estimation. The dominant, coherent flows, selected using a voting scheme, are
used as point-to-point correspondences to infer a preliminary non-rigid
alignment between the maps. A final refinement is then performed by exploiting
only meaningful parts of the registered maps. We evaluate our system using
real-world data from three fields with different crop species. The results show
that our method outperforms several state-of-the-art map registration and
matching techniques by a large margin, and has a higher tolerance to large initial
misalignments. We release an implementation of the proposed approach along with
the acquired datasets with this paper.
Comment: Published in IEEE Robotics and Automation Letters, 2019
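The voting-based selection of dominant, coherent flows followed by alignment can be sketched as below. This is a toy affine stand-in for the paper's non-rigid alignment, and the bin size, synthetic points, and flows are all assumptions:

```python
import numpy as np

def dominant_flow_correspondences(pts, flows, bin_size=0.5):
    """Vote flow vectors into coarse bins; keep only the dominant ones."""
    bins = np.round(flows / bin_size).astype(int)
    keys, counts = np.unique(bins, axis=0, return_counts=True)
    best = keys[np.argmax(counts)]
    mask = np.all(bins == best, axis=1)
    return pts[mask], pts[mask] + flows[mask]

def fit_affine(src, dst):
    """Least-squares 2-D affine map from src to dst correspondences."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M  # 3x2 matrix: dst ~ [src 1] @ M

# Synthetic grid with a dominant translation plus a few outlier flows.
rng = np.random.RandomState(0)
pts = rng.rand(40, 2) * 10
flows = np.tile([2.0, -1.0], (40, 1))
flows[:5] += 5.0  # outliers, rejected by the voting step
src, dst = dominant_flow_correspondences(pts, flows)
M = fit_affine(src, dst)
```

The voting step plays the role of outlier rejection: only flows agreeing with the dominant motion become point-to-point correspondences, so the subsequent fit is not corrupted by spurious matches from repetitive crop rows.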
Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs
Legged robots can outperform wheeled machines for most navigation tasks
across unknown and rough terrains. For such tasks, visual feedback is a
fundamental asset to provide robots with terrain-awareness. However, robust
dynamic locomotion on difficult terrains with real-time performance guarantees
remains a challenge. We present here a real-time, dynamic foothold adaptation
strategy based on visual feedback. Our method adjusts the landing position of
the feet in a fully reactive manner, using only on-board computers and sensors.
The correction is computed and executed continuously along the swing phase
trajectory of each leg. To efficiently adapt the landing position, we implement
a self-supervised foothold classifier based on a Convolutional Neural Network
(CNN). Our method computes footholds up to 200 times faster than the
full-blown heuristic evaluation. Our goal is to react to visual stimuli from the
environment, bridging the gap between blind reactive locomotion and purely
vision-based planning strategies. We assess the performance of our method on
the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds
up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe
foothold adaptation is clearly demonstrated by the overall robot behavior.
Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 2019
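The kind of heuristic foothold evaluation that such a self-supervised CNN learns to approximate can be caricatured as local flatness scoring on a heightmap. This numpy sketch involves no actual CNN; the patch size, search radius, scoring rule, and terrain are all assumptions made for illustration:

```python
import numpy as np

def foothold_scores(heightmap, patch=3):
    """Score each cell by local flatness (negative height variance).

    A hand-crafted stand-in for the heuristic evaluation that a learned
    classifier approximates at a fraction of the computational cost.
    """
    h, w = heightmap.shape
    r = patch // 2
    scores = np.full((h, w), -np.inf)
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = heightmap[i - r:i + r + 1, j - r:j + r + 1]
            scores[i, j] = -np.var(window)  # flatter is better
    return scores

def adjust_landing(heightmap, nominal_ij, search=2):
    """Shift the nominal foothold to the best-scoring nearby cell."""
    scores = foothold_scores(heightmap)
    i0, j0 = nominal_ij
    best, best_ij = -np.inf, nominal_ij
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            s = scores[i0 + di, j0 + dj]
            if s > best:
                best, best_ij = s, (i0 + di, j0 + dj)
    return best_ij

# Flat terrain with a step edge; the nominal foothold sits on the edge.
hm = np.zeros((10, 10))
hm[:, 6:] = 0.15  # step
corrected = adjust_landing(hm, (5, 5))
```

A network trained on such scores replaces the inner double loop with a single forward pass, which is where the reported speed-up over the full heuristic comes from.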