Autonomous Locomotion Mode Transition Simulation of a Track-legged Quadruped Robot Step Negotiation
Multi-modal locomotion (e.g. terrestrial, aerial, and aquatic) is gaining
increasing interest in robotics research, as it improves a robot's
environmental adaptability, locomotion versatility, and operational
flexibility. Among terrestrial multi-modal locomotion robots, the advantage
of hybrid robots stems from their multiple (two or more) locomotion modes,
which the robot can select between depending on the terrain conditions it
encounters. However, improving the autonomy of the transition between these
locomotion modes remains challenging. This work proposes a method to realize
an autonomous locomotion mode transition of a track-legged quadruped robot
during step negotiation. The autonomy of the decision-making process is
achieved by a proposed criterion that compares the energy performance of the
rolling and walking locomotion modes. Two climbing gaits are proposed to
achieve smooth step negotiation behaviours for energy evaluation purposes.
Simulations showed that autonomous locomotion mode transitions were realized
for the negotiation of steps of different heights. The proposed method is
generic enough to be applied to other hybrid robots after some pre-study of
their locomotion energy performance.
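The decision criterion described above can be sketched as a simple energy
comparison. The cost models below are illustrative placeholders, not the
paper's actual energy models, which would come from the pre-studies it
mentions.

```python
# Hypothetical sketch of an energy-based locomotion mode selection:
# estimate the energy cost of each mode for the step ahead and pick the
# cheaper one. Both cost functions are toy models, not from the paper.

def rolling_energy(step_height_m: float) -> float:
    """Toy model: rolling cost grows steeply with step height."""
    return 5.0 + 40.0 * step_height_m ** 2

def walking_energy(step_height_m: float) -> float:
    """Toy model: walking has a higher base cost but scales gently."""
    return 12.0 + 8.0 * step_height_m

def select_mode(step_height_m: float) -> str:
    """Autonomous mode transition: choose the mode with lower predicted energy."""
    if rolling_energy(step_height_m) <= walking_energy(step_height_m):
        return "rolling"
    return "walking"

print(select_mode(0.05))  # low step -> rolling
print(select_mode(0.60))  # tall step -> walking
```

With these toy costs, the crossover height where the robot switches modes
falls out of the comparison automatically, which is the sense in which the
transition is "autonomous".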
Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain
In this paper we tackle the problem of visually predicting surface friction
for environments with diverse surfaces, and integrating this knowledge into
biped robot locomotion planning. The problem is essential for autonomous robot
locomotion since diverse surfaces with varying friction abound in the real
world, from wood to ceramic tiles, grass or ice, which may cause difficulties
or huge energy costs for robot locomotion if not considered. We propose to
estimate friction and its uncertainty from visual estimation of material
classes using convolutional neural networks, together with probability
distribution functions of friction associated with each material. We then
robustly integrate the friction predictions into a hierarchical (footstep and
full-body) planning method using chance constraints, and optimize the same
trajectory costs at both levels of the planning method for consistency. Our
solution achieves fully autonomous perception and locomotion on slippery
terrain, which considers not only friction and its uncertainty, but also
collision, stability and trajectory cost. We show promising friction prediction
results in real pictures of outdoor scenarios, and planning experiments on a
real robot facing surfaces with different friction.
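The friction pipeline above can be sketched as follows. The material list,
the per-material friction distributions, and the safety threshold are all
illustrative assumptions; only the overall structure (class probabilities
combined with per-material friction distributions, then a chance constraint)
follows the abstract.

```python
import math

# Hedged sketch: combine (assumed) CNN material-class probabilities with
# per-material friction distributions, then apply a chance constraint.
# All numbers below are illustrative, not measured values.

FRICTION_PRIORS = {  # material -> (mean, std) of friction coefficient
    "wood":    (0.45, 0.05),
    "ceramic": (0.30, 0.08),
    "grass":   (0.35, 0.10),
    "ice":     (0.05, 0.02),
}

def friction_estimate(class_probs: dict) -> tuple:
    """Moment-match the mixture over materials to a single mean/std."""
    mean = sum(p * FRICTION_PRIORS[m][0] for m, p in class_probs.items())
    second = sum(p * (FRICTION_PRIORS[m][1] ** 2 + FRICTION_PRIORS[m][0] ** 2)
                 for m, p in class_probs.items())
    return mean, math.sqrt(max(second - mean ** 2, 0.0))

def footstep_is_safe(class_probs: dict, mu_required: float) -> bool:
    """Chance constraint under a Gaussian approximation: friction exceeds
    the requirement with ~95% confidence (1.645 = one-sided 95% z-value)."""
    mean, std = friction_estimate(class_probs)
    return mean - 1.645 * std >= mu_required

print(footstep_is_safe({"wood": 0.9, "ceramic": 0.1}, mu_required=0.2))  # True
print(footstep_is_safe({"ice": 1.0}, mu_required=0.2))                   # False
```

The footstep planner would then only admit steps for which the chance
constraint holds, which is how uncertainty enters the hierarchical planner.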
Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of
the DLR Crawler - a six-legged, actively compliant walking
robot prototype - is presented. The robot implements
a walking layer with a simple tripod and a more complex
biologically inspired gait. Using a variety of proprioceptive
sensors, different reflexes for reactively crossing obstacles
within the walking height are realised. On top of
the walking layer, a navigation layer provides the ability
to autonomously navigate to a predefined goal point in
unknown rough terrain using a stereo camera. A model
of the environment is created, the terrain traversability is
estimated and an optimal path is planned. The difficulty
of the path can be influenced by behavioral parameters.
Motion commands are sent to the walking layer and the
gait pattern is switched according to the estimated terrain
difficulty. The interaction between the walking and navigation
layers was tested in different experimental setups.
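The navigation-to-walking interface described above can be sketched as a
small dispatch: the navigation layer estimates terrain difficulty and the
gait pattern is switched accordingly. The threshold and gait names are
assumptions, not taken from the DLR Crawler software.

```python
# Illustrative sketch of gait switching driven by estimated terrain
# difficulty; threshold and gait identifiers are hypothetical.

def select_gait(terrain_difficulty: float, threshold: float = 0.5) -> str:
    """Fast tripod gait on easy terrain; slower, biologically inspired
    gait on difficult terrain."""
    return "tripod" if terrain_difficulty < threshold else "biologically_inspired"

def motion_command(heading_rad: float, terrain_difficulty: float) -> dict:
    """Command sent from the navigation layer to the walking layer."""
    return {"heading_rad": heading_rad,
            "gait": select_gait(terrain_difficulty)}

print(motion_command(0.3, 0.8)["gait"])  # difficult terrain
```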
Footstep and Motion Planning in Semi-unstructured Environments Using Randomized Possibility Graphs
Traversing environments with arbitrary obstacles poses significant challenges
for bipedal robots. In some cases, whole body motions may be necessary to
maneuver around an obstacle, but most existing footstep planners can only
select from a discrete set of predetermined footstep actions; they are unable
to utilize the continuum of whole body motion that is truly available to the
robot platform. Existing motion planners that can utilize whole body motion
tend to struggle with the complexity of large-scale problems. We introduce a
planning method, called the "Randomized Possibility Graph", which uses
high-level approximations of constraint manifolds to rapidly explore the
"possibility" of actions, thereby allowing lower-level motion planners to be
utilized more efficiently. We demonstrate simulations of the method working in
a variety of semi-unstructured environments. In this context,
"semi-unstructured" means the walkable terrain is flat and even, but there are
arbitrary 3D obstacles throughout the environment which may need to be stepped
over or maneuvered around using whole body motions.
Comment: Accepted by IEEE International Conference on Robotics and Automation
201
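The core "possibility" idea can be sketched as a cheap approximate
feasibility check that gates an expensive planner. The obstacle model and
step-height limit below are illustrative assumptions, not the paper's
constraint-manifold approximations.

```python
# Hedged sketch: a fast high-level check classifies an action as impossible
# or possibly feasible, so the costly whole-body planner runs only where it
# might succeed. All models here are toy stand-ins.

def possibly_steppable(obstacle_height_m: float,
                       max_step_m: float = 0.25) -> bool:
    """Fast necessary condition; may admit false positives."""
    return obstacle_height_m <= max_step_m

def whole_body_step_plan(obstacle_height_m: float) -> str:
    """Stand-in for a costly lower-level whole-body motion planner."""
    return f"step-over trajectory ({obstacle_height_m:.2f} m obstacle)"

def plan_action(obstacle_height_m: float):
    """Query the possibility check first; prune impossible branches early."""
    if not possibly_steppable(obstacle_height_m):
        return None  # explore maneuvering around the obstacle instead
    return whole_body_step_plan(obstacle_height_m)

print(plan_action(0.10))
print(plan_action(0.60))  # None: pruned without invoking the planner
```

Pruning with the cheap check is what lets the lower-level planner "be
utilized more efficiently" in the abstract's phrasing.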
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which
areas of the environment they are able to pass in order to plan feasible paths.
We address traversability estimation as a heightmap classification problem: we
build a convolutional neural network that, given an image representing the
heightmap of a terrain patch, predicts whether the robot will be able to
traverse such patch from left to right. The classifier is trained for a
specific robot model (wheeled, tracked, legged, snake-like) using simulation
data on procedurally generated training terrains; the trained classifier can be
applied to unseen large heightmaps to yield oriented traversability maps, and
then plan traversable paths. We extensively evaluate the approach in simulation
on six real-world elevation datasets, and run a real-robot validation in one
indoor and one outdoor environment.
Comment: Webpage: http://romarcg.xyz/traversability_estimation
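The sliding evaluation over a large heightmap can be sketched as below. The
trained CNN is replaced by a toy rule (a maximum elevation jump along the
left-to-right traversal direction), and the patch size and threshold are
assumptions made purely for illustration.

```python
# Sketch of applying a per-patch classifier over a heightmap to produce a
# traversability map. The real method uses a trained CNN; a toy step-height
# rule stands in for it here.

def patch_traversable(patch, max_step=0.2):
    """Stand-in for the CNN: traversable left-to-right if no
    column-to-column elevation jump exceeds max_step."""
    for row in patch:
        for a, b in zip(row, row[1:]):
            if abs(b - a) > max_step:
                return False
    return True

def traversability_map(heightmap, patch_size=3):
    """Slide the classifier over the heightmap, one verdict per origin."""
    h, w = len(heightmap), len(heightmap[0])
    out = []
    for i in range(h - patch_size + 1):
        row = []
        for j in range(w - patch_size + 1):
            patch = [r[j:j + patch_size] for r in heightmap[i:i + patch_size]]
            row.append(patch_traversable(patch))
        out.append(row)
    return out

hm = [
    [0.0, 0.0, 0.1, 0.9],
    [0.0, 0.1, 0.1, 0.9],
    [0.0, 0.0, 0.0, 0.9],
]
print(traversability_map(hm))  # [[True, False]]: the 0.9 ledge blocks passage
```

Running the same classifier at several rotations of the heightmap yields the
oriented traversability maps the abstract refers to.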
Unsupervised Contact Learning for Humanoid Estimation and Control
This work presents a method for contact state estimation using fuzzy
clustering to learn contact probability for full, six-dimensional humanoid
contacts. The data required for training is solely from proprioceptive sensors
- endeffector contact wrench sensors and inertial measurement units (IMUs) -
and the method is completely unsupervised. The resulting cluster means are used
to efficiently compute the probability of contact in each of the six
endeffector degrees of freedom (DoFs) independently. This clustering-based
contact probability estimator is validated in a kinematics-based base state
estimator in a simulation environment with realistic added sensor noise for
locomotion over rough, low-friction terrain on which the robot is subject to
foot slip and rotation. The proposed base state estimator which utilizes these
six DoF contact probability estimates is shown to perform considerably better
than that which determines kinematic contact constraints purely based on
measured normal force.
Comment: Submitted to the IEEE International Conference on Robotics and
Automation (ICRA) 201
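The clustering-based probability can be sketched for a single end-effector
DoF as follows: given learned cluster means for "no contact" and "contact",
the fuzzy c-means membership of a new measurement serves as the contact
probability. The cluster means and the fuzzifier m = 2 are illustrative
assumptions, not values from the paper.

```python
# Hedged sketch: fuzzy c-means membership as a per-DoF contact probability.
# Cluster means (in newtons of normal force) are hypothetical.

def fuzzy_membership(x: float, centers: list, m: float = 2.0) -> list:
    """Fuzzy c-means membership of x to each center; degrees sum to 1."""
    dists = [abs(x - c) for c in centers]
    if any(d == 0.0 for d in dists):  # measurement exactly at a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** exp for dj in dists) for di in dists]

CENTERS = [0.0, 200.0]  # [no-contact mean, contact mean], assumed values

def contact_probability(force_n: float) -> float:
    """Membership in the 'contact' cluster for one DoF."""
    return fuzzy_membership(force_n, CENTERS)[1]

print(round(contact_probability(180.0), 3))  # close to the contact mean
```

Computing this membership independently in each of the six DoFs gives the
per-DoF contact probabilities that the base state estimator consumes.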