A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks
Exploiting interaction with the environment is a promising and powerful way
to enhance the stability and robustness of humanoid robots while they execute
locomotion and manipulation tasks. Recently, some works have begun to show
advances in this direction by considering humanoid locomotion with
multi-contacts, but to develop such abilities more autonomously, we first
need to understand and classify the variety of possible poses a humanoid
robot can achieve to balance. To this end, we propose the adaptation of a
successful idea widely used in the field of robot grasping to the field of
humanoid balance with multi-contacts: a whole-body pose taxonomy classifying
the set of whole-body robot configurations that use the environment to enhance
stability. We have revised the classification criteria used to develop
grasping taxonomies, focusing on structuring and simplifying the large number
of possible poses the human body can adopt. We propose a taxonomy of 46 poses,
organized into three main categories that consider the number and type of
supports as well as possible transitions between poses. The taxonomy induces a
classification of motion primitives based on the pose used for support, and a
set of rules to store and generate new motions. We present preliminary results
that apply known segmentation techniques to motion data from the KIT whole-body
motion database. Using motion capture data with multi-contacts, we can identify
support poses, providing a segmentation that distinguishes between the
locomotion and manipulation parts of an action.
Comment: 8 pages, 7 figures, 1 table with a full-page landscape figure; 2015
IEEE/RSJ International Conference on Intelligent Robots and Systems
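The contact-based segmentation this abstract describes can be illustrated with a minimal sketch: label each motion-capture frame with the set of end-effectors in contact (its support pose) and cut a segment wherever that set changes. All names and the sample data below are hypothetical, not taken from the KIT database or the paper's implementation.

```python
# Hypothetical sketch: segment a motion by its support pose, where a
# support pose is the set of end-effectors currently in contact.

def segment_by_support_pose(contacts):
    """contacts: list of frames, each a set of end-effectors in contact,
    e.g. {"left_foot", "right_foot"}.
    Returns a list of (start_frame, end_frame, pose) segments; a new
    segment begins whenever the contact set changes."""
    segments = []
    start = 0
    for i in range(1, len(contacts) + 1):
        if i == len(contacts) or contacts[i] != contacts[start]:
            segments.append((start, i, frozenset(contacts[start])))
            start = i
    return segments

frames = [
    {"left_foot", "right_foot"},   # double support
    {"left_foot", "right_foot"},
    {"left_foot"},                 # single support (a step)
    {"left_foot", "right_hand"},   # hand contact adds support
    {"left_foot", "right_hand"},
]
print(segment_by_support_pose(frames))
```

Runs of frames with hand or knee contacts would then mark candidate manipulation or multi-contact phases, while alternating foot-only poses mark locomotion.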
Analyzing Whole-Body Pose Transitions in Multi-Contact Motions
When executing whole-body motions, humans are able to use a large variety of
support poses which not only utilize the feet, but also hands, knees and elbows
to enhance stability. While there are many works analyzing the transitions
involved in walking, very few works analyze human motion where more complex
supports occur.
In this work, we analyze complex support pose transitions in human motion
involving locomotion and manipulation tasks (loco-manipulation). We have
applied a method for the detection of human support contacts from motion
capture data to a large-scale dataset of loco-manipulation motions involving
multi-contact supports, providing a semantic representation of them. Our
results provide a statistical analysis of the used support poses, their
transitions and the time spent in each of them. In addition, our data partially
validates our taxonomy of whole-body support poses presented in our previous
work.
We believe that this work extends our understanding of human motion for
humanoids, with a long-term objective of developing methods for autonomous
multi-contact motion planning.
Comment: 8 pages, IEEE-RAS International Conference on Humanoid Robots
(Humanoids) 201
Identifying important sensory feedback for learning locomotion skills
Robot motor skills can be acquired by deep reinforcement learning as neural networks to reflect state-action mapping. The selection of states has been demonstrated to be crucial for successful robot motor learning. However, because of the complexity of neural networks, human insights and engineering efforts are often required to select appropriate states through qualitative approaches, such as ablation studies, without a quantitative analysis of the state importance. Here we present a systematic saliency analysis that quantitatively evaluates the relative importance of different feedback states for motor skills learned through deep reinforcement learning. Our approach provides a guideline to identify the most essential feedback states for robot motor learning. By using only the important states including joint positions, gravity vector and base linear and angular velocities, we demonstrate that a simulated quadruped robot can learn various robust locomotion skills. We find that locomotion skills learned only with important states can achieve task performance comparable to the performance of those with more states. This work provides quantitative insights into the impacts of state observations on specific types of motor skills, enabling the learning of a wide range of motor skills with minimal sensing dependencies.
SCALER: Versatile Multi-Limbed Robot for Free-Climbing in Extreme Terrains
This paper presents SCALER, a versatile free-climbing multi-limbed robot that
is designed to achieve tightly coupled simultaneous locomotion and dexterous
grasping. Although existing quadruped-limbed robots have shown impressive
dexterous skills such as object manipulation, it is essential to balance
power-intensive locomotion and dexterous grasping capabilities. We design a
torso linkage and a parallel-serial limb to meet these conflicting
requirements, which pose unique challenges in hardware design. SCALER employs
underactuated
two-fingered GOAT grippers that can mechanically adapt and offer 7 modes of
grasping, enabling SCALER to traverse extreme terrains with multi-modal
grasping strategies. We study the whole-body approach, where SCALER uses its
body and limbs to generate additional forces for stable grasping with
environments, further enhancing versatility. Furthermore, we improve the GOAT
gripper actuation speed to realize more dynamic climbing in a closed-loop
control fashion. With these proposed technologies, SCALER can traverse
vertical, overhang, upside-down, slippery terrains, and bouldering walls with
non-convex-shaped climbing holds under the Earth's gravity.
Identifying Important Sensory Feedback for Learning Locomotion Skills
Robot motor skills can be learned through deep reinforcement learning (DRL)
by neural networks as state-action mappings. While the selection of state
observations is crucial, there has been a lack of quantitative analysis to
date. Here, we present a systematic saliency analysis that quantitatively
evaluates the relative importance of different feedback states for motor skills
learned through DRL. Our approach can identify the most essential feedback
states for locomotion skills, including balance recovery, trotting, bounding,
pacing and galloping. By using only key states including joint positions,
gravity vector, base linear and angular velocities, we demonstrate that a
simulated quadruped robot can achieve robust performance in various test
scenarios across these distinct skills. The benchmarks using task performance
metrics show that locomotion skills learned with key states can achieve
comparable performance to those with all states, and the task performance or
learning success rate will drop significantly if key states are missing. This
work provides quantitative insights into the relationship between state
observations and specific types of motor skills, serving as a guideline for
robot motor learning. The proposed method is applicable to differentiable
state-action mapping, such as neural network based control policies, enabling
the learning of a wide range of motor skills with minimal sensing
dependencies.
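The saliency idea in this abstract, scoring each feedback state by how strongly it influences the policy's actions, can be sketched with a toy example. The paper applies the analysis to trained neural-network policies; here a linear map stands in for the policy and finite differences stand in for gradients, so every name and number below is illustrative only.

```python
import numpy as np

# Illustrative perturbation-based saliency for a toy linear "policy".
# The score for state dimension j is the mean absolute sensitivity of
# the actions to a small change in that dimension, averaged over states.

rng = np.random.default_rng(0)
W = rng.normal(size=(12, 36))            # toy policy: action = W @ state
state_names = [f"s{i}" for i in range(36)]

def saliency(policy, states, eps=1e-4):
    """Finite-difference saliency of each state dimension, averaged
    over a batch of sampled states."""
    scores = np.zeros(states.shape[1])
    for s in states:
        base = policy(s)
        for j in range(len(s)):
            sp = s.copy()
            sp[j] += eps
            scores[j] += np.abs((policy(sp) - base) / eps).mean()
    return scores / len(states)

states = rng.normal(size=(16, 36))       # sampled feedback states
scores = saliency(lambda s: W @ s, states)
ranked = sorted(zip(state_names, scores), key=lambda t: -t[1])
print(ranked[:5])                        # most influential state dimensions
```

Dimensions with consistently high scores would be kept as "key states"; dropping low-scoring ones is the minimal-sensing idea the abstract benchmarks.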
Accessibility-Based Clustering for Efficient Learning of Locomotion Skills
For model-free deep reinforcement learning of quadruped locomotion, the initialization of robot configurations is crucial for data efficiency and robustness. This work focuses on algorithmic improvements of data efficiency and robustness simultaneously through automatic discovery of initial states, which is achieved by our proposed K-Access algorithm based on accessibility metrics. Specifically, we formulated accessibility metrics to measure the difficulty of transitions between two arbitrary states, and proposed a novel K-Access algorithm for state-space clustering that automatically discovers the centroids of the static-pose clusters based on the accessibility metrics. By using the discovered centroidal static poses as the initial states, we can improve data efficiency by reducing redundant explorations, and enhance the robustness by more effective explorations from the centroids to sampled poses. Focusing on fall recovery as a very hard set of locomotion skills, we validated our method extensively using an 8-DoF quadrupedal robot Bittle. Compared to the baselines, the learning curve of our method converges much faster, requiring only 60% of training episodes. With our method, the robot can successfully recover to standing poses within 3 seconds in 99.4% of the test cases. Moreover, the method can generalize to other difficult skills successfully, such as backflipping.
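The clustering step this abstract describes can be sketched as a k-medoids-style loop over an asymmetric cost matrix, where cost[i, j] plays the role of the accessibility metric (how hard it is to reach state j from state i). This is a simplified stand-in, not the paper's K-Access algorithm: the toy cost below is a distance plus an artificial "uphill" penalty, and all names are illustrative.

```python
import numpy as np

# Simplified k-medoids-style clustering under an asymmetric
# "accessibility" cost, sketching the idea behind K-Access.

def k_access(cost, k, iters=20, seed=0):
    """cost[i, j]: difficulty of reaching state j from state i (may be
    asymmetric). Returns indices of k centroid states."""
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    centroids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        # assign each state to the centroid that reaches it most easily
        assign = np.argmin(cost[centroids], axis=0)
        new = []
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members) == 0:
                new.append(centroids[c])
                continue
            # new centroid: member from which the rest are cheapest to reach
            within = cost[np.ix_(members, members)].sum(axis=1)
            new.append(members[np.argmin(within)])
        new = np.array(new)
        if np.array_equal(np.sort(new), np.sort(centroids)):
            break
        centroids = new
    return centroids

# toy "state space": two well-separated groups of poses in 2-D
pts = np.concatenate([np.random.default_rng(1).normal(0, 0.1, (5, 2)),
                      np.random.default_rng(2).normal(3, 0.1, (5, 2))])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
# asymmetric surrogate cost: moving toward larger x is slightly harder
cost = d + 0.1 * (pts[None, :, 0] - pts[:, None, 0]).clip(min=0)
print(sorted(k_access(cost, k=2)))
```

The discovered centroids would then serve as the initial robot configurations for training, which is where the data-efficiency gain in the abstract comes from.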