A Generative Human-Robot Motion Retargeting Approach Using a Single RGBD Sensor
The goal of human-robot motion retargeting is to make a robot follow the movements performed by a human subject. In previous approaches, the human poses are typically precomputed by a human pose tracking system, after which explicit joint-mapping strategies are specified to apply the estimated poses to a target robot. However, there is no generic mapping strategy for transferring human joints to robots with different kinds of configurations. In this paper, we present a novel motion retargeting approach that combines human pose estimation and the motion retargeting procedure in a unified generative framework, without relying on any explicit mapping. First, a 3D parametric human-robot (HUMROB) model is proposed which has the same joint and stability configurations as the target robot while its shape conforms to the source human subject. The robot configurations, including its skeleton proportions, joint limits, and degrees of freedom (DoFs), are enforced in the HUMROB model and preserved during the tracking procedure. Using a single RGBD camera to monitor the human pose, we take the raw RGB and depth sequences as input. The HUMROB model is deformed to fit the input point cloud, from which the model's joint angles are calculated and applied to the target robot for retargeting. In this way, instead of being fitted individually for each joint, the robot's joint angles are fitted globally so that the surface of the deformed model is as consistent as possible with the input point cloud. As a result, no explicit or pre-defined joint-mapping strategies are needed. To demonstrate its effectiveness for human-robot motion retargeting, the approach is tested both in simulation and on real robots whose skeleton configurations and joint DoFs differ considerably from those of the source human subjects.
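The global-fitting idea described above can be illustrated with a minimal sketch. This is not the paper's HUMROB pipeline: it uses a hypothetical planar 2-link arm, illustrative link lengths and joint limits, and a nearest-neighbor point-cloud residual, but it shows the key contrast with per-joint fitting — all joint angles are optimized at once, under joint-limit constraints, so that points sampled on the model surface match the observed cloud.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy stand-in for the HUMROB fitting step: a planar 2-link
# arm whose joint angles are fitted *globally* so that points sampled
# along its links match an observed point cloud, subject to joint limits.
# Link lengths and limits below are illustrative assumptions.
LINK_LENGTHS = [1.0, 0.8]
JOINT_LIMITS = [(-np.pi / 2, np.pi / 2), (0.0, np.pi * 0.75)]

def sample_surface(angles, pts_per_link=10):
    """Sample points along each link of the arm (its 'surface')."""
    pts, base, heading = [], np.zeros(2), 0.0
    for length, theta in zip(LINK_LENGTHS, angles):
        heading += theta
        direction = np.array([np.cos(heading), np.sin(heading)])
        for t in np.linspace(0.0, 1.0, pts_per_link):
            pts.append(base + t * length * direction)
        base = base + length * direction
    return np.array(pts)

def cloud_residual(angles, cloud):
    """Sum of squared distances from each model point to its nearest cloud point."""
    model = sample_surface(angles)
    d = np.linalg.norm(model[:, None, :] - cloud[None, :, :], axis=-1)
    return np.sum(d.min(axis=1) ** 2)

def fit_joint_angles(cloud, init=(0.0, 0.1)):
    """Fit all joint angles at once, respecting joint limits."""
    res = minimize(cloud_residual, np.array(init), args=(cloud,),
                   bounds=JOINT_LIMITS, method="L-BFGS-B")
    return res.x

# Synthesize a noisy 'point cloud' from known angles, then recover them.
true_angles = np.array([0.4, 0.9])
cloud = sample_surface(true_angles) + 0.005 * np.random.default_rng(0).normal(size=(20, 2))
est = fit_joint_angles(cloud)
```

Because the objective couples all angles through the surface residual, the bounds in `JOINT_LIMITS` play the role of the robot's joint-limit configuration: the recovered angles are always feasible for the (toy) target robot.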
Generalized Anthropomorphic Functional Grasping with Minimal Demonstrations
This article investigates the challenge of achieving functional tool-use
grasping with high-DoF anthropomorphic hands, with the aim of enabling
anthropomorphic hands to perform tasks that require human-like manipulation and
tool-use. However, accomplishing human-like grasping on real robots presents
many challenges, including obtaining diverse functional grasps for a wide variety of objects, generalizing across kinematically diverse robot hands, and precisely completing object shapes from single-view perception. To tackle these challenges, we propose a six-step grasp synthesis
algorithm based on fine-grained contact modeling that generates physically
plausible and human-like functional grasps for category-level objects with
minimal human demonstrations. With contact-based optimization and learned dense shape correspondence, the proposed algorithm is adaptable to various objects in the same category and a broad range of robot hand models. To further
demonstrate the robustness of the framework, over 10K functional grasps are
synthesized to train our neural network, named DexFG-Net, which generates
diverse sets of human-like functional grasps based on the reconstructed object
model produced by a shape completion module. The proposed framework is
extensively validated in simulation and on a real robot platform. Simulation
experiments demonstrate that our method outperforms baseline methods by a large
margin in terms of grasp functionality and success rate. Real robot experiments
show that our method achieved overall success rates of 79% and 68% for tool-use grasps on 3-D printed and real test objects, respectively, using a 5-Finger Schunk Hand. The experimental results indicate a step towards human-like grasping with anthropomorphic hands.
Comment: 20 pages, 23 figures and 7 tables
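One ingredient of contact-based grasp optimization like the above can be sketched in isolation: testing whether a pair of contact points forms an antipodal grasp under a Coulomb friction model. This is only a minimal illustrative check, not the DexFG-Net pipeline; the friction coefficient and the geometry are assumptions for the example.

```python
import numpy as np

def antipodal_score(p1, n1, p2, n2, mu=0.5):
    """True if the line between contacts lies inside both friction cones.

    p1, p2: contact points; n1, n2: inward-pointing unit surface normals;
    mu: assumed Coulomb friction coefficient.
    """
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)  # friction cone half-angle
    # Angle between the grasp axis and each contact's inward normal.
    a1 = np.arccos(np.clip(np.dot(axis, n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, n2), -1.0, 1.0))
    return bool(a1 <= half_angle and a2 <= half_angle)

# Opposing faces of a box: a valid antipodal pair...
ok = antipodal_score(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                     np.array([1., 0., 0.]), np.array([-1., 0., 0.]))
# ...versus a skewed pair whose grasp axis exits the friction cone.
bad = antipodal_score(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                      np.array([1., 1., 0.]), np.array([-1., 0., 0.]))
```

A fine-grained contact model generalizes this binary test to many contact patches and soft-finger effects, but the friction-cone constraint remains the core feasibility condition a grasp optimizer must satisfy.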
Lifelike Agility and Play on Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models
Summarizing knowledge from animals and human beings inspires robotic innovations. In this work, we propose a framework for driving legged robots to act like real animals, with lifelike agility and strategy, in complex environments.
Inspired by large pre-trained models that have shown impressive performance in language and image understanding, we harness advanced deep generative models to produce motor control signals that drive legged robots to
act like real animals. Unlike conventional controllers and end-to-end RL
methods that are task-specific, we propose to pre-train generative models over
animal motion datasets to preserve expressive knowledge of animal behavior. The
pre-trained model holds sufficient primitive-level knowledge yet is
environment-agnostic. It is then reused for a successive stage of learning to
align with the environments by traversing a number of challenging obstacles
that are rarely considered in previous approaches, including creeping through
narrow spaces, jumping over hurdles, freerunning over scattered blocks, etc.
Finally, a task-specific controller is trained to solve complex downstream
tasks by reusing the knowledge from previous stages. Enriching the knowledge
regarding each stage does not affect the usage of other levels of knowledge.
This flexible framework offers the possibility of continual knowledge
accumulation at different levels. We successfully apply the trained multi-level
controllers to the MAX robot, a quadrupedal robot developed in-house, to mimic
animals, traverse complex obstacles, and play in a designed challenging
multi-agent Chase Tag Game, where lifelike agility and strategy emerge on the
robots. The present research pushes the frontier of robot control with new
insights on reusing multi-level pre-trained knowledge and solving highly
complex downstream tasks in the real world.
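The multi-level reuse structure described above can be sketched as plain code. This is a structural illustration only, not the paper's implementation: the three levels are stand-in linear maps with assumed dimensions, but the division of labor — a frozen primitive-level prior, an environment-level policy choosing latents for it, and a task-level policy choosing goals — mirrors the staged pipeline.

```python
import numpy as np

# Structural sketch (assumed dimensions, stand-in linear maps) of the
# three-level reuse idea: a frozen pre-trained motion prior decodes
# latents into motor commands, an environment-level policy chooses
# latents, and a task-level policy chooses goals.
rng = np.random.default_rng(0)

class MotionPrior:
    """Frozen primitive level: latent code -> joint-space command."""
    def __init__(self, latent_dim=8, n_joints=12):
        self.W = rng.normal(size=(n_joints, latent_dim))
    def decode(self, z):
        return self.W @ z  # environment-agnostic motor pattern

class EnvController:
    """Mid level: maps (state, goal) to a latent for the frozen prior."""
    def __init__(self, state_dim=16, goal_dim=3, latent_dim=8):
        self.W = rng.normal(size=(latent_dim, state_dim + goal_dim))
    def act(self, state, goal):
        return self.W @ np.concatenate([state, goal])

class TaskController:
    """Top level: maps a task observation to a goal for the mid level."""
    def __init__(self, obs_dim=10, goal_dim=3):
        self.W = rng.normal(size=(goal_dim, obs_dim))
    def act(self, obs):
        return self.W @ obs

prior, env_ctrl, task_ctrl = MotionPrior(), EnvController(), TaskController()
obs, state = rng.normal(size=10), rng.normal(size=16)
goal = task_ctrl.act(obs)        # task level picks a goal
z = env_ctrl.act(state, goal)    # environment level picks a primitive
command = prior.decode(z)        # frozen prior emits 12 joint targets
```

Because each level communicates only through a narrow interface (goals and latents), retraining or enriching one level leaves the others untouched, which is what enables continual knowledge accumulation across stages.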