XSkill: Cross Embodiment Skill Discovery
Human demonstration videos are a widely available data source for robot
learning and an intuitive user interface for expressing desired behavior.
However, directly extracting reusable robot manipulation skills from
unstructured human videos is challenging due to the significant embodiment difference
and unobserved action parameters. To bridge this embodiment gap, this paper
introduces XSkill, an imitation learning framework that 1) discovers a
cross-embodiment representation called skill prototypes purely from unlabeled
human and robot manipulation videos, 2) transfers the skill representation to
robot actions using a conditional diffusion policy, and finally, 3) composes the
learned skills to accomplish unseen tasks specified by a human prompt video. Our
experiments in simulation and real-world environments show that the discovered
skill prototypes facilitate both skill transfer and composition for unseen
tasks, resulting in a more general and scalable imitation learning framework.
The benchmark, code, and qualitative results are available at
https://xskill.cs.columbia.edu
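The core idea of a shared skill-prototype space can be sketched as follows. This is an illustrative toy, not XSkill's actual implementation: the embedding dimensions, prototype count, and function names (`assign_prototype`) are all assumptions. The point is that human and robot video-frame embeddings are mapped into one space and assigned to the nearest of a common set of prototype vectors, so a human prompt video reduces to an embodiment-agnostic sequence of skill indices that a robot policy can be conditioned on.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_PROTOTYPES, EMBED_DIM = 8, 16  # illustrative sizes

# Shared prototype vectors, normalized to the unit sphere.
prototypes = rng.normal(size=(NUM_PROTOTYPES, EMBED_DIM))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def assign_prototype(frame_embedding: np.ndarray) -> int:
    """Assign a video-frame embedding (human OR robot) to its nearest
    skill prototype by cosine similarity in the shared space."""
    z = frame_embedding / np.linalg.norm(frame_embedding)
    return int(np.argmax(prototypes @ z))

# A human prompt video becomes a sequence of prototype indices; a robot
# policy conditioned on these indices can then reproduce the skills.
human_video = rng.normal(size=(5, EMBED_DIM))  # 5 frame embeddings
skill_sequence = [assign_prototype(f) for f in human_video]
print(skill_sequence)
```

In the paper the prototypes are discovered from unlabeled videos and the conditioning is done with a diffusion policy; here both are stubbed out to show only the cross-embodiment indexing step.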
Sequential Dexterity: Chaining Dexterous Policies for Long-Horizon Manipulation
Many real-world manipulation tasks consist of a series of subtasks that are
significantly different from one another. Such long-horizon, complex tasks
highlight the potential of dexterous hands, which possess adaptability and
versatility, capable of seamlessly transitioning between different modes of
functionality without the need for re-grasping or external tools. However,
challenges arise from the high-dimensional action space of dexterous hands and
the complex compositional dynamics of long-horizon tasks. We present Sequential
Dexterity, a general system based on reinforcement learning (RL) that chains
multiple dexterous policies for achieving long-horizon task goals. The core of
the system is a transition feasibility function that progressively fine-tunes
the sub-policies to enhance the chaining success rate, while also enabling
autonomous policy switching for recovery from failures and bypassing redundant
stages. Despite being trained only in simulation with a few task objects, our
system demonstrates generalization capability to novel object shapes and is
able to zero-shot transfer to a real-world robot equipped with a dexterous
hand. More details and video results can be found at
https://sequential-dexterity.github.io
Comment: CoRL 202
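The chaining logic can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's system: `run_chain`, the threshold, and the retry scheme are hypothetical. It shows the role of a transition feasibility function: before handing control to the next sub-policy, a learned score checks whether the current state is a viable starting point for it; a low score triggers a retry of the current stage (recovery from failure) rather than a doomed hand-off.

```python
def run_chain(state, sub_policies, feasibility, threshold=0.5, max_retries=2):
    """Execute sub-policies in order, switching to the next stage only when
    the transition feasibility score clears the threshold."""
    i = 0
    while i < len(sub_policies):
        retries = 0
        state = sub_policies[i](state)
        # Retry the current stage until its outcome is a feasible start
        # for the next one, up to a retry budget.
        while i + 1 < len(sub_policies) and feasibility(i + 1, state) < threshold:
            if retries >= max_retries:
                return None, i  # give up: chain broke at stage i
            state = sub_policies[i](state)
            retries += 1
        i += 1
    return state, len(sub_policies)

# Toy usage: each "policy" advances an integer state by one stage, and
# feasibility says state s is a valid start for stage j once s >= j.
policies = [lambda s: s + 1 for _ in range(3)]
feas = lambda j, s: 1.0 if s >= j else 0.0
final_state, stages_done = run_chain(0, policies, feas)
```

In the actual system the feasibility function is learned and also used to fine-tune the sub-policies and to skip redundant stages; the sketch keeps only the gating behavior.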
Learning Control Policies for Fall Prevention and Safety in Bipedal Locomotion
The ability to recover from an unexpected external perturbation is a fundamental motor skill in bipedal locomotion. An effective response includes not just the ability to recover balance and maintain stability, but also to fall in a safe manner when balance recovery is physically infeasible. For robots engaged in bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, controllers that provide this stability and safety can prevent damage to the robots and avoid injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear, and under-actuated system with contacts. Despite prior advances in model-based and optimization methods, challenges such as the requirement for extensive domain knowledge, relatively long computation times, and limited robustness to changes in dynamics keep this an open problem.
In this thesis, to address these issues we develop learning-based algorithms capable of synthesizing push-recovery control policies for two kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work branches into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using robotic assistive devices. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety while using these robots. To enable efficient learning, we present techniques that incorporate abstract dynamical models, curriculum learning, and a novel method of building a graph of policies into the learning framework. We also propose an approach to create virtual human walking agents that exhibit gait characteristics similar to those of real-world human subjects, with which we learn an assistive-device controller that helps the virtual human return to steady-state walking after an external push is applied.
Finally, we extend our work on assistive devices to address the challenge of transferring a push-recovery policy to different individuals. Because walking and recovery characteristics differ significantly between individuals, exoskeleton policies must be fine-tuned for each person, which is a tedious, time-consuming, and potentially unsafe process. We propose to solve this by posing it as a transfer learning problem, where a policy trained for one individual can adapt to another without fine-tuning.
Ph.D
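One way to read the transfer idea is as subject conditioning: rather than a separate policy per person, a single shared policy takes a vector of individual gait characteristics as an extra input, so adapting to a new person means measuring their features, not re-training. The sketch below is a hypothetical illustration of that structure only; the linear policy, feature choices, and names are assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, FEAT_DIM, ACTION_DIM = 6, 3, 2  # illustrative sizes

# One weight matrix shared across all subjects (stands in for a trained policy).
W = rng.normal(size=(ACTION_DIM, STATE_DIM + FEAT_DIM))

def assist_action(state: np.ndarray, subject_features: np.ndarray) -> np.ndarray:
    """Assistive action from the shared policy, conditioned on the subject's
    gait-characteristic vector appended to the dynamic state."""
    x = np.concatenate([state, subject_features])
    return W @ x

# "Transfer" to a new individual: only the feature vector changes,
# the policy weights W are untouched (no fine-tuning).
state = rng.normal(size=STATE_DIM)
person_a = np.array([1.2, 0.9, 0.45])  # e.g. stride length, cadence, ... (made up)
person_b = np.array([1.0, 1.1, 0.40])
a_action = assist_action(state, person_a)
b_action = assist_action(state, person_b)
```

The same policy produces different assistance for the two feature vectors, which is the behavior the transfer-learning formulation aims for without per-person fine-tuning.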