Plan-Guided Reinforcement Learning for Whole-Body Manipulation
Synthesizing complex whole-body manipulation behaviors presents fundamental
challenges due to the rapidly growing combinatorics inherent to contact
interaction planning. While model-based methods have shown promising results in
solving long-horizon manipulation tasks, they often work under strict
assumptions, such as known model parameters, oracular observation of the
environment state, and simplified dynamics, resulting in plans that cannot
easily transfer to hardware. Learning-based approaches, such as imitation
learning (IL) and reinforcement learning (RL), have been shown to be robust
when operating over in-distribution states; however, they need heavy human
supervision. Specifically, model-free RL requires a tedious reward-shaping
process. IL methods, on the other hand, rely on human demonstrations that
involve advanced teleoperation methods. In this work, we propose a plan-guided
reinforcement learning (PGRL) framework to combine the advantages of
model-based planning and reinforcement learning. Our method requires minimal
human supervision because it relies on plans generated by model-based planners
to guide the exploration in RL. In exchange, RL derives a more robust policy
thanks to domain randomization. We test this approach on a whole-body
manipulation task on Punyo, an upper-body humanoid robot with compliant,
air-filled arm coverings, to pivot and lift a large box. Our preliminary
results indicate that the proposed methodology is promising for addressing
challenges that remain difficult for either model- or learning-based strategies
alone.
Comment: 4 pages, 4 figures
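The plan-guided reward idea described in this abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names, the quadratic plan-tracking term, and the blending weight are all assumptions. A policy is rewarded both for staying close to a model-based plan (guiding exploration) and for task success, while model parameters are randomized per episode for robustness.

```python
import random

def plan_guided_reward(state, plan_state, task_reward, w_track=0.5):
    """Blend a plan-tracking term with the task reward (hypothetical form).

    The tracking term penalizes squared distance to the planner's reference
    state, steering exploration without hand-crafted reward shaping.
    """
    track = -sum((s - p) ** 2 for s, p in zip(state, plan_state))
    return w_track * track + (1.0 - w_track) * task_reward

def randomize_params(nominal, scale=0.1, seed=None):
    """Domain randomization: perturb nominal model parameters each episode."""
    rng = random.Random(seed)
    return {k: v * (1.0 + rng.uniform(-scale, scale)) for k, v in nominal.items()}
```

In an RL training loop, `randomize_params` would be called at each episode reset and `plan_guided_reward` at each step, with `plan_state` read from the planner's trajectory.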
Learning for a robot: deep reinforcement learning, imitation learning, transfer learning
Dexterous manipulation is an important part of realizing robot intelligence, yet most manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots with the capability of autonomous decision-making and learning. The paper first reviews the main achievements of robotics research, which were based largely on breakthroughs in automatic control and hardware. With the evolution of artificial intelligence, many studies have made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
DEFT: Dexterous Fine-Tuning for Real-World Hand Policies
Dexterity is often seen as a cornerstone of complex manipulation. Humans are
able to perform a host of skills with their hands, from making food to
operating tools. In this paper, we investigate these challenges, especially in
the case of soft, deformable objects as well as complex, relatively
long-horizon tasks. However, learning such behaviors from scratch can be data
inefficient. To circumvent this, we propose a novel approach, DEFT (DExterous
Fine-Tuning for Hand Policies), that leverages human-driven priors, which are
executed directly in the real world. In order to improve upon these priors,
DEFT involves an efficient online optimization procedure. With the integration
of human-based learning and online fine-tuning, coupled with a soft robotic
hand, DEFT demonstrates success across various tasks, establishing a robust,
data-efficient pathway toward general dexterous manipulation. Please see our
website at https://dexterous-finetuning.github.io for video results.
Comment: In CoRL 2023. Website at https://dexterous-finetuning.github.io
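The "human prior plus online fine-tuning" recipe this abstract describes can be sketched with a cross-entropy-method-style loop. This is a minimal sketch under stated assumptions, not the DEFT implementation: the prior is a flat parameter vector (e.g. a grasp pose), `score_fn` stands in for real-world rollout evaluation, and the fixed-variance CEM update is an illustrative choice.

```python
import random

def finetune_prior(prior, score_fn, iters=20, pop=16, sigma=0.3, elite=4, seed=0):
    """CEM-style refinement of a human-provided parameter prior.

    Each iteration samples perturbations around the current mean, scores
    them with `score_fn` (a stand-in for executing a rollout), and moves
    the mean toward the best-scoring elites.
    """
    rng = random.Random(seed)
    mean = list(prior)
    for _ in range(iters):
        samples = [[m + rng.gauss(0, sigma) for m in mean] for _ in range(pop)]
        samples.sort(key=score_fn, reverse=True)
        elites = samples[:elite]
        mean = [sum(col) / elite for col in zip(*elites)]
    return mean
```

The key design point mirrored from the abstract: optimization starts from a human-driven prior rather than from scratch, so relatively few real-world trials are needed.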
Learning Generalizable Dexterous Manipulation from Human Grasp Affordance
Dexterous manipulation with a multi-finger hand is one of the most
challenging problems in robotics. While recent progress in imitation learning
has greatly improved sample efficiency compared to reinforcement learning,
the learned policy can hardly generalize to manipulate novel objects, given
limited expert demonstrations. In this paper, we propose to learn dexterous
manipulation using large-scale demonstrations with diverse 3D objects in a
category, which are generated from a human grasp affordance model. This
generalizes the policy to novel object instances within the same category. To
train the policy, we propose a novel imitation learning objective jointly with
a geometric representation learning objective using our demonstrations. By
experimenting with relocating diverse objects in simulation, we show that our
approach outperforms baselines by a large margin when manipulating novel
objects. We also ablate the importance of 3D object representation learning for
manipulation. We include videos, code, and additional information on the
project website - https://kristery.github.io/ILAD/ .
Comment: project page: https://kristery.github.io/ILAD
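The joint objective this abstract mentions, imitation learning combined with geometric representation learning, can be sketched as a weighted sum of two losses. This is illustrative only: the behavior-cloning term, the reconstruction-style geometric term, and the weight `w_geo` are assumptions standing in for the paper's actual objectives.

```python
def joint_imitation_loss(pred_actions, demo_actions, obj_embed, obj_recon, w_geo=0.1):
    """Combined objective: behavior cloning + geometric representation term.

    The BC term is mean squared error against demonstrated actions; the
    geometric term (here, reconstruction error on a 3D object embedding)
    stands in for the representation learning objective.
    """
    bc = sum((a - d) ** 2 for a, d in zip(pred_actions, demo_actions)) / len(demo_actions)
    geo = sum((e - r) ** 2 for e, r in zip(obj_embed, obj_recon)) / len(obj_embed)
    return bc + w_geo * geo
```

Training the policy and the object representation jointly is what lets the representation stay useful for manipulation rather than for reconstruction alone, which is the intuition the abstract's ablation probes.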