
    Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations

    We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder using 3D human demonstrations. We first convert large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we perform sampling with CGF to generate different grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, our CGF allows generalization to manipulate multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significant improvement in success rate when transferred to grasping with the real Allegro Hand. Our project page is at https://jianglongye.com/cgf. Comment: Project page: https://jianglongye.com/cgf
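
    A minimal sketch of how a CVAE-style continuous grasping function could be structured: a decoder queried at arbitrary continuous times produces hand poses along a smooth trajectory. The module names, layer sizes, and the 22-dimensional hand pose below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a CVAE-style continuous grasping function.
# Layer sizes and the hand-pose dimension are assumptions for illustration.
import torch
import torch.nn as nn

class ContinuousGraspingFunction(nn.Module):
    def __init__(self, obj_feat_dim=256, latent_dim=64, pose_dim=22):
        super().__init__()
        # Encoder: (object feature, time, hand pose) -> latent distribution.
        self.encoder = nn.Sequential(
            nn.Linear(obj_feat_dim + 1 + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Decoder: (object feature, time, latent) -> hand pose at time t.
        self.decoder = nn.Sequential(
            nn.Linear(obj_feat_dim + 1 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, obj_feat, t, pose):
        h = self.encoder(torch.cat([obj_feat, t, pose], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.decoder(torch.cat([obj_feat, t, z], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample_plan(self, obj_feat, num_steps=50):
        # Continuous time input: query the decoder at any t in [0, 1]
        # to obtain a smooth grasping trajectory for one sampled latent.
        z = torch.randn(obj_feat.shape[0], self.latent_dim)
        ts = torch.linspace(0.0, 1.0, num_steps)
        return torch.stack([
            self.decoder(torch.cat(
                [obj_feat, t.expand(obj_feat.shape[0], 1), z], dim=-1))
            for t in ts
        ], dim=1)  # (batch, num_steps, pose_dim)
```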

    In-Hand Object Rotation via Rapid Motor Adaptation

    Generalized in-hand manipulation has long been an unsolved challenge in robotics. As a small step towards this grand goal, we demonstrate how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips. The controller is trained entirely in simulation on only cylindrical objects, and can then be directly deployed, without any fine-tuning, to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights about the z-axis. This is achieved via rapid online adaptation of the controller to the object properties using only proprioception history. Furthermore, natural and stable finger gaits automatically emerge from training the control policy via reinforcement learning. Code and more videos are available at https://haozhi.io/hora. Comment: CoRL 2022. Code and Website: https://haozhi.io/hora
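
    For illustration, a rough sketch of the rapid-adaptation deployment pattern described above: a small network estimates a latent extrinsics vector from recent proprioception, and the policy consumes it alongside the current observation. Dimensions and module names are assumptions, not the released code.

```python
# Illustrative rapid-motor-adaptation-style deployment loop. All sizes
# and names are assumptions for the sketch, not the authors' code.
import torch
import torch.nn as nn

PROPRIO_DIM, HISTORY_LEN, EXTRINSICS_DIM, ACTION_DIM = 48, 30, 8, 16

class AdaptationModule(nn.Module):
    """Maps a window of proprioception history to an extrinsics estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(PROPRIO_DIM * HISTORY_LEN, 256), nn.ReLU(),
            nn.Linear(256, EXTRINSICS_DIM),
        )

    def forward(self, proprio_history):          # (B, HISTORY_LEN, PROPRIO_DIM)
        return self.net(proprio_history)         # (B, EXTRINSICS_DIM)

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PROPRIO_DIM + EXTRINSICS_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM),
        )

    def forward(self, proprio, extrinsics):
        return self.net(torch.cat([proprio, extrinsics], dim=-1))

# Online loop (random tensors stand in for sensor readings): the controller
# adapts to the object purely from its own recent proprioceptive history.
adapt, policy = AdaptationModule(), Policy()
history = torch.zeros(1, HISTORY_LEN, PROPRIO_DIM)
for step in range(5):
    proprio = torch.randn(1, PROPRIO_DIM)        # joint positions, targets, ...
    history = torch.cat([history[:, 1:], proprio.unsqueeze(1)], dim=1)
    action = policy(proprio, adapt(history))
```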

    General In-Hand Object Rotation with Vision and Touch

    We introduce RotateIt, a system that enables fingertip-based object rotation along multiple axes by leveraging multimodal sensory inputs. Our system is trained in simulation, where it has access to ground-truth object shapes and physical properties. Then we distill it to operate on realistic yet noisy simulated visuotactile and proprioceptive sensory inputs. These multimodal inputs are fused via a visuotactile transformer, enabling online inference of object shapes and physical properties during deployment. We show significant performance improvements over prior methods and demonstrate the importance of visual and tactile sensing. Comment: CoRL 2023; Website: https://haozhi.io/rotateit
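
    A hedged sketch of transformer-based visuotactile fusion in the spirit of the system above: visual, tactile, and proprioceptive features become tokens that are fused by self-attention. The token layout and feature dimensions are assumptions for illustration only.

```python
# Rough sketch of multimodal fusion with a transformer encoder.
# Dimensions and token layout are assumptions, not RotateIt's architecture.
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.vis_proj = nn.Linear(256, d_model)     # per-view visual feature
        self.tac_proj = nn.Linear(16, d_model)      # per-fingertip tactile reading
        self.prop_proj = nn.Linear(48, d_model)     # proprioception
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, visual, tactile, proprio):
        # visual: (B, V, 256), tactile: (B, F, 16), proprio: (B, 48)
        tokens = torch.cat([
            self.vis_proj(visual),
            self.tac_proj(tactile),
            self.prop_proj(proprio).unsqueeze(1),
        ], dim=1)
        fused = self.encoder(tokens)                # (B, V + F + 1, d_model)
        return fused.mean(dim=1)                    # pooled embedding for the policy

# Example: 2 camera views and 4 fingertip tactile sensors.
emb = VisuoTactileFusion()(torch.randn(1, 2, 256),
                           torch.randn(1, 4, 16),
                           torch.randn(1, 48))
```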

    Grasp Multiple Objects with One Hand

    The human hand's complex kinematics allow for simultaneous grasping and manipulation of multiple objects, which is essential for tasks like object transfer and in-hand manipulation. Despite its importance, robotic multi-object grasping remains underexplored and presents challenges in kinematics, dynamics, and object configurations. This paper introduces MultiGrasp, a two-stage method for multi-object grasping on a tabletop with a multi-finger dexterous hand. It involves (i) generating pre-grasp proposals and (ii) executing the grasp and lifting the objects. Experimental results primarily focus on dual-object grasping and report a 44.13% success rate, showcasing adaptability to unseen object configurations and imprecise grasps. The framework also demonstrates the capability to grasp more than two objects, albeit at a reduced inference speed.
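
    As an illustration of the two-stage structure described above, here is a hypothetical pipeline skeleton; the function names, data types, and scoring interface are placeholders for the sketch, not the MultiGrasp code.

```python
# Skeleton of a two-stage multi-object grasping pipeline (hypothetical):
# stage (i) proposes pre-grasp hand configurations, stage (ii) executes
# the grasp and lift and reports whether all objects are held.
from dataclasses import dataclass
from typing import List

@dataclass
class PreGrasp:
    wrist_pose: List[float]     # 6-DoF wrist pose
    joint_angles: List[float]   # finger joint configuration
    score: float                # learned grasp-quality score

def propose_pregrasps(object_point_clouds, num_proposals=32) -> List[PreGrasp]:
    """Stage (i): sample candidate pre-grasps, e.g. from a generative model."""
    raise NotImplementedError   # placeholder for a learned proposal network

def execute_and_lift(pregrasp: PreGrasp, env) -> bool:
    """Stage (ii): close the fingers, lift, and check that all objects are held."""
    raise NotImplementedError   # placeholder for a grasp execution policy

def multigrasp(object_point_clouds, env) -> bool:
    # Try proposals in order of predicted quality until one succeeds.
    proposals = sorted(propose_pregrasps(object_point_clouds),
                       key=lambda g: g.score, reverse=True)
    return any(execute_and_lift(g, env) for g in proposals)
```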

    Extrinsic Dexterity: In-Hand Manipulation with External Forces

    Abstract — “In-hand manipulation” is the ability to reposition an object in the hand, for example when adjusting the grasp of a hammer before hammering a nail. The common approach to in-hand manipulation with robotic hands, known as dexterous manipulation [1], is to hold an object within the fingertips of the hand and wiggle the fingers, or walk them along the object’s surface. Dexterous manipulation, however, is just one of the many techniques available to the robot. The robot can also roll the object in the hand by using gravity, adjust the object’s pose by pressing it against a surface, or, if fast enough, even toss the object in the air and catch it in a different pose. All these techniques have one thing in common: they rely on resources extrinsic to the hand, whether gravity, external contacts, or dynamic arm motions. We refer to them as “extrinsic dexterity”. In this paper we study extrinsic dexterity in the context of regrasp operations, for example when switching from a power to a precision grasp, and we demonstrate that even simple grippers are capable of ample in-hand manipulation. We develop twelve regrasp actions, all open-loop and hand-scripted, and evaluate their effectiveness with over 1200 trials of regrasps and sequences of regrasps for three different objects (see video [2]). The long-term goal of this work is to develop a general repertoire of these behaviors, and to understand how such a repertoire might eventually constitute a general-purpose in-hand manipulation capability.
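
    To make the idea of an open-loop, hand-scripted regrasp concrete, here is a hypothetical primitive that pivots an object against a support surface; the robot API used here (move_to, set_gripper, wait) is an assumption for the sketch, not the paper's interface.

```python
# Illustrative open-loop, hand-scripted regrasp primitive in the spirit of
# "extrinsic dexterity": the gripper exploits an external surface to pivot
# the object, with no sensor feedback during execution.
# The robot API below is hypothetical.
def pivot_against_table(robot, approach_pose, press_pose, regrasp_pose):
    """Open-loop regrasp: press the object on the table to rotate it, then re-grip."""
    robot.move_to(approach_pose)      # bring the grasped object above the table
    robot.move_to(press_pose)         # press an object edge against the surface
    robot.set_gripper(opening=0.06)   # loosen the grip so the object can pivot
    robot.wait(0.5)                   # let gravity and contact settle the object
    robot.move_to(regrasp_pose)       # reposition the fingers around the new pose
    robot.set_gripper(opening=0.0)    # close firmly in the new grasp
```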

    Progressive Transfer Learning for Dexterous In-Hand Manipulation with Multi-Fingered Anthropomorphic Hand

    Dexterous in-hand manipulation with a multi-fingered anthropomorphic hand is extremely difficult because of the high-dimensional state and action spaces and the rich contact patterns between the fingers and objects. Even though deep reinforcement learning has made moderate progress and demonstrated strong potential for manipulation, it still faces challenges such as large-scale data collection and high sample complexity. In particular, for scenes with even slight changes, it typically needs to re-collect vast amounts of data and run numerous iterations of fine-tuning. Remarkably, humans can quickly transfer learned manipulation skills to different scenarios with little supervision. Inspired by this flexible transfer capability, we propose a novel progressive transfer learning framework (PTL) for dexterous in-hand manipulation that efficiently utilizes the collected trajectories and the source-trained dynamics model. The framework adopts progressive neural networks for dynamics model transfer learning, trained on samples chosen by a new sample selection method based on dynamics properties, rewards, and trajectory scores. Experimental results on contact-rich anthropomorphic hand manipulation tasks show that our method can efficiently and effectively learn in-hand manipulation skills with only a few online attempts and a small amount of adjustment learning in the new scene. Compared to learning from scratch, our method reduces training time costs by 95%. Comment: 12 pages, 7 figures, submitted to TNNL
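
    A minimal sketch of a progressive-network-style dynamics model and trajectory selection in the spirit of the framework above; the layer sizes, state/action dimensions, and scoring interface are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical progressive-network dynamics model: the source column is
# frozen, and a new target column receives lateral features from it while
# being trained on the new scene. Sizes are assumptions for illustration.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 64, 16, 256

class Column(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN)
        self.l2 = nn.Linear(HIDDEN, STATE_DIM)        # predicts next state

class ProgressiveDynamics(nn.Module):
    def __init__(self, source: Column):
        super().__init__()
        self.source = source.requires_grad_(False)    # frozen source column
        self.target = Column()
        self.lateral = nn.Linear(HIDDEN, HIDDEN)      # source -> target adapter

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        h_src = torch.relu(self.source.l1(x))
        h_tgt = torch.relu(self.target.l1(x) + self.lateral(h_src))
        return self.target.l2(h_tgt)

# Only the highest-scoring trajectories (by some dynamics/reward-based
# criterion, stood in for here by score_fn) would be replayed for transfer.
def select_samples(trajectories, score_fn, k=100):
    return sorted(trajectories, key=score_fn, reverse=True)[:k]
```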