
    Multimodal human hand motion sensing and analysis - a review

    Dexterous Soft Hands Linearize Feedback-Control for In-Hand Manipulation

    This paper presents a feedback-control framework for in-hand manipulation (IHM) with dexterous soft hands that enables the acquisition of manipulation skills in the real world within minutes. We choose the deformation state of the soft hand as the control variable. To control for a desired deformation state, we use coarsely approximated Jacobians of the actuation-deformation dynamics. These Jacobians are obtained via explorative actions. This is enabled by the self-stabilizing properties of compliant hands, which allow us to use linear feedback control in the presence of complex contact dynamics. To evaluate the effectiveness of our approach, we show that a learned manipulation skill generalizes to variations in object size of 100%, to 360-degree changes in palm inclination, and to disabling up to 50% of the involved actuators. In addition, complex manipulations can be obtained by sequencing such feedback skills.
    Comment: Accepted at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
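
    A minimal sketch of the control idea, not the authors' code: a toy plant stands in for the soft hand, a coarse actuation-to-deformation Jacobian is finite-differenced from small explorative actions, and the deformation state is then servoed with simple proportional feedback through the Jacobian's pseudoinverse. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.normal(size=(4, 3))            # unknown actuation -> deformation map (toy stand-in)

def hand_deformation(u):
    """Toy plant: noisy deformation reading for actuation u (assumption, not the real soft hand)."""
    return A_true @ u + 0.01 * rng.normal(size=4)

def estimate_jacobian(u0, eps=0.05):
    """Coarse finite-difference Jacobian from explorative actuation perturbations."""
    d0 = hand_deformation(u0)
    J = np.zeros((d0.size, u0.size))
    for i in range(u0.size):
        du = np.zeros_like(u0)
        du[i] = eps
        J[:, i] = (hand_deformation(u0 + du) - d0) / eps
    return J

u = np.zeros(3)
target = A_true @ np.array([0.3, -0.2, 0.1])   # a reachable desired deformation state (toy)
J = estimate_jacobian(u)
for _ in range(50):                             # linear feedback toward the target deformation
    error = target - hand_deformation(u)
    u += 0.2 * np.linalg.pinv(J) @ error        # small proportional step through the coarse Jacobian
print("residual deformation error:", np.linalg.norm(target - hand_deformation(u)))
```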

    Sequential Dexterity: Chaining Dexterous Policies for Long-Horizon Manipulation

    Many real-world manipulation tasks consist of a series of subtasks that differ significantly from one another. Such long-horizon, complex tasks highlight the potential of dexterous hands, whose adaptability and versatility let them transition seamlessly between different modes of functionality without re-grasping or external tools. The challenges, however, arise from the high-dimensional action space of a dexterous hand and the complex compositional dynamics of long-horizon tasks. We present Sequential Dexterity, a general system based on reinforcement learning (RL) that chains multiple dexterous policies to achieve long-horizon task goals. The core of the system is a transition feasibility function that progressively fine-tunes the sub-policies to improve chaining success rate, while also enabling autonomous policy switching to recover from failures and bypass redundant stages. Despite being trained only in simulation with a few task objects, our system generalizes to novel object shapes and transfers zero-shot to a real-world robot equipped with a dexterous hand. More details and video results can be found at https://sequential-dexterity.github.io
    Comment: CoRL 2023
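
    A toy sketch of the chaining idea, not the authors' implementation: stand-in sub-policies are sequenced by a stand-in feasibility score that decides when to switch forward to the next sub-policy or fall back to the previous one for recovery. The policies, dynamics, and thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_policy(goal):
    # toy sub-policy: drifts the state toward its stage goal
    return lambda s: 0.3 * (goal - s)

stage_goals = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]
policies = [make_policy(g) for g in stage_goals]

def feasibility(stage, state):
    """Stand-in feasibility score: how ready the current state is to leave this stage."""
    return float(np.exp(-np.linalg.norm(state - stage_goals[stage])))

state, stage = np.zeros(2), 0
for t in range(200):
    state = state + policies[stage](state) + 0.01 * rng.normal(size=2)
    if stage + 1 < len(policies) and feasibility(stage, state) > 0.9:
        stage += 1          # transition looks feasible: hand over to the next sub-policy
    elif stage > 0 and feasibility(stage - 1, state) < 0.2:
        stage -= 1          # previous stage's outcome was lost: fall back and recover
print("finished at stage", stage, "state", np.round(state, 2))
```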

    Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints

    Biomimetic, dexterous robotic hands have the potential to replicate many of the tasks that a human can perform and to serve as a general manipulation platform. Recent advances in reinforcement learning (RL) frameworks have achieved remarkable performance in quadrupedal locomotion and dexterous manipulation tasks. Combined with GPU-based, highly parallelized simulations capable of simulating thousands of robots at once, RL-based controllers have become more scalable and approachable. However, to bring RL-trained policies to the real world, we require training frameworks that output policies compatible with physical actuators and sensors, as well as a hardware platform that can be manufactured from accessible materials yet is robust enough to run interactive policies. This work introduces the biomimetic tendon-driven Faive Hand and its system architecture, which uses tendon-driven rolling contact joints to achieve a 3D-printable, robust, high-DoF hand design. We model each element of the hand, integrate it into a GPU simulation environment to train a policy with RL, and achieve zero-shot transfer of a dexterous in-hand sphere rotation skill to the physical robot hand.
    Comment: For the project website, see https://srl-ethz.github.io/get-ball-rolling/ ; for a video, see https://youtu.be/YahsMhqNU8o . Submitted to the 2023 IEEE-RAS International Conference on Humanoid Robots
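
    A minimal sketch of the parallel-rollout idea, under toy assumptions and unrelated to the paper's actual simulator or policy: thousands of simulated hands are stepped as one batched array, so a single policy evaluation and a single dynamics call drive all environments at once. The linear "policy" and dynamics below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
num_envs, obs_dim, act_dim = 4096, 24, 16           # e.g. thousands of simulated hands in parallel

W = 0.01 * rng.normal(size=(obs_dim, act_dim))      # toy linear policy weights

def policy(obs):
    """One call produces actions for every environment in the batch."""
    return np.tanh(obs @ W)

def step(obs, act):
    """Toy batched dynamics: stands in for a GPU physics simulator."""
    next_obs = 0.95 * obs + 0.05 * np.pad(act, ((0, 0), (0, obs_dim - act_dim)))
    reward = -np.linalg.norm(next_obs, axis=1)       # per-environment reward, shape (num_envs,)
    return next_obs, reward

obs = rng.normal(size=(num_envs, obs_dim))
returns = np.zeros(num_envs)
for t in range(64):                                  # batched rollout; no per-environment Python loop
    obs, reward = step(obs, policy(obs))
    returns += reward
print("mean return over", num_envs, "parallel envs:", returns.mean())
```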

    MyoDex: A Generalizable Prior for Dexterous Manipulation

    Human dexterity is a hallmark of motor control. Our hands can rapidly synthesize new behaviors despite the complexity of musculoskeletal sensory-motor circuits (multi-articular and multi-joint, with 23 joints controlled by more than 40 muscles). In this work, we take inspiration from how human dexterity builds on a diversity of prior experiences rather than being acquired through a single task. Motivated by this observation, we set out to develop agents that can build on their previous experience to quickly acquire new (previously unattainable) behaviors. Specifically, our approach leverages multi-task learning to implicitly capture a task-agnostic behavioral prior (MyoDex) for human-like dexterity, using a physiologically realistic human hand model, MyoHand. We demonstrate MyoDex's effectiveness in few-shot generalization as well as positive transfer to a large repertoire of unseen dexterous manipulation tasks. Agents leveraging MyoDex can solve approximately 3x more tasks and learn 4x faster than a distillation baseline. While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors. We also demonstrate the effectiveness of our paradigm beyond musculoskeletal control, towards the acquisition of dexterity with the 24-DoF Adroit Hand. Website: https://sites.google.com/view/myodex
    Comment: Accepted to the 40th International Conference on Machine Learning (2023)
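
    A toy illustration of the prior idea, not MyoDex itself: a shared "policy" is averaged over many synthetic tasks and then used to warm-start learning on an unseen task from only a few samples, compared against learning from scratch. The linear model, synthetic tasks, and hyperparameters below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
obs_dim, act_dim = 10, 5
W_shared = rng.normal(size=(obs_dim, act_dim))      # structure common to all synthetic tasks

def make_task():
    """Each toy task = shared structure plus a small task-specific offset (assumption)."""
    return W_shared + 0.1 * rng.normal(size=(obs_dim, act_dim))

def fit(W_init, W_task, steps, n_samples):
    """Gradient descent on an imitation loss from a few (obs, action) samples of the task."""
    X = rng.normal(size=(n_samples, obs_dim))
    Y = X @ W_task
    W = W_init.copy()
    for _ in range(steps):
        W -= 0.01 * X.T @ (X @ W - Y) / n_samples
    return np.linalg.norm(W - W_task)               # distance to the task's true solution

# "Pre-training": the average solution over many seen tasks acts as the behavioral prior.
prior = np.mean([make_task() for _ in range(50)], axis=0)

new_task = make_task()                              # unseen task, only a few samples available
err_from_prior = fit(prior, new_task, steps=100, n_samples=8)
err_from_scratch = fit(np.zeros((obs_dim, act_dim)), new_task, steps=100, n_samples=8)
print("error from prior:", round(err_from_prior, 3), "| from scratch:", round(err_from_scratch, 3))
```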

    Dynamic Handover: Throw and Catch with Bimanual Hands

    Humans throw and catch objects all the time, yet this seemingly common skill poses many challenges for robots: they must perform such dynamic actions at high speed, collaborate precisely, and interact with diverse objects. In this paper, we design a system with two multi-finger hands attached to robot arms to solve this problem. We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy it on the real robots. To overcome the Sim2Real gap, we introduce several novel algorithmic designs, including a learned trajectory prediction model for the object. Such a model gives the robot catcher a real-time estimate of where the object is heading so that it can react accordingly. We conduct experiments with multiple objects on the real-world system and show significant improvements over multiple baselines. Our project page is available at https://binghao-huang.github.io/dynamic_handover/
    Comment: Accepted at CoRL 2023. https://binghao-huang.github.io/dynamic_handover
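
    A small sketch of the trajectory-prediction idea under toy assumptions (a simple ballistic fit, not the paper's learned model): noisy observations of the flying object are fit with a per-axis quadratic in time, and the fit is queried at a future time so the catcher could move there in advance.

```python
import numpy as np

rng = np.random.default_rng(4)
g = np.array([0.0, 0.0, -9.81])
p0, v0 = np.array([0.0, 0.0, 1.0]), np.array([2.0, 0.5, 3.0])   # true launch state, unknown to the catcher

def observe(t):
    """Noisy position measurement of the flying object at time t (toy sensor model)."""
    return p0 + v0 * t + 0.5 * g * t**2 + 0.005 * rng.normal(size=3)

# Fit position(t) = a + b*t + c*t^2 per axis from the first few observations.
t_obs = np.linspace(0.0, 0.3, 10)
P = np.stack([observe(t) for t in t_obs])           # (10, 3) observed positions
T = np.stack([np.ones_like(t_obs), t_obs, t_obs**2], axis=1)
coef, *_ = np.linalg.lstsq(T, P, rcond=None)        # (3, 3): one column of coefficients per axis

def predict(t):
    """Real-time estimate of where the object will be at time t."""
    return np.array([1.0, t, t**2]) @ coef

t_catch = 0.6
print("predicted:", np.round(predict(t_catch), 3))
print("actual:   ", np.round(p0 + v0 * t_catch + 0.5 * g * t_catch**2, 3))
```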