20 research outputs found

    The Multi-fingered Kinematic Model for Dual-arm Manipulation

    Full text link
    Bimanual manipulation requires robots to be sensitive to the grasp force, which is difficult to detect accurately. This paper proposes a reinforcement-learning (RL) framework for improving grasp quality during bimanual manipulation. The framework is based on finger configurations and their feedback; grasp quality is then evaluated by a reward mechanism that lets the hands determine their strategies. Two strategies, simultaneous and interleaved, are selected within this framework to manipulate objects. The contour and centroid of the objects are unknown to the robot. Through the RL framework, robots can perceive the hand-object relation and then optimize finger configurations. Simulations and experiments show that this framework improves success rates and finger-motion accuracy. Comment: arXiv admin note: text overlap with arXiv:2401.0661
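The reward-driven choice between the two strategies the abstract mentions can be illustrated with a minimal epsilon-greedy sketch. This is not the paper's framework: the strategy names are taken from the abstract, but the reward signal and update rule here are assumptions for illustration.

```python
import random

# Hypothetical sketch: an epsilon-greedy selector that picks between the two
# manipulation strategies named in the abstract, using a grasp-quality score
# as reward. The reward values below are stand-ins, not the paper's.
STRATEGIES = ["simultaneous", "interleaved"]

class StrategySelector:
    def __init__(self, epsilon=0.1, lr=0.2):
        self.q = {s: 0.0 for s in STRATEGIES}  # running value estimates
        self.epsilon = epsilon                 # exploration rate
        self.lr = lr                           # learning rate

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)   # explore
        return max(self.q, key=self.q.get)     # exploit best estimate

    def update(self, strategy, grasp_quality):
        # Move the estimate toward the observed grasp-quality reward.
        self.q[strategy] += self.lr * (grasp_quality - self.q[strategy])

selector = StrategySelector()
for _ in range(200):
    s = selector.choose()
    # Stand-in reward: pretend interleaved grasps score higher on average.
    reward = 0.8 if s == "interleaved" else 0.5
    selector.update(s, reward)
```

Over many trials the value estimates converge toward the average grasp quality each strategy yields, so the selector increasingly favors the better one.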

    Dexterous manipulation of unknown objects using virtual contact points

    Get PDF
    The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which the robot interacts. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information, obtained during the manipulation process, by reasoning about the desired and real positions of the fingertips. This takes into account that the desired fingertip positions are not physically reachable, since they are located in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without needing to know their shape or any other physical property. Peer Reviewed. Postprint (author's final draft).
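The virtual-contact-point idea can be sketched as a simple servo law: the fingertip is driven toward an unreachable target inside the object, and the error between that virtual target and the contact point reported by the tactile sensor produces the next displacement. This is an illustrative sketch, not the paper's control law; the gain and vector representation are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a "virtual" fingertip
# target lies inside the object, so the finger presses toward it until the
# tactile sensor reports contact; the controller then servos on the error
# between the virtual target and the sensed contact point.
def fingertip_correction(virtual_target, sensed_contact, gain=0.5):
    """Return an incremental fingertip displacement toward the virtual target.

    virtual_target: desired (unreachable) fingertip position inside the object.
    sensed_contact: contact position reported by the tactile sensor.
    """
    error = np.asarray(virtual_target) - np.asarray(sensed_contact)
    return gain * error  # small step; contact forces stop actual penetration

step = fingertip_correction([0.02, 0.00, 0.05], [0.03, 0.01, 0.05])
```

Because the virtual target is never reached, the finger keeps pressing against the surface, which maintains the contact needed to rotate the object.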

    Prehensile Pushing: In-hand Manipulation with Push-Primitives

    Get PDF
    This paper explores the manipulation of a grasped object by pushing it against its environment. Relying on precise arm motions and detailed models of frictional contact, prehensile pushing enables dexterous manipulation with simple manipulators, such as those currently available in industrial settings, and those likely affordable by service and field robots. This paper is concerned with the mechanics of the forceful interaction between a gripper, a grasped object, and its environment. In particular, we describe the quasi-dynamic motion of an object held by a set of point, line, or planar rigid frictional contacts and forced by an external pusher (the environment). Our model predicts the force required by the external pusher to “break” the equilibrium of the grasp and estimates the instantaneous motion of the object in the grasp. It also captures interesting behaviors such as the constraining effect of line or planar contacts and the guiding effect of the pusher’s motion on the object’s motion. We evaluate the algorithm with three primitive prehensile pushing actions—straight sliding, pivoting, and rolling—with the potential to combine into a broader in-hand manipulation capability. National Science Foundation (U.S.). National Robotics Initiative (Award NSF-IIS-1427050). Karl Chang Innovation Fund Award.
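The notion of a pusher force "breaking" grasp equilibrium can be illustrated with a much simpler check than the paper's quasi-dynamic model: under Coulomb friction, point contacts can resist a tangential force only up to the friction coefficient times the total normal force. The function below is a hedged toy version of that idea, not the paper's mechanics.

```python
# Simplified planar check, not the paper's full quasi-dynamic model: a grasp
# holds an object with point contacts, and an external pusher applies a
# tangential force. Coulomb friction bounds what the grasp can resist.
def pusher_breaks_grasp(pusher_force, normal_forces, mu):
    """True if the tangential pusher force exceeds the friction the grasp
    contacts can supply (i.e. the object starts sliding within the grasp)."""
    max_friction = mu * sum(normal_forces)
    return abs(pusher_force) > max_friction

# Two-finger grasp squeezing with 10 N per contact and mu = 0.3:
# the grasp resists up to 6 N of pushing before the object slides.
holds = pusher_breaks_grasp(5.0, [10.0, 10.0], 0.3)   # False: grasp holds
slips = pusher_breaks_grasp(7.0, [10.0, 10.0], 0.3)   # True: equilibrium breaks
```

The paper's model is far richer (line and planar contacts, instantaneous motion prediction), but the threshold structure is the same: controlled sliding begins exactly when the pusher exceeds what the grasp contacts can resist.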

    Learning Grasp Strategies Composed of Contact Relative Motions

    Get PDF
    Of central importance to grasp synthesis algorithms are the assumptions made about the object to be grasped and the sensory information that is available. Many approaches avoid the issue of sensing entirely by assuming that complete information is available. In contrast, this paper proposes an approach to grasp synthesis expressed in terms of units of control that simultaneously change the contact configuration and sense information about the object and the relative manipulator-object pose. These units of control, known as contact relative motions (CRMs), allow the grasp synthesis problem to be recast as an optimal control problem where the goal is to find a strategy for executing CRMs that leads to a grasp in the shortest number of steps. An experiment is described that uses Robonaut, the NASA-JSC space humanoid, to show that CRMs are a viable means of synthesizing grasps. However, because of the limited amount of information that a single CRM can sense, the optimal control problem may be partially observable. This paper proposes expressing the problem as a k-order Markov Decision Process (MDP) and solving it using reinforcement learning. This approach is tested in a simulation of a two-contact manipulator that learns to grasp an object. Grasp strategies learned in simulation are tested on the physical Robonaut platform and found to consistently lead to grasp configurations.
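The k-order MDP formulation amounts to treating the last k observation/action pairs as the state, which restores (approximate) Markovness when a single CRM senses too little. A minimal sketch of that state representation, with illustrative names not taken from the paper:

```python
from collections import deque

# Sketch of the k-order-MDP idea: a partially observable process is made
# (approximately) Markov by using the last k (observation, action) pairs as
# the state. Observation/action labels here are illustrative.
class KOrderState:
    def __init__(self, k):
        self.k = k
        self.history = deque(maxlen=k)  # keeps only the last k pairs

    def push(self, observation, action):
        self.history.append((observation, action))

    def key(self):
        # Hashable state suitable for a tabular RL method such as Q-learning.
        return tuple(self.history)

s = KOrderState(k=2)
s.push("no_contact", "move_down")
s.push("contact_left", "close")
s.push("contact_both", "squeeze")  # oldest entry is evicted automatically
```

A tabular learner can then index its value function by `s.key()`, trading a larger state space for the ability to disambiguate observations that a single CRM cannot.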

    General In-Hand Object Rotation with Vision and Touch

    Full text link
    We introduce RotateIt, a system that enables fingertip-based object rotation along multiple axes by leveraging multimodal sensory inputs. Our system is trained in simulation, where it has access to ground-truth object shapes and physical properties. Then we distill it to operate on realistic yet noisy simulated visuotactile and proprioceptive sensory inputs. These multimodal inputs are fused via a visuotactile transformer, enabling online inference of object shapes and physical properties during deployment. We show significant performance improvements over prior methods and the importance of visual and tactile sensing. Comment: CoRL 2023; Website: https://haozhi.io/rotateit

    In-Hand Object Rotation via Rapid Motor Adaptation

    Full text link
    Generalized in-hand manipulation has long been an unsolved challenge of robotics. As a small step towards this grand goal, we demonstrate how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips. The controller is trained entirely in simulation on only cylindrical objects, which then - without any fine-tuning - can be directly deployed to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights over the z-axis. This is achieved via rapid online adaptation of the controller to the object properties using only proprioception history. Furthermore, natural and stable finger gaits automatically emerge from training the control policy via reinforcement learning. Code and more videos are available at https://haozhi.io/hora. Comment: CoRL 2022. Code and Website: https://haozhi.io/hora
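The "rapid online adaptation from proprioception history" described above can be pictured as a small module that maps a sliding window of recent joint signals to a latent estimate of object properties, which then conditions the controller. The linear map below is a stand-in for the learned network; the window length and latent dimension are assumptions, not values from the paper.

```python
import numpy as np

# Conceptual sketch of rapid online adaptation: an adaptation module maps a
# window of recent proprioception (e.g. joint positions/torques) to a latent
# vector summarizing object properties. A learned network plays this role in
# the actual system; here a fixed linear projection stands in for it.
def adaptation_module(proprio_history, weights):
    """Flatten the proprioception window and project it to a latent vector."""
    x = np.asarray(proprio_history).ravel()
    return weights @ x

rng = np.random.default_rng(0)
history = rng.normal(size=(30, 16))        # 30 timesteps, 16 joint signals (assumed)
weights = rng.normal(size=(8, 30 * 16))    # latent dimension 8 (assumed)
z = adaptation_module(history, weights)    # z would condition the base policy
```

Because the latent is recomputed from the most recent window at every step, the controller can adjust to a new object's size and weight online, without any fine-tuning of the policy itself.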