8 research outputs found

    Learning Motor Skills of Reactive Reaching and Grasping of Objects

    Get PDF
    Reactive grasping of objects is an essential capability of autonomous robot manipulation, yet it remains challenging to learn sensorimotor control that coordinates coherent hand-finger motions and is robust against disturbances and failures. This work proposed a deep reinforcement learning based scheme to train feedback control policies that coordinate reaching and grasping actions in the presence of uncertainties. We formulated geometric metrics and task-oriented quantities to design the reward, which enabled efficient exploration of grasping policies. Further, to improve the success rate, we deployed key initial states of difficult hand-finger poses to train policies to overcome potential failures arising from challenging configurations. Extensive simulation validations and benchmarks demonstrated that the learned policy was robust in grasping both static and moving objects. Moreover, the policy generated successful failure recoveries within a short time in difficult configurations and was robust to synthetic noise in the state feedback that was unseen during training.
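
    The abstract does not give the reward in closed form; the sketch below is a hypothetical illustration of how geometric terms (reach distance, palm alignment, fingertip closure) and a task-oriented lift bonus might be combined into a scalar reward. All function names, arguments, and weights are assumptions for illustration, not the paper's definitions.

    # Hypothetical reward sketch combining geometric and task-oriented terms;
    # weights and term names are illustrative only, not the paper's reward.
    import numpy as np

    def grasp_reward(palm_pos, obj_pos, palm_normal, approach_dir,
                     fingertip_dists, obj_height, lift_target=0.1,
                     w_reach=1.0, w_align=0.5, w_close=0.5, w_lift=2.0):
        """Return a scalar reward from simple geometric quantities."""
        # Geometric term: negative distance between palm and object centre.
        reach = -np.linalg.norm(palm_pos - obj_pos)
        # Geometric term: alignment of the palm normal with the approach direction.
        align = float(np.dot(palm_normal, approach_dir))
        # Geometric term: encourage fingertips to close around the object.
        close = -float(np.mean(fingertip_dists))
        # Task-oriented term: bonus once the object is lifted above a target height.
        lift = 1.0 if obj_height > lift_target else 0.0
        return w_reach * reach + w_align * align + w_close * close + w_lift * lift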

    Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning

    Full text link
    Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling the physical properties of the objects, the robot, and the environment for explicit planning. In contrast, because explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based only on visual feedback. For this, we model the task with rewards and train a deep Q-network. Our potential field-based heuristic exploration strategy reduces the number of collisions that lead to suboptimal outcomes, and we actively balance the training set to avoid bias towards poor examples. Our training process leads to quicker learning and better performance on the task compared to uniform exploration and standard experience replay. We demonstrate empirical evidence from simulation that our method achieves a success rate of 85%, show that our system can cope with sudden changes of the environment, and compare our performance with human-level performance.
    Comment: 2018 International Conference on Robotics and Automation
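
    As an illustration of potential field-based heuristic exploration, the sketch below biases the exploratory actions of an epsilon-greedy agent towards pushes that move the object closer to its goal while staying away from obstacles. The scoring function, its weights, and the softmax sampling are assumptions, not the paper's formulation.

    # Hypothetical potential-field-guided exploration for a DQN-style agent;
    # the heuristic score and sampling scheme are illustrative assumptions.
    import random
    import numpy as np

    def potential_field_score(action, pusher_pos, obj_pos, goal_pos, obstacles):
        """Score a candidate push: attractive goal term minus repulsive obstacle terms."""
        next_pos = pusher_pos + action  # assume actions are small 2D displacements
        # Crude assumption: the push displaces the object by roughly the same amount.
        attract = -np.linalg.norm((obj_pos + action) - goal_pos)
        repel = sum(1.0 / (1e-3 + np.linalg.norm(next_pos - o)) for o in obstacles)
        return attract - 0.1 * repel

    def select_action(q_values, actions, state, epsilon=0.1):
        """Greedy on Q-values, but explore by sampling heuristic-scored actions."""
        if random.random() > epsilon:
            return int(np.argmax(q_values))
        pusher_pos, obj_pos, goal_pos, obstacles = state
        scores = np.array([potential_field_score(a, pusher_pos, obj_pos,
                                                 goal_pos, obstacles) for a in actions])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        return int(np.random.choice(len(actions), p=probs))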

    Precision Grasp Planning for Integrated Arm-Hand Systems

    Get PDF
    The demographic shift has caused labor shortages across the world, and it seems inevitable to rely on robots more than ever to fill the widening gap in the workforce. The robotic replacement of human workers necessitates the ability of autonomous grasping, a natural yet vital part of almost all activities. Among different types of grasping, fingertip grasping attracts much attention because of its superior performance for dexterous manipulation. This thesis contributes to autonomous fingertip grasping in four areas: hand-eye calibration, grasp quality evaluation, inverse kinematics (IK) solution of robotic arm-hand systems, and simultaneous grasp planning and IK solution. To initiate autonomous grasping, object perception is the first required step. Stereo cameras are widely adopted for obtaining an object's 3D model. However, the data acquired through a camera is expressed in the camera frame, while robots only accept commands encoded in the robot frame. This dilemma necessitates calibration between the robot (hand) and the camera (eye), with the main goal of estimating the camera's pose relative to the robot end-effector so that camera-acquired measurements can be converted into the robot frame. We first study the hand-eye calibration problem and achieve accurate results through a point set matching formulation. With the object's 3D measurements expressed in the robot frame, the next step is finding an appropriate grasp configuration (contact points + contact normals) on the object's surface. To this end, we present an efficient grasp quality evaluation method to calculate a popular wrench-based quality metric which measures the minimum distance from the wrench space origin ($\vec{0}_{6\times 1}$) to the boundary of the grasp wrench space (GWS). The proposed method mathematically expresses the exact boundary of the GWS, which allows the quality of a grasp to be evaluated at a speed that is desirable in most robotic applications. Having obtained a suitable grasp configuration, an accurate IK solution of the arm-hand system is required to perform the planned grasp. Conventionally, the IK of the robotic hand and arm are solved sequentially, which often affects the efficiency and accuracy of the IK solutions. To overcome this problem, we kinematically integrate the robotic arm and hand and propose a human-inspired Thumb-First strategy to narrow down the search space of the IK solution. Based on the Thumb-First strategy, we propose two IK solutions. Our first solution follows a hierarchical IK strategy, while our second solution formulates the arm-hand system as a hybrid parallel-serial system to achieve a higher success rate. Using these results, we propose an approach that integrates grasp planning and IK solution by following a specially designed coarse-to-fine strategy to improve overall efficiency.
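
    The thesis formulates hand-eye calibration as point set matching. A minimal sketch of that idea, assuming known 3D point correspondences between the camera and robot frames, is the standard SVD-based (Kabsch) rigid registration below; it stands in for, and does not reproduce, the thesis method.

    # Sketch of point-set matching for hand-eye calibration: estimate the rigid
    # transform aligning camera-frame points to robot-frame points via SVD.
    import numpy as np

    def rigid_transform(camera_pts, robot_pts):
        """Return rotation R and translation t with robot_pts ≈ R @ camera_pts + t."""
        P = np.asarray(camera_pts, dtype=float)   # N x 3 points in the camera frame
        Q = np.asarray(robot_pts, dtype=float)    # N x 3 corresponding robot-frame points
        cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
        H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t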