
    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL). Website: https://sites.google.com/view/more-than-a-feelin
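The action-conditional idea above lends itself to a compact sketch: a small multimodal network scores candidate grasp adjustments from the current camera and tactile images, and the gripper executes the highest-scoring one. Everything below is an illustrative assumption rather than the authors' architecture: the layer sizes, the 3-D adjustment parameterization (dx, dy, dtheta), the 64x64 inputs, and the helper names GraspOutcomeModel and select_adjustment are hypothetical.

```python
# Hypothetical sketch of an action-conditional grasp-outcome model: a small
# multimodal network scores candidate grasp adjustments from visual and
# tactile images, and the robot executes the highest-scoring one.
import torch
import torch.nn as nn


def image_encoder() -> nn.Sequential:
    """Tiny CNN encoder, same structure for the visual and tactile branches."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
    )


class GraspOutcomeModel(nn.Module):
    """Predicts P(success) of a candidate grasp adjustment from vision + touch."""

    def __init__(self, action_dim: int = 3):
        super().__init__()
        self.rgb_enc = image_encoder()
        self.tactile_enc = image_encoder()
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, rgb, tactile, action):
        feat = torch.cat([self.rgb_enc(rgb), self.tactile_enc(tactile), action], dim=1)
        return torch.sigmoid(self.head(feat)).squeeze(1)  # success probability


def select_adjustment(model, rgb, tactile, candidate_actions):
    """Score a batch of candidate adjustments and return the most promising one."""
    n = candidate_actions.shape[0]
    with torch.no_grad():
        scores = model(rgb.expand(n, -1, -1, -1), tactile.expand(n, -1, -1, -1),
                       candidate_actions)
    return candidate_actions[scores.argmax()], scores.max().item()


if __name__ == "__main__":
    model = GraspOutcomeModel()
    rgb = torch.rand(1, 3, 64, 64)        # current camera image
    tactile = torch.rand(1, 3, 64, 64)    # current tactile image
    candidates = torch.rand(8, 3) * 0.02  # sampled (dx, dy, dtheta) adjustments
    best_action, p_success = select_adjustment(model, rgb, tactile, candidates)
    print(best_action, p_success)
```

In this form the same scoring network serves both outcome estimation and action selection, mirroring the iterative "predict, then pick the most promising adjustment" loop described in the abstract.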

    Haptic search with the Smart Suction Cup on adversarial objects

    Suction cups are an important gripper type in industrial robot applications, and prior literature focuses on using vision-based planners to improve grasping success in these tasks. Vision-based planners can fail on adversarial objects or lose generalizability in unseen scenarios unless the learned algorithms are retrained. We propose haptic exploration to improve suction cup grasping when visual grasp planners fail. We present the Smart Suction Cup, an end-effector that utilizes internal flow measurements for tactile sensing. We show that model-based haptic search methods, guided by these flow measurements, improve grasping success by up to 2.5x compared with using only a vision planner during a bin-picking task. In characterizing the Smart Suction Cup on both geometric edges and curves, we find that flow rate can accurately predict the ideal motion direction even with large postural errors. The Smart Suction Cup includes no electronics on the cup itself, so the design is easy to fabricate and haptic exploration does not damage the sensor. This work motivates the use of suction cups with autonomous haptic search capabilities in especially adversarial scenarios.
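As a rough illustration of flow-guided haptic search, the sketch below nudges the cup laterally toward the side that leaks least until the total leak flow falls below a sealing threshold. The per-quadrant flow interface (read_quadrant_flows), the sign convention, and the thresholds are hypothetical assumptions for illustration; the actual Smart Suction Cup controller may differ.

```python
# Hypothetical sketch of flow-guided haptic search: when a vision-planned
# grasp fails to seal, lateral flow imbalance picks a motion direction until
# the leak flow drops below a sealing threshold.
import numpy as np


def haptic_search(read_quadrant_flows, move_cup, seal_threshold=0.05,
                  step_size=2e-3, max_steps=20):
    """Iteratively nudge the suction cup toward the direction of lower leakage.

    read_quadrant_flows(): returns leak flows [+x, -x, +y, -y] (hypothetical API).
    move_cup(dx, dy): commands a small lateral end-effector motion.
    """
    for _ in range(max_steps):
        f_px, f_nx, f_py, f_ny = read_quadrant_flows()
        total_flow = f_px + f_nx + f_py + f_ny
        if total_flow < seal_threshold:          # low leakage -> seal achieved
            return True
        # Step toward the side leaking least, i.e. down the flow-imbalance
        # "gradient"; normalize so the step size stays constant.
        direction = np.array([f_nx - f_px, f_ny - f_py])
        norm = np.linalg.norm(direction)
        if norm < 1e-9:                          # no usable cue; give up
            break
        move_cup(*(step_size * direction / norm))
    return False
```

A routine like this would be invoked only after the vision planner's grasp attempt fails to seal, which matches the fallback role haptic search plays in the bin-picking experiments described above.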

    simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects

    Existing robotic systems have a clear tension between generality and precision. Deployed solutions for robotic manipulation tend to fall into the paradigm of one robot solving a single task, lacking precise generalization, i.e., the ability to solve many tasks without compromising on precision. This paper explores solutions for precise and general pick-and-place. In precise pick-and-place, i.e., kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement, which can facilitate further manipulation. We propose simPLE (simulation to Pick Localize and PLacE) as a solution to precise pick-and-place. simPLE learns to pick, regrasp, and place objects precisely, given only the object CAD model and no prior experience. We develop three main components: task-aware grasping, visuotactile perception, and regrasp planning. Task-aware grasping computes affordances of grasps that are stable, observable, and favorable to placing. The visuotactile perception model relies on matching real observations against a set of simulated ones through supervised learning. Finally, we compute the desired robot motion by solving a shortest path problem on a graph of hand-to-hand regrasps. On a dual-arm robot equipped with visuotactile sensing, we demonstrate pick-and-place of 15 diverse objects with simPLE. The objects span a wide range of shapes, and simPLE achieves successful placements into structured arrangements with 1 mm clearance over 90% of the time for 6 objects, and over 80% of the time for 11 objects. Videos are available at http://mcube.mit.edu/research/simPLE.html.
    Comment: 33 pages, 6 figures, 2 tables, submitted to Science Robotics.
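The regrasp-planning step above can be pictured as a shortest-path search over a graph whose nodes are grasps and whose edges are feasible hand-to-hand regrasps. The sketch below is a generic Dijkstra search over such a graph; the node naming, the edge costs, and the shortest_regrasp_path helper are invented for illustration and are not simPLE's actual grasp set or planner.

```python
# Hypothetical sketch of regrasp planning as a shortest-path problem:
# grasps are nodes, feasible hand-to-hand regrasps are weighted edges, and
# the robot motion is read off the cheapest path from the picking grasp to
# any grasp that admits the desired placement.
import heapq


def shortest_regrasp_path(edges, start, goals):
    """Dijkstra over a dict {node: {neighbor: cost}}; returns the cheapest
    node sequence from `start` to any node in `goals`, or None."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if node in goals:
            return path, cost
        for nbr, w in edges.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None


if __name__ == "__main__":
    # Nodes: (arm, grasp id). Edge weights: rough cost of each regrasp motion.
    edges = {
        ("left", "g0"): {("right", "g3"): 1.0, ("right", "g5"): 2.5},
        ("right", "g3"): {("left", "g7"): 1.0},
        ("right", "g5"): {("left", "g7"): 0.5},
        ("left", "g7"): {},
    }
    placeable = {("left", "g7")}   # grasps from which the placement succeeds
    print(shortest_regrasp_path(edges, ("left", "g0"), placeable))
```

Edge weights in such a graph would typically encode motion cost or expected regrasp reliability, so the cheapest path doubles as the most dependable handover sequence.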

    Nonprehensile Manipulation via Multisensory Learning from Demonstration

    The dexterous manipulation problem concerns controlling a robot hand to manipulate an object in a desired manner. While classical dexterous manipulation strategies are based on stable grasping (or force closure), many human-like manipulation tasks do not maintain grasp stability and often exploit the intrinsic dynamics of the object rather than a closed-form kinematic relation between the object and the robotic fingers. Such manipulation strategies are referred to as nonprehensile or dynamic dexterous manipulation in the literature. Nonprehensile manipulation typically involves fast and agile movements such as throwing and flipping. Due to the complexity of such motions (which may involve impulsive dynamics) and the uncertainties associated with them, it has been challenging to realize nonprehensile manipulation tasks reliably. In this paper, we propose a new control strategy to realize practical nonprehensile manipulation tasks with a robot hand. The main idea of our control strategy is two-fold. First, we make explicit use of multiple modalities of sensory data in the design of the control law: force data is employed for feedforward control, while position data is used for feedback (i.e., reactive) control. Second, the control signals (both feedforward and feedback) are obtained through multisensory learning from demonstration (LfD) experiments designed and performed for the specific nonprehensile manipulation task of interest. We utilize LfD frameworks such as Gaussian mixture models with Gaussian mixture regression (GMM/GMR) and hidden Markov models with GMR (HMM/GMR) to reproduce generalized motion profiles from the human expert's demonstrations. The proposed control strategy has been verified experimentally on a dynamic spinning task using a sensor-rich two-finger robotic hand, and its control performance (i.e., the speed and accuracy of the spinning task) has been compared with that of classical dexterous manipulation based on finger gaiting.
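To make the GMM/GMR step concrete, the sketch below fits a Gaussian mixture to time-indexed demonstration data and then uses Gaussian mixture regression to recover a generalized motion profile by conditioning on time. The synthetic 1-D demonstrations, the component count, and the gmr helper are illustrative assumptions; the paper's tasks involve richer force and position signals.

```python
# Minimal GMM/GMR sketch: fit a Gaussian mixture to (time, signal) pairs from
# several demonstrations, then regress the expected signal at each time step.
import numpy as np
from sklearn.mixture import GaussianMixture


def gmr(gmm, t_query):
    """Condition a fitted GMM over (t, x) on t and return E[x | t]."""
    preds = []
    for t in np.atleast_1d(t_query):
        h, mu_c = [], []
        for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
            mu_t, mu_x = mu[0], mu[1:]
            s_tt, s_xt = cov[0, 0], cov[1:, 0]
            # Responsibility of component k at this time step (constants cancel
            # in the normalization below).
            h.append(pi * np.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / np.sqrt(s_tt))
            # Conditional mean of x given t for component k.
            mu_c.append(mu_x + s_xt / s_tt * (t - mu_t))
        h = np.array(h) / np.sum(h)
        preds.append(np.sum(h[:, None] * np.array(mu_c), axis=0))
    return np.array(preds)


if __name__ == "__main__":
    # Synthetic "demonstrations": noisy copies of a smooth 1-D profile.
    t = np.linspace(0.0, 1.0, 100)
    demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(5)]
    data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # (t, x) pairs
    gmm = GaussianMixture(n_components=6, covariance_type="full").fit(data)
    reproduced = gmr(gmm, t)   # generalized motion profile over time
    print(reproduced.shape)    # (100, 1)
```

In the strategy described above, profiles reproduced this way would supply the feedforward (force) and feedback (position) control signals.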