23 research outputs found
Improving grasping forces during the manipulation of unknown objects
© 2018 IEEE. Many solutions proposed for the object manipulation problem rely on prior knowledge of the object's features. The approach proposed in this paper provides a simple geometric method to securely manipulate an unknown object based only on tactile and kinematic information. The tactile and kinematic data obtained during manipulation are used to recognize the object's shape (at least the local object curvature), and adding this information to the manipulation strategy improves the grasping forces.
The approach has been fully implemented and tested using the Schunk Dexterous Hand (SDH2). Experimental results illustrate the efficiency of the approach.
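The abstract does not detail how local curvature is recovered from contacts, but one common geometric route is to fit the circle through three fingertip contact points; curvature is then the inverse of the circumradius. The sketch below is purely illustrative and is not claimed to be the paper's method; `local_curvature` and its tolerance threshold are assumptions.

```python
# Illustrative sketch (NOT the paper's algorithm): estimate local surface
# curvature from three 2-D contact points by fitting the circle through
# them; curvature = 1 / circumradius. Near-collinear points are treated
# as a locally flat surface (curvature 0).
import math

def local_curvature(p1, p2, p3):
    """Curvature of the circle through three 2-D contact points."""
    a = math.dist(p2, p3)                     # side lengths of the triangle
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Triangle area from the cross product of two edge vectors.
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    if area < 1e-12:                          # collinear: flat surface
        return 0.0
    # Circumradius R = a*b*c / (4 * area); curvature is 1 / R.
    return (4.0 * area) / (a * b * c)

# Three points on the unit circle give curvature ~ 1.0.
print(local_curvature((1, 0), (0, 1), (-1, 0)))
```

With more than three contacts (e.g. all taxels of a finger pad), a least-squares circle fit would give a more robust estimate of the same quantity.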
TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors
Tactile sensors provide useful contact data during the interaction with an
object which can be used to accurately learn to determine the stability of a
grasp. Most works in the literature have represented tactile readings as
plain feature vectors or matrix-like tactile images and used them to train
machine learning models. In this work, we explore an alternative way of
exploiting tactile information to predict grasp stability by leveraging
graph-like representations of tactile data, which preserve the actual spatial
arrangement of the sensor's taxels and their locality. In experimentation, we
trained a Graph Neural Network to classify grasps as stable or slippery.
To train such a network and prove its predictive capabilities for the
problem at hand, we captured a novel dataset of approximately 5,000
three-fingered grasps across 41 objects for training and 1000 grasps with 10
unknown objects for testing. Our experiments show that this novel approach can
be effectively used to predict grasp stability.
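The key idea above is representing the sensor's taxels as graph nodes whose edges encode physical adjacency. A minimal sketch of that representation, assuming a rectangular taxel grid with 4-neighbour connectivity (the actual taxel layout and graph construction in the paper may differ):

```python
# Hedged sketch: convert a rectangular grid of tactile readings into a
# graph (node features + adjacency matrix), the kind of input a graph
# convolutional network consumes. The grid shape and 4-neighbour edges
# are assumptions, not the paper's exact sensor layout.
import numpy as np

def taxel_grid_to_graph(pressures: np.ndarray):
    """pressures: (rows, cols) array of taxel readings.
    Returns node features of shape (N, 1) and a symmetric binary
    adjacency matrix of shape (N, N), with N = rows * cols."""
    rows, cols = pressures.shape
    n = rows * cols
    x = pressures.reshape(n, 1).astype(float)     # one feature per taxel
    adj = np.zeros((n, n), dtype=int)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (0, 1)):       # link down/right neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    adj[i, j] = adj[j, i] = 1     # undirected edge
    return x, adj

x, adj = taxel_grid_to_graph(np.zeros((4, 4)))
```

Unlike a flat feature vector, this representation preserves which taxels are physically next to each other, which is exactly the locality a graph convolution exploits.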
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile
feedback. Most recent robotic grasping work, however, has been based only on
visual input, and thus cannot easily benefit from feedback after initiating
contact. In this paper, we investigate how a robot can learn to use tactile
information to iteratively and efficiently adjust its grasp. To this end, we
propose an end-to-end action-conditional model that learns regrasping policies
from raw visuo-tactile data. This model -- a deep, multimodal convolutional
network -- predicts the outcome of a candidate grasp adjustment, and then
executes a grasp by iteratively selecting the most promising actions. Our
approach requires neither calibration of the tactile sensors, nor any
analytical modeling of contact forces, thus reducing the engineering effort
required to obtain efficient grasping policies. We train our model with data
from about 6,450 grasping trials on a two-finger gripper equipped with GelSight
high-resolution tactile sensors on each finger. Across extensive experiments,
our approach outperforms a variety of baselines at (i) estimating grasp
adjustment outcomes, (ii) selecting efficient grasp adjustments for quick
grasping, and (iii) reducing the amount of force applied at the fingers, while
maintaining competitive performance. Finally, we study the choices made by our
model and show that it has successfully acquired useful and interpretable
grasping behaviors. Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL).
Website: https://sites.google.com/view/more-than-a-feelin
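The regrasping loop described above — score candidate grasp adjustments with a learned outcome predictor, then execute the most promising one — can be sketched as follows. The predictor here is a toy stand-in, not the paper's deep multimodal CNN over raw visuo-tactile data; `predict_success` and the candidate action set are assumptions for illustration.

```python
# Hedged sketch of action-conditional grasp adjustment: evaluate each
# candidate action with an outcome predictor and greedily pick the best.
# predict_success is a STAND-IN for the learned model in the paper.

def predict_success(observation, action):
    """Toy scorer standing in for the learned action-conditional model.
    Here it simply prefers small adjustments; the real model would score
    raw visuo-tactile observations with a deep network."""
    dx, dz = action
    return 1.0 / (1.0 + dx * dx + dz * dz)

def best_adjustment(observation, candidate_actions):
    """Select the candidate grasp adjustment with the highest
    predicted probability of a successful grasp."""
    return max(candidate_actions,
               key=lambda a: predict_success(observation, a))

candidates = [(0.0, 0.0), (0.5, 0.0), (0.0, -0.5)]
print(best_adjustment(None, candidates))  # -> (0.0, 0.0)
```

In the paper's closed-loop setting this selection would be repeated after each executed adjustment, using fresh tactile and visual feedback as the new observation.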