Localization and Manipulation of Small Parts Using GelSight Tactile Sensing
Robust manipulation and insertion of small parts can be challenging because of the small tolerances typically involved. The key to robust control of these kinds of manipulation interactions is accurate tracking and control of the parts involved. Typically, this is accomplished using visual servoing or force-based control. However, these approaches have drawbacks. Instead, we propose a new approach that uses tactile sensing to accurately localize the pose of a part grasped in the robot hand. Using a feature-based matching technique in conjunction with a newly developed tactile sensing technology known as GelSight that has much higher resolution than competing methods, we synthesize high-resolution height maps of object surfaces. As a result of these high-resolution tactile maps, we are able to localize small parts held in a robot hand very accurately. We quantify localization accuracy in benchtop experiments and experimentally demonstrate the practicality of the approach in the context of a small parts insertion problem.
Funding: National Science Foundation (U.S.) (NSF Grant No. 1017862); United States. National Aeronautics and Space Administration (NASA Grant No. NNX13AQ85G); United States. Office of Naval Research (ONR Grant No. N000141410047)
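A minimal sketch of the kind of feature-based matching the abstract describes, assuming the tactile height map is rendered as a grayscale image; OpenCV's ORB features and a RANSAC rigid fit are stand-ins for the paper's own feature pipeline, and the detector choice, thresholds, and helper names are illustrative assumptions:

```python
# Hedged sketch: localize a grasped part by matching features between a
# reference tactile height map and a new tactile height map.
# ORB + rigid RANSAC fit is a stand-in for the paper's feature pipeline.
import cv2
import numpy as np

def to_gray(height_map: np.ndarray) -> np.ndarray:
    """Normalize a float height map to an 8-bit image for feature detection."""
    hm = height_map - height_map.min()
    hm = hm / (hm.max() + 1e-9)
    return (hm * 255).astype(np.uint8)

def estimate_pose(ref_map: np.ndarray, new_map: np.ndarray):
    """Return a 2x3 rigid transform (rotation + translation) in sensor pixels."""
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(to_gray(ref_map), None)
    k2, d2 = orb.detectAndCompute(to_gray(new_map), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Rotation + translation fit with RANSAC to reject bad matches.
    transform, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return transform
```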
HySenSe: A Hyper-Sensitive and High-Fidelity Vision-Based Tactile Sensor
In this paper, to address the sensitivity-durability trade-off of
Vision-based Tactile Sensors (VTSs), we introduce a hyper-sensitive and
high-fidelity VTS called HySenSe. We demonstrate that by changing only one
step during the fabrication of the gel layer of the GelSight sensor (the
most well-known VTS), we can substantially improve its sensitivity and
durability. Our experimental results clearly show that HySenSe outperforms
a similar GelSight sensor in detecting textural details of various objects
under identical experimental conditions and low interaction forces (<= 1.5 N).
Comment: Accepted to IEEE Sensors 2022 Conference
Tactile Mapping and Localization from High-Resolution Tactile Imprints
This work studies the problem of shape reconstruction and object localization
using a vision-based tactile sensor, GelSlim. The main contributions are the
recovery of local shapes from contact, an approach to reconstruct the tactile
shape of objects from tactile imprints, and an accurate method for localizing
previously reconstructed objects. The algorithms can be applied to a large
variety of 3D objects and provide accurate tactile feedback for in-hand
manipulation. Results show that by exploiting the dense tactile information we
can reconstruct the shape of objects with high accuracy and perform online
object identification and localization, opening the door to reactive
manipulation guided by tactile sensing. We provide videos and supplemental
information on the project's website:
http://web.mit.edu/mcube/research/tactile_localization.html.
Comment: ICRA 2019, 7 pages, 7 figures. Website:
http://web.mit.edu/mcube/research/tactile_localization.html Video:
https://youtu.be/uMkspjmDbq
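A minimal sketch of localizing one tactile imprint against a previously reconstructed object, assuming both are available as 3D point clouds; Open3D's point-to-plane ICP is used here as a stand-in for the paper's registration method, and the initial guess, distance threshold, and helper names are assumptions:

```python
# Hedged sketch: register a local point cloud recovered from one tactile
# imprint against a reconstructed object model with point-to-plane ICP.
# The initial guess T_init would typically come from the grasp pose.
import numpy as np
import open3d as o3d

def localize_imprint(imprint_points: np.ndarray,
                     model_points: np.ndarray,
                     T_init: np.ndarray,
                     max_dist: float = 0.005) -> np.ndarray:
    """Return the 4x4 transform mapping imprint points onto the object model."""
    imprint = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(imprint_points))
    model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
    model.estimate_normals()  # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        imprint, model, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```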
GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger
This work describes the development of a high-resolution tactile-sensing
finger for robot grasping. This finger, inspired by previous GelSight sensing
techniques, features an integration that is slimmer, more robust, and has more
homogeneous output than previous vision-based tactile sensors. To achieve a
compact integration, we redesign the optical path from illumination source to
camera by combining light guides and an arrangement of mirror reflections. We
parameterize the optical path with geometric design variables and describe the
tradeoffs between the finger thickness, the depth of field of the camera, and
the size of the tactile sensing area. The sensor sustains the wear from
continuous use -- and abuse -- in grasping tasks by combining tougher materials
for the compliant soft gel, a textured fabric skin, a structurally rigid body,
and a calibration process that maintains homogeneous illumination and contrast
of the tactile images during use. Finally, we evaluate the sensor's durability
along four metrics that track the signal quality during more than 3000 grasping
experiments.
Comment: RA-L pre-print. 8 pages
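To make the thickness/depth-of-field tradeoff concrete, the following sketch evaluates the standard thin-lens depth-of-field approximation; it is not the paper's parameterization (which also involves mirror and light-guide geometry), and the numeric values are illustrative assumptions:

```python
# Hedged sketch: standard thin-lens depth-of-field approximation, used only
# to illustrate why a short optical path (thin finger) constrains the sharp
# region available to cover the tactile sensing area.
def depth_of_field(f_mm: float, n_stop: float, c_mm: float, u_mm: float) -> float:
    """Approximate depth of field (mm) when focused at distance u_mm."""
    hyperfocal = f_mm ** 2 / (n_stop * c_mm) + f_mm
    if u_mm >= hyperfocal:
        return float("inf")  # everything beyond the near limit stays acceptably sharp
    near = u_mm * (hyperfocal - f_mm) / (hyperfocal + u_mm - 2 * f_mm)
    far = u_mm * (hyperfocal - f_mm) / (hyperfocal - u_mm)
    return far - near

# Example (illustrative numbers): a close focus distance forced by a thin
# finger yields only a few millimeters of usable depth of field.
print(depth_of_field(f_mm=3.6, n_stop=2.8, c_mm=0.005, u_mm=40.0))
```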
Shear-invariant Sliding Contact Perception with a Soft Tactile Sensor
Manipulation tasks often require robots to be continuously in contact with an
object. Tactile perception systems therefore need to handle continuous contact
data. Shear deformation causes the tactile sensor to output path-dependent
readings, in contrast to discrete contact readings. As such, in some
continuous-contact tasks, sliding can be regarded as a disturbance over the
sensor signal. Here we present a shear-invariant perception method based on
principal component analysis (PCA) which outputs the required information about
the environment despite sliding motion. A compliant tactile sensor (the TacTip)
is used to investigate continuous tactile contact. First, we evaluate the
method offline using test data collected while the sensor slides over an edge.
Then, the method is used within a contour-following task applied to 6 objects
with varying curvatures; all contours are successfully traced. The method
demonstrates generalisation capabilities and could underlie a more
sophisticated controller for challenging manipulation or exploration tasks in
unstructured environments. A video showing the work described in the paper can
be found at https://youtu.be/wrTM61-pieU
Comment: Accepted at ICRA 2019
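A minimal sketch of the PCA idea, assuming each tactile frame is a flattened vector of marker positions and the quantity of interest is an edge angle; the ridge regressor, component count, and data shapes are illustrative assumptions rather than the authors' exact pipeline:

```python
# Hedged sketch: project tactile readings onto principal components fitted
# over data that includes sliding, then predict the contact feature (edge
# angle) from the projected representation so the prediction is less
# sensitive to shear-induced, path-dependent variation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_shear_invariant_model(X_train: np.ndarray, y_train: np.ndarray,
                              n_components: int = 10):
    """X_train: (samples, 2 * n_markers) marker positions; y_train: edge angles."""
    pca = PCA(n_components=n_components).fit(X_train)
    reg = Ridge(alpha=1.0).fit(pca.transform(X_train), y_train)
    return pca, reg

def predict_edge_angle(pca: PCA, reg: Ridge, x: np.ndarray) -> float:
    """Predict the edge angle for one tactile frame despite sliding motion."""
    return float(reg.predict(pca.transform(x.reshape(1, -1)))[0])
```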
Connecting Look and Feel: Associating the visual and tactile properties of physical materials
For machines to interact with the physical world, they must understand the
physical properties of the objects and materials they encounter. We use fabrics
as an example of a deformable material with a rich set of mechanical
properties. A thin, flexible fabric, when draped, tends to look different from
a heavy, stiff fabric. It also feels different when touched. Using a collection
of 118 fabric samples, we captured color and depth images of draped fabrics
along with tactile data from a high-resolution touch sensor. We then sought to
associate the information from vision and touch by jointly training CNNs across
the three modalities. Through the CNN, each input, regardless of modality,
generates an embedding vector that records the fabric's physical properties. By
comparing the embeddings, our system is able to look at a fabric image and
predict how it will feel, and vice versa. We also show that a system jointly
trained on vision and touch data can outperform a similar system trained only
on visual data when tested purely with visual inputs.
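A minimal sketch of joint embedding training across modalities, assuming small PyTorch CNN encoders and a triplet loss that pulls together embeddings of the same fabric seen by different senses; the architecture, channel counts, loss margin, and training step are assumptions, not the paper's exact setup:

```python
# Hedged sketch: three small CNN encoders (color, depth, touch) map their
# inputs into a shared embedding space; a triplet loss pulls embeddings of
# the same fabric together across modalities and pushes other fabrics apart.
import torch
import torch.nn as nn

def make_encoder(in_channels: int, embed_dim: int = 128) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim))

color_enc, depth_enc, touch_enc = make_encoder(3), make_encoder(1), make_encoder(3)
triplet = nn.TripletMarginLoss(margin=1.0)
params = (list(color_enc.parameters()) + list(depth_enc.parameters())
          + list(touch_enc.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(color_img, touch_img, touch_img_other_fabric):
    """Anchor: color image; positive: touch of same fabric; negative: other fabric."""
    opt.zero_grad()
    loss = triplet(color_enc(color_img), touch_enc(touch_img),
                   touch_enc(touch_img_other_fabric))
    loss.backward()
    opt.step()
    return loss.item()
```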