GelSight360: An Omnidirectional Camera-Based Tactile Sensor for Dexterous Robotic Manipulation
Camera-based tactile sensors have shown great promise in enhancing a robot's
ability to perform a variety of dexterous manipulation tasks. Advantages of
their use can be attributed to the high-resolution tactile data and 3D depth
map reconstructions they can provide. Unfortunately, many of these tactile
sensors use either a flat sensing surface, sense on only one side of the
sensor's body, or have a bulky form-factor, making it difficult to integrate
the sensors with a variety of robotic grippers. Of the camera-based sensors
that do have all-around, curved sensing surfaces, many cannot provide 3D depth
maps; those that do often require optical designs specific to a particular
sensor geometry. In this work, we introduce GelSight360, a fingertip-like,
omnidirectional, camera-based tactile sensor capable of producing depth maps of
objects deforming the sensor's surface. In addition, we introduce a novel
cross-LED lighting scheme that can be implemented in different all-around
sensor geometries and sizes, allowing the sensor to easily be reconfigured and
attached to different grippers of varying DOFs. With this work, we enable
roboticists to quickly and easily customize high-resolution tactile sensors to
fit their robotic systems' needs.
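GelSight-family sensors typically recover depth photometrically: calibrated illumination shades the elastomer skin, per-pixel surface normals are inferred from those shadings, and the normals are integrated into a depth map. Below is a minimal sketch of that pipeline under Lambertian assumptions; the function names, light directions, and naive integration scheme are illustrative and are not taken from the GelSight360 paper:

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Recover per-pixel surface normals from shaded images.

    Classic Lambertian photometric stereo: each pixel intensity is
    I = L @ n (L = stacked light directions), so n = pinv(L) @ I.
    `images` has shape (k, H, W); `light_dirs` has shape (k, 3).
    """
    k, h, w = images.shape
    L = np.asarray(light_dirs, dtype=float)       # (k, 3)
    I = images.reshape(k, -1)                     # (k, H*W)
    G = np.linalg.lstsq(L, I, rcond=None)[0]      # (3, H*W), albedo-scaled normals
    norm = np.linalg.norm(G, axis=0, keepdims=True)
    n = G / np.clip(norm, 1e-8, None)             # unit normals
    return n.reshape(3, h, w)

def integrate_gradients(p, q):
    """Naive depth integration of slopes p = dz/dx, q = dz/dy.

    Real systems use a least-squares (Poisson) solver; cumulative
    sums are enough to show the idea.
    """
    z_rows = np.cumsum(p, axis=1)
    z_cols = np.cumsum(q, axis=0)
    return 0.5 * (z_rows + z_cols)
```

The surface gradients (p, q) follow directly from the recovered normals as p = -n_x / n_z, q = -n_y / n_z; an omnidirectional design like the one described above additionally has to map these from the camera frame onto the curved sensing surface.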
Crack-tip deformation field measurements using coherent gradient sensing
A real-time, full-field lateral shearing interferometry technique, coherent gradient sensing (CGS), has recently been developed for investigating fracture in transparent and opaque solids. The resulting interference patterns are related to the mechanical fields by means of a first-order diffraction analysis. The method has been successfully applied to quasi-static and dynamic crack-tip deformation-field mapping in homogeneous and bimaterial fracture specimens.
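In CGS, the fringes can be read as contours of constant in-plane gradient of the optical path change. A commonly cited form of the governing relation from the standard CGS literature (the symbols below are a sketch, not taken from the abstract above) is

\[
\frac{\partial (\delta S)}{\partial x_\alpha} = \frac{m_\alpha\, p}{\Delta},
\qquad m_\alpha = 0,\ \pm 1,\ \pm 2,\ \ldots
\]

where \(\delta S\) is the change in optical path length, \(p\) is the pitch of the shearing gratings, \(\Delta\) is their separation, and \(m_\alpha\) is the fringe order for shearing along \(x_\alpha\). For a transparent plane-stress specimen, \(\delta S \approx c\, h\, (\sigma_{11} + \sigma_{22})\), with \(c\) a stress-optic constant and \(h\) the thickness, so the fringe pattern maps gradients of the first stress invariant near the crack tip.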
Visual Dexterity: In-hand Dexterous Manipulation from Depth
In-hand object reorientation is necessary for performing many dexterous
manipulation tasks, such as tool use in unstructured environments that remain
beyond the reach of current robots. Prior works built reorientation systems
that assume one or many of the following specific circumstances: reorienting
only specific objects with simple shapes, limited range of reorientation, slow
or quasistatic manipulation, the need for specialized and costly sensor suites,
simulation-only results, and other constraints which make the system infeasible
for real-world deployment. We overcome these limitations and present a general
object reorientation controller that is trained using reinforcement learning in
simulation and evaluated in the real world. Our system uses readings from a
single commodity depth camera to dynamically reorient complex objects by any
amount in real time. The controller generalizes to novel objects not used
during training. It is successful in the most challenging test: the ability to
reorient objects in the air held by a downward-facing hand that must counteract
gravity during reorientation. The results demonstrate that the policy transfer
from simulation to the real world can be accomplished even for dynamic and
contact-rich tasks. Lastly, our hardware only uses open-source components that
cost less than five thousand dollars. Such construction makes it possible to
replicate the work and democratize future research in dexterous manipulation.
Videos are available at:
https://taochenshh.github.io/projects/visual-dexterity
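The train-in-simulation, deploy-in-the-real-world recipe described above rests on one idea: optimize the controller against randomized dynamics so the result works across conditions it was never tuned for. Below is a toy caricature of that idea using the cross-entropy method on a 1-D "reorientation" task; the task, dynamics, and randomization range are invented for illustration and are not the paper's method or scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(gain, damping):
    """Toy 'reorientation': drive an angle error to zero.

    `damping` stands in for the unknown real-world dynamics that
    domain randomization varies between episodes.
    """
    theta, omega = 1.0, 0.0                 # initial error (rad), velocity
    cost = 0.0
    for _ in range(50):
        torque = -gain[0] * theta - gain[1] * omega   # linear feedback policy
        omega += 0.1 * (torque - damping * omega)
        theta += 0.1 * omega
        cost += theta ** 2
    return cost

def train_cem(iters=30, pop=32, elite=8):
    """Cross-entropy method over policy parameters.

    Each candidate is scored under a freshly randomized `damping`,
    so surviving gains must be robust across the whole range --
    the essence of sim-to-real domain randomization.
    """
    mean, std = np.zeros(2), np.ones(2)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, 2))
        costs = [rollout(s, damping=rng.uniform(0.1, 1.0)) for s in samples]
        best = samples[np.argsort(costs)[:elite]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean
```

The real system replaces the linear policy with a neural network over depth images and joint states, the toy dynamics with a physics simulator, and CEM with a reinforcement-learning algorithm, but the robustness-through-randomization structure is the same.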
- …