Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation
Intelligent interaction with the physical world requires perceptual abilities
beyond vision and hearing; vibrant tactile sensing is essential for autonomous
robots to dexterously manipulate unfamiliar objects or safely contact humans.
Therefore, robotic manipulators need high-resolution touch sensors that are
compact, robust, inexpensive, and efficient. The soft vision-based haptic
sensor presented herein is a miniaturized and optimized version of the
previously published sensor Insight. Minsight has the size and shape of a human
fingertip and uses machine learning methods to output high-resolution maps of
3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing
performance, with a mean absolute force error of 0.07 N and contact location
error of 0.6 mm across its surface area. Minsight's utility is shown in two
robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables
the robot to track the movements of a human finger based only on tactile data.
Second, the informative value of the sensor output is shown by detecting
whether a hard lump is embedded within a soft elastomer with an accuracy of
98%. These findings indicate that Minsight can give robots the detailed
fingertip touch sensing needed for dexterous manipulation and physical
human-robot interaction.
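The abstract does not detail the control law behind the finger-tracking task; a plausible minimal sketch is a proportional controller on the contact centroid of the per-pixel force map. All names, the map layout, and the controller itself are illustrative assumptions, not Minsight's actual implementation:

```python
import numpy as np

def force_centroid(force_map):
    """Contact centroid of an (H, W, 3) force-vector map, weighted by the
    normal-force magnitude (channel 2 assumed normal). Returns None if there
    is no appreciable contact."""
    normal = np.abs(force_map[..., 2])
    total = normal.sum()
    if total < 1e-9:
        return None
    ys, xs = np.mgrid[0:force_map.shape[0], 0:force_map.shape[1]]
    return np.array([(ys * normal).sum() / total,
                     (xs * normal).sum() / total])

def tracking_step(force_map, fingertip_pos, gain=0.1):
    """One closed-loop step: nudge the fingertip so the contact moves toward
    the center of the sensing surface, using only tactile data."""
    c = force_centroid(force_map)
    if c is None:
        return fingertip_pos                      # no contact: hold position
    center = np.array(force_map.shape[:2]) / 2.0
    error = c - center                            # pixel offset from center
    return fingertip_pos + gain * error
```

Run at the sensor's output rate (60 Hz here), this keeps the robot following a moving contact such as a human finger.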
A bio-inspired multi-functional tendon-driven tactile sensor and application in obstacle avoidance using reinforcement learning
This paper presents a new bio-inspired tactile sensor that is multi-functional and has contact areas of different sensitivity. The TacTop area is sensitive and is used for object classification when there is direct contact. On the other hand, the TacSide area is less sensitive and is used to localize side contact areas. By connecting tendons from the TacSide area to the TacTop area, the sensor is able to perform multiple detection functions using the same expression region. For the mixed contact signals collected from the expression region with numerous markers and pins, we build a modified DenseNet121 network that removes all fully connected layers and keeps the rest as a sub-network. The proposed model also contains a global average pooling layer with two branching networks to handle the different functions and provide accurate spatial translation of the extracted features. The experimental results demonstrate a high prediction accuracy of 98% for object perception and localization. Furthermore, the new tactile sensor is utilized for obstacle avoidance, where action skills are extracted from human demonstrations and an action dataset is then generated for reinforcement learning to guide robots toward correct responses after contact detection. To evaluate the effectiveness of the proposed framework, several simulations are performed in the MuJoCo environment.
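The described readout (fully connected layers removed, global average pooling feeding two branching heads) can be sketched in outline. The DenseNet121 backbone is elided here, and the linear heads, shapes, and weights are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_avg_pool(features):
    """Collapse (C, H, W) backbone feature maps to a (C,) vector by averaging
    over the spatial dimensions, as in a GAP layer."""
    return features.mean(axis=(1, 2))

class TwoBranchHead:
    """Hypothetical stand-in for the two branching networks: one branch for
    object classification (TacTop contact), one for side-contact localization
    (TacSide). Both read the same pooled feature vector."""

    def __init__(self, channels, n_classes, n_regions):
        self.W_cls = rng.standard_normal((n_classes, channels)) * 0.01
        self.W_loc = rng.standard_normal((n_regions, channels)) * 0.01

    def __call__(self, features):
        z = global_avg_pool(features)
        return self.W_cls @ z, self.W_loc @ z   # class logits, region logits
```

Sharing one backbone and splitting only at the head is what lets a single expression region serve both detection functions.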
Dense Tactile Force Estimation using GelSlim and inverse FEM
In this paper, we present a new version of the tactile sensor, GelSlim 2.0, with the capability to estimate the contact force distribution in real time. The sensor is vision-based and uses an array of markers to track deformations of a gel pad due to contact. A new hardware design makes the sensor more rugged and parametrically adjustable, and improves illumination. Leveraging the sensor's increased functionality, we propose to use the inverse finite element method (iFEM), a numerical method, to reconstruct the contact force distribution based on marker displacements. The sensor is able to provide the force distribution of contact with high spatial density. Experiments and comparison with ground truth show that the reconstructed force distribution is physically reasonable, with good accuracy.
[Figure: a sequence of Kendama manipulations with the corresponding displacement field (yellow) and force field (red). Video on YouTube: https://youtu.be/hWw9A0ZBZuU]
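The abstract does not specify the solver, but conceptually iFEM inverts a displacement-from-force relationship derived from the finite element model. A minimal sketch, assuming a precomputed, linearized compliance matrix A with u ≈ A f (the matrix and regularization scheme are assumptions, not the paper's formulation):

```python
import numpy as np

def reconstruct_forces(A, u, lam=1e-6):
    """Estimate nodal contact forces f from marker displacements u, given a
    hypothetical FEM compliance matrix A with u ≈ A f. Solves the regularized
    normal equations (A'A + lam*I) f = A'u; Tikhonov regularization stabilizes
    the inversion, which is typically ill-posed for dense force fields."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ u)
```

Because A is fixed by the sensor geometry, it can be assembled offline and the per-frame cost reduces to one linear solve, which is what makes real-time estimation plausible.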
Data-driven Tactile Sensing using Spatially Overlapping Signals
Providing robots with distributed, robust and accurate tactile feedback is a fundamental problem in robotics because of the large number of tasks that require physical interaction with objects. Tactile sensors can provide robots with information about the location of each point of contact with the manipulated object, an estimation of the contact forces applied (normal and shear) and even slip detection. Despite significant advances in touch and force transduction, tactile sensing is still far from ubiquitous in robotic manipulation. Existing methods for building touch sensors have proven difficult to integrate into robot fingers due to multiple challenges, including difficulty in covering multicurved surfaces, high wire count, or packaging constraints preventing their use in dexterous hands.
In this dissertation, we focus on the development of soft tactile systems that can be deployed over complex, three-dimensional surfaces with a low wire count and using easily accessible manufacturing methods. To this end, we present a general methodology called spatially overlapping signals. The key idea behind our method is to embed multiple sensing terminals in a volume of soft material which can be deployed over arbitrary, non-developable surfaces. Unlike a traditional taxel, these sensing terminals are not capable of measuring strain on their own. Instead, we take measurements across pairs of sensing terminals. Applying strain in the receptive field of a terminal pair should measurably affect the signal associated with it. As we embed multiple sensing terminals in this soft material, a significant overlap of these receptive fields occurs across the whole active sensing area, providing us with a very rich dataset characterizing the contact event. The use of an all-pairs approach, where all possible combinations of sensing-terminal pairs are used, maximizes the number of signals extracted while reducing the total number of wires for the overall sensor, which in turn facilitates its integration.
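The wire-count advantage of the all-pairs approach is easy to quantify: n terminals require only n wires but yield n(n-1)/2 pairwise signals. A short sketch (the terminal indexing is illustrative):

```python
from itertools import combinations

def all_pair_signals(n_terminals):
    """Every unordered pair of sensing terminals yields one measurable
    signal, so n terminals (and n wires) give n*(n-1)/2 overlapping
    measurements of the contact event."""
    return list(combinations(range(n_terminals), 2))

# e.g. 8 wires into the finger, but 28 distinct pair signals
pairs = all_pair_signals(8)
```

The signal count grows quadratically while wiring grows linearly, which is why the method scales to full finger coverage without a prohibitive wire harness.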
Building an analytical model for how this rich signal set relates to various contact events can be very challenging. Further, any such model would depend on knowing the exact locations of the terminals in the sensor, thus requiring very precise manufacturing. Instead, we build forward models of our sensors from data. We collect training data using a dataset of controlled indentations of known characteristics, directly learning the mapping between our signals and the variables characterizing a contact event. This approach allows for accessible, cheap manufacturing while enabling extensive coverage of curved surfaces. The concept of spatially overlapping signals can be realized using various transduction methods; we demonstrate sensors using piezoresistance, pressure transducers and optics. With piezoresistivity, we measure resistance values across various electrodes embedded in a carbon-nanotube-infused elastomer to determine the location of touch. Using commercially available pressure transducers embedded in various configurations inside a soft volume of rubber, we show it is possible to localize contacts across a curved surface. Finally, using optics, we measure light transport between LEDs and photodiodes inside the clear elastomer that makes up our sensor. Our optical sensors are able to detect both the location and depth of an indentation very accurately on both planar and multicurved surfaces.
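As a toy illustration of learning the signal-to-contact mapping directly from controlled indentations, one can use a nearest-neighbor regressor over recorded (signal, contact-variable) pairs. The learner, shapes, and variable names here are assumptions for illustration, not the dissertation's actual models:

```python
import numpy as np

class KNNForwardModel:
    """Minimal data-driven forward model: store (signal, contact-variable)
    pairs collected from indentations of known location and force, then
    predict a query's contact variables as the mean over the k training
    samples with the most similar signal vectors."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, signals, targets):
        self.X = np.asarray(signals, dtype=float)   # (N, n_signals)
        self.Y = np.asarray(targets, dtype=float)   # (N, n_contact_vars)
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        return self.Y[nearest].mean(axis=0)
```

Nothing in this model depends on knowing the terminal positions: the mapping is recovered entirely from data, which is what tolerates imprecise, low-cost manufacturing.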
Our Distributed Interleaved Signals for Contact via Optics, or D.I.S.C.O., Finger is the culmination of this methodology: a fully integrated, sensorized robot finger with a low wire count, designed for easy integration into dexterous manipulators. Our DISCO Finger can generally determine contact location with sub-millimeter accuracy, and contact force to within 10% (and often within 5%) of the true value, without the need for analytical models. While our data-driven method requires training data representative of the final operational conditions the system will encounter, we show our finger can be robust to novel contact scenarios where the shape of the indenter has not been seen during training. Moreover, the forward model that predicts contact location and applied normal force can be transferred to new fingers with minimal loss of performance, eliminating the need to collect training data for each individual finger. We believe that rich tactile information, in a highly functional form with limited blind spots and a simple integration path into complete systems, like we demonstrate in this dissertation, will prove to be an important enabler for data-driven complex robotic motor skills, such as dexterous manipulation.