Data-driven Tactile Sensing using Spatially Overlapping Signals
Providing robots with distributed, robust and accurate tactile feedback is a fundamental problem in robotics because of the large number of tasks that require physical interaction with objects. Tactile sensors can provide robots with the location of each point of contact with a manipulated object, an estimate of the applied contact forces (normal and shear) and even slip detection. Despite significant advances in touch and force transduction, tactile sensing is still far from ubiquitous in robotic manipulation. Existing touch sensors have proven difficult to integrate into robot fingers due to multiple challenges, including the difficulty of covering multicurved surfaces, high wire counts, and packaging constraints that prevent their use in dexterous hands.
In this dissertation, we focus on the development of soft tactile systems that can be deployed over complex, three-dimensional surfaces with a low wire count, using easily accessible manufacturing methods. To this end, we present a general methodology called spatially overlapping signals. The key idea behind our method is to embed multiple sensing terminals in a volume of soft material that can be deployed over arbitrary, non-developable surfaces. Unlike a traditional taxel, these sensing terminals are not capable of measuring strain on their own. Instead, we take measurements across pairs of sensing terminals: applying strain in the receptive field of a terminal pair should measurably affect the signal associated with it. As we embed multiple sensing terminals in the soft material, these receptive fields overlap significantly across the whole active sensing area, providing a very rich dataset characterizing the contact event. The use of an all-pairs approach, where all possible combinations of sensing terminal pairs are used, maximizes the number of signals extracted while reducing the total number of wires for the overall sensor, which in turn facilitates its integration.
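The wiring economics of the all-pairs scheme can be sketched in a few lines; the terminal count below is a hypothetical example, not a figure from the dissertation:

```python
from itertools import combinations

def all_pairs(terminals):
    """Enumerate every unordered pair of sensing terminals.

    With n terminals (and hence roughly n wires), the all-pairs
    scheme yields n*(n-1)/2 overlapping signals: the signal count
    grows quadratically while the wire count grows only linearly.
    """
    return list(combinations(terminals, 2))

# Hypothetical sensor with 8 terminals: 8 wires, 28 pairwise signals.
pairs = all_pairs(range(8))
print(len(pairs))  # 28
```

Doubling the terminals to 16 would roughly double the wires but quadruple the signals to 120, which is why the approach scales well for dense coverage.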
Building an analytical model for how this rich signal set relates to various contact events can be very challenging. Further, any such model would depend on knowing the exact locations of the terminals in the sensor, thus requiring very precise manufacturing. Instead, we build forward models of our sensors from data. We collect training data using a dataset of controlled indentations of known characteristics, directly learning the mapping between our signals and the variables characterizing a contact event. This approach allows for accessible, cheap manufacturing while enabling extensive coverage of curved surfaces. The concept of spatially overlapping signals can be realized using various transduction methods; we demonstrate sensors using piezoresistance, pressure transducers and optics. With piezoresistivity, we measure resistance values across various electrodes embedded in a carbon nanotube-infused elastomer to determine the location of touch. Using commercially available pressure transducers embedded in various configurations inside a soft volume of rubber, we show it is possible to localize contacts across a curved surface. Finally, using optics, we measure light transport between LEDs and photodiodes inside the clear elastomer that makes up our sensor. Our optical sensors accurately detect both the location and depth of an indentation on both planar and multicurved surfaces.
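The data-driven forward model can be sketched, under heavy simplification, as a nearest-neighbour lookup over the training indentations. All names and numbers here are hypothetical illustrations; the dissertation's actual models are learned regressors over far larger datasets:

```python
def predict_contact(signal, train_signals, train_contacts):
    """Toy forward model: return the contact parameters (here an
    (x, y) location) of the training indentation whose signal
    vector lies closest to the measured one."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(range(len(train_signals)),
               key=lambda i: sq_dist(signal, train_signals[i]))
    return train_contacts[best]

# Hypothetical training set: 3 indentations, 4 pairwise signals each.
signals = [[0.1, 0.9, 0.2, 0.0],
           [0.8, 0.1, 0.0, 0.3],
           [0.0, 0.2, 0.7, 0.6]]
locations = [(0.0, 5.0), (5.0, 0.0), (5.0, 5.0)]

# A new measurement close to the first training signal maps to its location.
print(predict_contact([0.09, 0.85, 0.25, 0.05], signals, locations))
# (0.0, 5.0)
```

The key property this illustrates is that no analytical terminal-position model is needed: the mapping from signals to contact variables comes entirely from the controlled-indentation data.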
Our Distributed Interleaved Signals for Contact via Optics, or DISCO, Finger is the culmination of this methodology: a fully integrated, sensorized robot finger with a low wire count, designed for easy integration into dexterous manipulators. The DISCO Finger can generally determine contact location with sub-millimeter accuracy, and contact force to within 10% (and often within 5%) of the true value, without the need for analytical models. While our data-driven method requires training data representative of the final operational conditions the system will encounter, we show that the finger can be robust to novel contact scenarios in which the shape of the indenter was not seen during training. Moreover, the forward model that predicts contact location and applied normal force can be transferred to new fingers with minimal loss of performance, eliminating the need to collect training data for each individual finger. We believe that rich tactile information in a highly functional form, with limited blind spots and a simple integration path into complete systems, as demonstrated in this dissertation, will prove to be an important enabler for data-driven complex robotic motor skills such as dexterous manipulation.
A Framework for Tumor Localization in Robot-Assisted Minimally Invasive Surgery
Manual palpation of tissue is frequently used in open surgery, e.g., for localization of tumors and buried vessels and for tissue characterization. The overall objective of this work is to explore how tissue palpation can be performed in Robot-Assisted Minimally Invasive Surgery (RAMIS) using the laparoscopic instruments conventionally employed in RAMIS. This thesis presents a framework in which a surgical tool is moved teleoperatively in a manner analogous to the repetitive pressing motion of a finger during manual palpation. We interpret the changes in parameters during this motion, such as the applied force and the resulting indentation depth, to accurately determine the variation in tissue stiffness. This approach requires the sensorization of the laparoscopic tool for force sensing. In our work, we have used a da Vinci needle driver sensorized in our lab at CSTAR for force sensing using Fiber Bragg Gratings (FBGs). A computer vision algorithm has been developed for 3D surgical tool-tip tracking using the da Vinci's stereo endoscope, which enables us to measure changes in surface indentation resulting from pressing the needle driver on the tissue. The proposed palpation framework is based on the hypothesis that the indentation depth is inversely proportional to the tissue stiffness when a constant pressing force is applied. This was validated in a telemanipulated setup using the da Vinci surgical system with a phantom in which artificial tumors were embedded to represent areas of different stiffnesses. The high-stiffness regions representing tumor and the low-stiffness regions representing healthy tissue showed average indentation depths of 5.19 mm and 10.09 mm, respectively, while maintaining a maximum force of 8 N during robot-assisted palpation. These indentation depth variations were then distinguished using the k-means clustering algorithm to classify groups of low and high stiffness, and the results were presented in a colour-coded map.
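The depth-based classification step can be sketched as a two-cluster k-means over indentation depths. The depth values below are illustrative stand-ins, not the experimental measurements:

```python
def kmeans_two_clusters(depths, iters=20):
    """Partition indentation depths (mm) into two groups: shallow
    presses (stiff tissue, candidate tumor) and deep presses
    (softer, healthy tissue), via plain two-centre k-means."""
    centres = [min(depths), max(depths)]
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for d in depths:
            # Assign each depth to its nearest centre (0 or 1).
            groups[abs(d - centres[0]) > abs(d - centres[1])].append(d)
        # Recompute each centre as the mean of its assigned depths.
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Illustrative depths: three stiff-region and three soft-region presses.
centres, (stiff, soft) = kmeans_two_clusters([5.0, 5.2, 5.4, 9.8, 10.1, 10.3])
print(stiff, soft)  # [5.0, 5.2, 5.4] [9.8, 10.1, 10.3]
```

Each clustered group can then be mapped back to its press locations to paint the colour-coded stiffness map described above.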
The unique feature of this framework is its use of a conventional laparoscopic tool and minimal redesign of the existing da Vinci surgical setup. Additional work includes a vision-based algorithm for tracking the motion of a tissue surface, such as that of the lung, resulting from respiratory and cardiac motion. The extracted motion information was analyzed to characterize the lung tissue stiffness based on the lateral strain variations as the surface inflates and deflates.
Learning to See Forces: Surgical Force Prediction with RGB-Point Cloud Temporal Convolutional Networks
Robotic surgery has been proven to offer clear advantages during surgical procedures; however, one of its major limitations is obtaining haptic feedback. Since it is often challenging to devise a hardware solution with accurate force feedback, we propose the use of "visual cues" to infer forces from tissue deformation. Endoscopic video is a passive sensor that is freely available, in the sense that any minimally invasive procedure already utilizes it. To this end, we employ deep learning to infer forces from video as an attractive low-cost and accurate alternative to typically complex and expensive hardware solutions. First, we demonstrate our approach in a phantom setting using the da Vinci Surgical System affixed with an OptoForce sensor. Second, we validate our method on an ex vivo liver organ. Our method results in a mean absolute error of 0.814 N in the ex vivo study, suggesting that it may be a promising alternative to hardware-based surgical force feedback in endoscopic procedures.
Comment: MICCAI 2018 workshop, CARE (Computer Assisted and Robotic Endoscopy)
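The temporal-convolution idea behind this force predictor can be sketched in miniature: a causal 1-D convolution over a per-frame feature sequence, so each force estimate depends only on the current and past frames. The scalar features and kernel weights below are hypothetical stand-ins for the learned RGB/point-cloud features and network weights:

```python
def causal_conv1d(features, kernel):
    """Causal 1-D convolution: out[t] = sum_j kernel[j] * features[t-j],
    with terms before the start of the sequence treated as zero, so no
    output ever depends on a future frame."""
    out = []
    for t in range(len(features)):
        acc = 0.0
        for j, w in enumerate(kernel):
            if t - j >= 0:
                acc += w * features[t - j]
        out.append(acc)
    return out

# An impulse input shows the causal receptive field: the kernel is
# smeared forward in time only, never into frames before the impulse.
print(causal_conv1d([1.0, 0.0, 0.0, 0.0], [0.5, 0.3, 0.2]))
# [0.5, 0.3, 0.2, 0.0]
```

Stacking such layers with increasing dilation is the standard way temporal convolutional networks grow their receptive field over long video clips.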