14 research outputs found

    Comparing Piezoresistive Substrates for Tactile Sensing in Dexterous Hands

While tactile skins have been shown to be useful for detecting collisions between a robotic arm and its environment, they have not been extensively used to improve robotic grasping and in-hand manipulation. We propose a novel sensor design for covering existing multi-fingered robot hands. We analyze the performance of four different piezoresistive materials using both fabric and anti-static foam substrates in benchtop experiments. We find that although the piezoresistive foam was designed as packing material and not as a sensing substrate, it performs comparably to fabrics designed specifically for this purpose. While these results demonstrate the potential of piezoresistive foams for tactile sensing applications, they do not fully characterize the efficacy of these sensors in robot manipulation. We therefore use a high-density foam substrate to develop a scalable tactile skin that can be attached to the palm of a robotic hand. We demonstrate several robotic manipulation tasks using this sensor to show its ability to reliably detect and localize contact, as well as to analyze contact patterns during grasping and transport tasks.
Comment: 10 figures, 8 pages, submitted to ICRA 202
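As a concrete illustration of how a single piezoresistive taxel is typically read out, the sketch below converts an ADC reading at the midpoint of a voltage divider into a taxel resistance and a contact decision. The supply voltage, reference resistor, ADC resolution, and threshold here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of reading one piezoresistive taxel through a voltage
# divider. All constants are illustrative, not taken from the paper.

V_SUPPLY = 3.3      # supply voltage across the divider (V), assumed
R_REF = 10_000.0    # fixed reference resistor in series with the taxel (ohms), assumed
ADC_MAX = 1023      # full-scale count of an assumed 10-bit ADC

def taxel_resistance(adc_counts: int) -> float:
    """Convert an ADC reading at the divider midpoint to taxel resistance."""
    v_out = V_SUPPLY * adc_counts / ADC_MAX
    # Divider relation: v_out = V_SUPPLY * R_REF / (R_taxel + R_REF)
    return R_REF * (V_SUPPLY - v_out) / max(v_out, 1e-6)

def contact_detected(adc_counts: int, r_threshold: float = 50_000.0) -> bool:
    """Piezoresistive materials drop in resistance under pressure,
    so a reading below the (assumed) threshold indicates contact."""
    return taxel_resistance(adc_counts) < r_threshold
```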

    Under Pressure: Learning to Detect Slip with Barometric Tactile Sensors

Despite the utility of tactile information, tactile sensors have yet to be widely deployed in industrial robotics settings -- part of the challenge lies in identifying slip and other key events from the tactile data stream. In this paper, we present a learning-based method to detect slip using barometric tactile sensors. Although these sensors have a low resolution, they have many other desirable properties, including high reliability and durability, a very slim profile, and a low cost. We are able to achieve slip detection accuracies of greater than 91% while being robust to the speed and direction of the slip motion. Further, we test our detector on two robot manipulation tasks involving common household objects and demonstrate successful generalization to real-world scenarios not seen during training. We show that barometric tactile sensing technology, combined with data-driven learning, is potentially suitable for complex manipulation tasks such as slip compensation.
Comment: Submitted to the RoboTac Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'21), Prague, Czech Republic, Sept 27 - Oct 1, 2021
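To make the learning-based detection concrete, here is a minimal sketch of one common formulation: classify short sliding windows of the multi-taxel pressure stream as slip or no-slip. The window length, classifier choice, and the synthetic stand-in data are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in data: (T, n_taxels) pressure stream plus per-sample
# slip labels; a real dataset would come from instrumented slip trials.
T, N_TAXELS, WINDOW = 2000, 8, 20
signals = rng.normal(size=(T, N_TAXELS)).cumsum(axis=0) * 0.01
labels = (rng.random(T) < 0.1).astype(int)

def make_windows(signals, labels):
    X, y = [], []
    for t in range(signals.shape[0] - WINDOW):
        X.append(signals[t:t + WINDOW].ravel())    # flatten window to one feature vector
        y.append(int(labels[t:t + WINDOW].any()))  # window is 'slip' if any sample slips
    return np.array(X), np.array(y)

X, y = make_windows(signals, labels)
clf = RandomForestClassifier(n_estimators=100).fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```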

    SwingBot: Learning Physical Features from In-hand Tactile Exploration for Dynamic Swing-up Manipulation

Several robot manipulation tasks are extremely sensitive to variations in the physical properties of the manipulated objects. One such task is manipulating objects by using gravity or arm accelerations, which increases the importance of information about mass, center of mass, and friction. We present SwingBot, a robot that is able to learn the physical features of a held object through tactile exploration. Two exploration actions (tilting and shaking) provide the tactile information used to create a physical feature embedding space. With this embedding, SwingBot is able to predict the swing angle achieved by a robot performing dynamic swing-up manipulations on a previously unseen object. Using these predictions, it is able to search for the optimal control parameters for a desired swing-up angle. We show that with the learned physical features our end-to-end self-supervised learning pipeline is able to substantially improve the accuracy of swinging up unseen objects. We also show that objects with similar dynamics are closer to each other in the embedding space and that the embedding can be disentangled into values of specific physical properties.
Comment: IROS 2020
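The two-stage idea (embed tactile exploration readings, then predict swing angle from embedding plus control parameters and search for the best parameters) can be sketched as below. Network sizes, the random parameter search, and all tensor dimensions are illustrative assumptions, not SwingBot's actual architecture.

```python
import torch
import torch.nn as nn

class PhysicalFeatureEncoder(nn.Module):
    """Maps a flattened tilt+shake tactile sequence to a feature embedding."""
    def __init__(self, in_dim=256, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
    def forward(self, x):
        return self.net(x)

class SwingPredictor(nn.Module):
    """Predicts the achieved swing angle from embedding + control parameters."""
    def __init__(self, emb_dim=16, ctrl_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim + ctrl_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, emb, ctrl):
        return self.net(torch.cat([emb, ctrl], dim=-1))

def best_controls(emb, predictor, target_angle, n_samples=1000):
    """Random search over control parameters for the desired swing angle."""
    ctrl = torch.rand(n_samples, 3)                  # candidate parameter sets
    err = (predictor(emb.expand(n_samples, -1), ctrl)
           - target_angle).abs().squeeze(-1)
    return ctrl[err.argmin()]

emb = PhysicalFeatureEncoder()(torch.rand(1, 256))   # embed one exploration run
ctrl = best_controls(emb, SwingPredictor(), target_angle=1.0)
```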

    Grasp Stability Assessment Through Attention-Guided Cross-Modality Fusion and Transfer Learning

Extensive research has been conducted on assessing grasp stability, a crucial prerequisite for achieving optimal grasping strategies, including the minimum-force grasping policy. However, existing works employ basic feature-level fusion techniques to combine visual and tactile modalities, resulting in inadequate utilization of complementary information and an inability to model interactions between unimodal features. This work proposes an attention-guided cross-modality fusion architecture to comprehensively integrate visual and tactile features. The model mainly comprises convolutional neural networks (CNNs), self-attention, and cross-attention mechanisms. In addition, most existing methods collect datasets from real-world systems, which is time-consuming and costly, and the resulting datasets are comparatively limited in size. This work establishes a robotic grasping system in physics simulation to collect a multimodal dataset. To address the sim-to-real transfer gap, we propose a migration strategy encompassing domain randomization and domain adaptation techniques. The experimental results demonstrate that the proposed fusion framework achieves markedly enhanced prediction performance (by approximately 10%) compared to other baselines. Moreover, our findings suggest that the trained model can be reliably transferred to real robotic systems, indicating its potential to address real-world challenges.
Comment: Accepted by IROS 202
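The general shape of such a CNN + self-attention + cross-attention fusion model can be sketched as follows. The encoder stubs, token counts, and head sizes are assumptions for illustration; the paper's actual architecture will differ in its details.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d=128, heads=4):
        super().__init__()
        self.vis_enc = nn.Conv2d(3, d, kernel_size=8, stride=8)   # visual CNN stub
        self.tac_enc = nn.Conv2d(1, d, kernel_size=4, stride=4)   # tactile CNN stub
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.head = nn.Linear(d, 2)  # stable / unstable logits

    def forward(self, rgb, tactile):
        v = self.vis_enc(rgb).flatten(2).transpose(1, 2)      # (B, Nv, d) visual tokens
        t = self.tac_enc(tactile).flatten(2).transpose(1, 2)  # (B, Nt, d) tactile tokens
        v, _ = self.self_attn(v, v, v)           # refine within the visual modality
        t2v, _ = self.cross_attn(t, v, v)        # tactile queries attend to vision
        return self.head(t2v.mean(dim=1))        # pool fused tokens, classify

logits = CrossModalFusion()(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 16, 16))
```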

    Encouraging and Detecting Preferential Incipient Slip for Use in Slip Prevention in Robot-Assisted Surgery

Robotic surgical platforms have helped to improve minimally invasive surgery; however, limitations in their force feedback and force control can result in undesirable tissue trauma or tissue slip events. In this paper, we investigate a sensing method for the early detection of slip events when grasping soft tissues, which would allow surgical robots to take mitigating action to prevent tissue slip and maintain stable grasp control while minimising the applied gripping force, thereby reducing the probability of trauma. The developed sensing concept utilises a curved grasper face to create areas of high and low normal, and thus frictional, force. In the areas of low normal force, there is a higher probability that the grasper face will slip against the tissue. If the grasper face is separated into a series of independently movable islands, then by tracking their displacement it is possible to identify when the areas of low normal force first start to slip while the remainder of the tissue is still held securely. The system was evaluated through simulated grasping and retraction of tissue under conditions representative of surgical practice, using silicone tissue simulants and porcine liver samples. It successfully detected slip before gross slip occurred, with success rates of 100% and 77% for the tissue simulant and porcine liver samples, respectively. This research demonstrates the efficacy of this sensing method and the associated sensor system for detecting tissue slip events during surgical grasping and retraction.
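The detection logic reduces to a simple comparison across the islands: incipient slip is flagged when islands in the low normal-force regions start moving while islands in the high normal-force regions still hold. The sketch below illustrates this; the displacement threshold and island layout are illustrative assumptions, not values from the paper.

```python
import numpy as np

SLIP_DISP = 0.2  # mm of island travel treated as local slip (assumed)

def incipient_slip(displacements: np.ndarray, low_force_idx, high_force_idx) -> bool:
    """displacements: per-island tangential travel since grasp (mm).
    Incipient slip = low normal-force islands have started moving
    while high normal-force islands still hold the tissue securely."""
    low_slipping = np.any(displacements[low_force_idx] > SLIP_DISP)
    high_holding = np.all(displacements[high_force_idx] <= SLIP_DISP)
    return low_slipping and high_holding

# Example: islands 0-1 sit at the low-pressure edges, 2-4 near the apex.
d = np.array([0.35, 0.10, 0.02, 0.01, 0.03])
print(incipient_slip(d, low_force_idx=[0, 1], high_force_idx=[2, 3, 4]))  # True
```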

    Grip Stabilization through Independent Finger Tactile Feedback Control

Grip force control during robotic in-hand manipulation is usually modeled as a monolithic task, where complex controllers consider the placement of all fingers and the contact states between each finger and the gripped object in order to compute the necessary forces to be applied by each finger. Such approaches normally rely on object and contact models and do not generalize well to novel manipulation tasks. Here, we propose a modular grip stabilization method based on a proposition that explains how humans achieve grasp stability. In this biomimetic approach, independent tactile grip stabilization controllers ensure that slip does not occur locally at the engaged robot fingers. Local slip is predicted from the tactile signals of each fingertip sensor (the BioTac and BioTac SP by SynTouch). We show that stable grasps emerge without any form of central communication when such independent controllers are engaged in the control of multi-digit robotic hands. The resulting grasps are resistant to external perturbations while ensuring stable grips on a wide variety of objects.
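A minimal sketch of one such independent controller is given below: each finger raises its normal force when its own tactile signal predicts local slip and slowly relaxes otherwise, with no communication between fingers. The gains, force limits, and the names `predict_slip` and `tactile_stream` are illustrative assumptions, not the paper's controller.

```python
F_MIN, F_MAX = 0.5, 15.0    # allowed grip force range (N), assumed
GAIN_UP, DECAY = 1.5, 0.99  # force step on predicted slip / relaxation factor, assumed

def finger_controller(force: float, slip_predicted: bool) -> float:
    """One control tick for a single finger; runs independently per digit."""
    if slip_predicted:
        force += GAIN_UP   # react locally to predicted slip
    else:
        force *= DECAY     # relax toward a minimal stable grip
    return min(max(force, F_MIN), F_MAX)

# Each finger runs its own loop on its own sensor stream (hypothetical names):
# force[i] = finger_controller(force[i], predict_slip(tactile_stream[i]))
```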

    The Role of Stereopsis in the Control of Grasp Forces during Prehension

Background: Binocular viewing is associated with superior prehensile performance, which is particularly evident in the latter part of the reach as the hand approaches and makes contact with the target object. However, the visuomotor mechanisms through which binocular vision serves prehensile performance remain unclear. The present study was designed to investigate the role of stereopsis in the planning and control of grasping using outcome measures that reflect predictive control. It was hypothesized that binocular viewing would be associated with more efficient grasp execution because stereoacuity provides more accurate sensory input about the object's material properties for planning the grip forces needed to successfully lift the target object. When binocular vision is reduced or unavailable, predictive control of grasping will be reduced, and subjects will have to rely on somatosensory feedback to successfully execute the grasp.

Methods: 20 healthy participants (17-35 years, 11 male) with normal vision were recruited. Subjects performed a precision reach-to-grasp task which required them to reach, grasp, and transport a bead (~2 cm in diameter) to a specified location. Subjects were instructed to perform the task as fast as possible under the following viewing conditions, randomized in blocks: binocular, monocular, and two conditions with reduced stereoacuity (200 arcsec and 800 arcsec).

Results: The viewing condition had a greater influence on the grasp phase than on the reach and transport phase. Specifically, monocular viewing was associated with a 36% increase in post-contact time, a 29% decrease in grip force 50 ms after object contact, and a 30% increase in grasp errors relative to binocular viewing. In contrast, parameters of the reach and transport phase showed only a 3-8% reduction in performance. Grasp performance was similarly disrupted during binocular viewing with reduced stereoacuity, whereby a reduction in stereoacuity was associated with a proportional reduction in grasp performance. Notably, grip force at the time of object lift-off was comparable across all viewing conditions.

Conclusion: The results demonstrate that binocular viewing contributes significantly more to the grasp phase than to the reach and transport phase. In addition, the results suggest that stereopsis provides important sensory information which enables the central nervous system to engage in predictive control of grasp forces. When binocular disparity information is reduced or absent, subjects take a more cautious approach to the grasp and make more errors (i.e., collisions followed by readjustments). Overall, findings from the current study indicate that stereopsis provides important sensory input for the predictive control of grasping, and a progressive reduction in stereopsis is associated with increased uncertainty, which results in a greater reliance on somatosensory feedback control.
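For readers unfamiliar with these prehension measures, the sketch below shows how two of the reported quantities could be computed from sampled force traces: grip force 50 ms after initial contact, and post-contact time (contact to lift-off). The sampling rate, thresholds, and event definitions are illustrative assumptions, not the study's analysis code.

```python
import numpy as np

FS = 1000         # sampling rate (Hz), assumed
CONTACT_F = 0.1   # grip force threshold marking first contact (N), assumed

def grasp_measures(grip_force: np.ndarray, load_force: np.ndarray, weight: float):
    """grip_force/load_force: synchronized traces (N); weight: object weight (N)."""
    contact = int(np.argmax(grip_force > CONTACT_F))  # first sample above threshold
    liftoff = int(np.argmax(load_force > weight))     # load force exceeds object weight
    idx_50ms = min(contact + int(0.05 * FS), len(grip_force) - 1)
    return {
        "grip_force_50ms": float(grip_force[idx_50ms]),
        "post_contact_time_s": (liftoff - contact) / FS,
    }
```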