907 research outputs found

    Force/Torque Sensing for Soft Grippers using an External Camera

    Robotic manipulation can benefit from wrist-mounted force/torque (F/T) sensors, but conventional F/T sensors can be expensive, difficult to install, and damaged by high loads. We present Visual Force/Torque Sensing (VFTS), a method that visually estimates the 6-axis F/T measurement that would be reported by a conventional F/T sensor. In contrast to approaches that sense loads using internal cameras placed behind soft exterior surfaces, our approach uses an external camera with a fisheye lens that observes a soft gripper. VFTS includes a deep learning model that takes a single RGB image as input and outputs a 6-axis F/T estimate. We trained the model with sensor data collected while teleoperating a robot (Stretch RE1 from Hello Robot Inc.) to perform manipulation tasks. VFTS outperformed F/T estimates based on motor currents, generalized to a novel home environment, and supported three autonomous tasks relevant to healthcare: grasping a blanket, pulling a blanket over a manikin, and cleaning a manikin's limbs. VFTS also performed well with a manually operated pneumatic gripper. Overall, our results suggest that an external camera observing a soft gripper can perform useful visual force/torque sensing for a variety of manipulation tasks.

    Comment: Accepted for presentation at the 2023 IEEE International Conference on Robotics and Automation (ICRA).
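
    As a concrete illustration of the model described above, here is a minimal sketch of an image-to-wrench regressor. The backbone choice, input size, and training step are assumptions for illustration, not the authors' released implementation:

```python
# Minimal VFTS-style sketch (assumptions, not the authors' code): a ResNet-18
# backbone regresses a 6-axis force/torque vector from a single RGB image of
# the soft gripper, supervised by readings from a conventional wrist F/T sensor.
import torch
import torch.nn as nn
import torchvision.models as models

class VFTSSketch(nn.Module):
    """RGB image -> [Fx, Fy, Fz, Tx, Ty, Tz] (N and Nm)."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 6)

    def forward(self, rgb):              # rgb: (B, 3, H, W)
        return self.backbone(rgb)

model = VFTSSketch()
frames = torch.randn(2, 3, 224, 224)     # resized fisheye camera frames
wrench = torch.randn(2, 6)               # ground-truth F/T sensor readings
loss = nn.functional.mse_loss(model(frames), wrench)
loss.backward()                          # standard supervised regression step
```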

    Visual Contact Pressure Estimation for Grippers in the Wild

    Sensing contact pressure applied by a gripper can benefit autonomous and teleoperated robotic manipulation, but adding tactile sensors to a gripper's surface can be difficult or impractical. If a gripper visibly deforms, contact pressure can be visually estimated using images from an external camera that observes the gripper. While researchers have demonstrated this capability in controlled laboratory settings, prior work has not addressed challenges associated with visual pressure estimation in the wild, where lighting, surfaces, and other factors vary widely. We present a model and associated methods that enable visual pressure estimation under widely varying conditions. Our model, Visual Pressure Estimation for Robots (ViPER), takes an image from an eye-in-hand camera as input and outputs an image representing the pressure applied by a soft gripper. Our key insight is that force/torque sensing can be used as a weak label to efficiently collect training data in settings where pressure measurements would be difficult to obtain. When trained on this weakly labeled data combined with fully labeled data that includes pressure measurements, ViPER outperforms prior methods, enables precision manipulation in cluttered settings, and provides accurate estimates for unseen conditions relevant to in-home use.

    Comment: Accepted for presentation at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023).
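
    The key insight above lends itself to a short sketch. The loss below (names, shapes, and the per-pixel area constant are assumptions, not the paper's code) supervises a pressure-map network densely when a ground-truth pressure image is available, and only in aggregate, through a wrist force measurement, when the frame is weakly labeled:

```python
# Illustrative ViPER-style mixed supervision (a sketch under assumptions).
import torch
import torch.nn.functional as F

def mixed_pressure_loss(pred_pressure, pressure_gt=None, net_force=None,
                        pixel_area=1e-6):
    """pred_pressure: (B, 1, H, W) predicted pressure in Pa.
    pressure_gt: same shape, from a pressure-sensing array (fully labeled).
    net_force: (B,) normal force in N from an F/T sensor (weakly labeled)."""
    if pressure_gt is not None:
        return F.mse_loss(pred_pressure, pressure_gt)     # dense supervision
    # Weak label: integrating the predicted pressure over the image should
    # reproduce the measured force (pixel_area is a hypothetical constant).
    pred_force = (pred_pressure * pixel_area).sum(dim=(1, 2, 3))
    return F.mse_loss(pred_force, net_force)
```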

    ForceSight: Text-Guided Mobile Manipulation with Visual-Force Goals

    We present ForceSight, a system for text-guided mobile manipulation that predicts visual-force goals using a deep neural network. Given a single RGBD image combined with a text prompt, ForceSight determines a target end-effector pose in the camera frame (kinematic goal) and the associated forces (force goal). Together, these two components form a visual-force goal. Prior work has demonstrated that deep models outputting human-interpretable kinematic goals can enable dexterous manipulation by real robots. Forces are critical to manipulation, yet have typically been relegated to lower-level execution in these systems. When deployed on a mobile manipulator equipped with an eye-in-hand RGBD camera, ForceSight performed tasks such as precision grasps, drawer opening, and object handovers with an 81% success rate in unseen environments with object instances that differed significantly from the training data. In a separate experiment, relying exclusively on visual servoing and ignoring force goals dropped the success rate from 90% to 45%, demonstrating that force goals can significantly enhance performance. The appendix, videos, code, and trained models are available at https://force-sight.github.io/
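
    The released code is linked above; the toy sketch below only illustrates the interface implied by the abstract, with all layer sizes and the 7-DoF pose parameterization assumed: one network consumes an RGBD image plus a text embedding and emits a kinematic goal and a force goal.

```python
# Toy ForceSight-style sketch (not the released model at
# https://force-sight.github.io/): RGBD + text embedding -> visual-force goal.
import torch
import torch.nn as nn

class VisualForceGoalSketch(nn.Module):
    def __init__(self, text_dim=512):
        super().__init__()
        self.image_enc = nn.Sequential(            # tiny 4-channel RGBD encoder
            nn.Conv2d(4, 32, 5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(64 + text_dim, 256)
        self.kinematic_head = nn.Linear(256, 7)    # xyz + quaternion, camera frame
        self.force_head = nn.Linear(256, 3)        # target applied force (N)

    def forward(self, rgbd, text_emb):
        h = torch.relu(self.fuse(torch.cat([self.image_enc(rgbd), text_emb], -1)))
        return self.kinematic_head(h), self.force_head(h)

goal_pose, goal_force = VisualForceGoalSketch()(
    torch.randn(1, 4, 224, 224), torch.randn(1, 512))
```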

    The acute effects of intracomplex rest intervals on rate of force development and ballistic performance responses following strength-power complex training in talent-identified adolescent rugby players

    This study investigated the effects of a strength-power complex on subsequent ballistic activity (BA) performance responses across a profile of jumps in talent-identified adolescent rugby players. Rate of force development (RFD) and BA performance responses were recorded in 22 participants over four intracomplex rest intervals (ICRI) (15 s, 30 s, 45 s, 60 s) following a complex of 3 repetitions of back squat at 80% 1RM and 7 countermovement jumps (CMJs) in a randomised, counterbalanced design. Within-subjects repeated-measures ANOVAs were conducted on peak rate of force development (PRFD), time to peak rate of force development (TPRFD), peak force (PF), and time to peak force (TPF). Confidence limits were set at ±90%, and effect size across the sample (partial η²) was calculated across P1-P4 for all jump profiles. No significant effects were observed across jump profiles or ICRI, confirming that RFD and BA performance responses were maintained across all jump profiles and each ICRI. In contrast to previous research, this suggests that minimal ICRIs of 15 s, 30 s, 45 s, and 60 s following strength-power complex training are a practical, time-efficient means of maintaining RFD and BA performance responses across seven-jump profiles, which has important implications for practical coaching environments.
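
    For readers unfamiliar with the analysis, the sketch below reproduces its shape on synthetic numbers: a within-subjects repeated-measures ANOVA testing whether PRFD differs across the four ICRIs (all values here are fabricated for illustration; a non-significant F-test would mirror the reported finding):

```python
# Repeated-measures ANOVA sketch on synthetic data (not the study's data).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(22), 4),          # 22 players x 4 conditions
    "icri": np.tile(["15s", "30s", "45s", "60s"], 22),
    "prfd": rng.normal(9000, 1500, size=88),         # synthetic PRFD (N/s)
})

# One observation per subject per ICRI level, as the design requires.
result = AnovaRM(df, depvar="prfd", subject="subject", within=["icri"]).fit()
print(result)    # F-statistic and p-value for the ICRI effect
```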

    A Polymorphism in the α4 Nicotinic Receptor Gene (Chrna4) Modulates Enhancement of Nicotinic Receptor Function by Ethanol

    Several studies indicate that ethanol enhances the activity of α4β2 nicotinic acetylcholine receptors (nAChR). Our laboratory has identified a polymorphism in the α4 gene that results in the substitution of an alanine (A) for threonine (T) at amino acid position 529 in the second intracellular loop of the α4 protein. Mouse strains expressing the A variant have, in general, greater nAChR-mediated ⁸⁶Rb⁺ efflux in response to nicotine than strains with the T variant. However, the possibility of the polymorphism modulating the effects of ethanol on the ⁸⁶Rb⁺ efflux response has not been investigated. Methods: We used the ⁸⁶Rb⁺ efflux method to study the acute effects of ethanol on the function of the α4β2 nAChR in the thalamus in six different mouse strains. Experiments were also performed on tissue samples taken from F2 intercross animals. The F2 animals were derived from A/J mice crossed with a substrain of C57BL/6J mice that carried a null mutation for the gene encoding the β2 nAChR subunit. Results: In strains carrying the A polymorphism (A/J, AKR/J, C3H/Ibg), coapplication of ethanol (10–100 mM) with nicotine (0.03–300 μM) increased maximal ion flux compared with nicotine alone, with no effect on agonist potency. In contrast, ethanol had little effect on the nicotine concentration-response curve in tissue prepared from strains carrying the T polymorphism (Balb/Ibg, C57BL/6J, C58/J). Experiments with the F2 hybrids demonstrated that one copy of the A polymorphism was sufficient to produce a significant enhancement of nAChR function by ethanol (50 mM) in animals that were also β2+/+. Ethanol had no effect on nicotine concentration-response curves in T/T β2+/+ animals. Conclusions: The results suggest that the A/T polymorphism influences the initial sensitivity of the α4β2 nAChR to ethanol.
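
    The concentration-response analysis implied above is conventionally a Hill-equation fit; the sketch below shows that fit on synthetic numbers (all values fabricated for illustration). Ethanol enhancement of the A variant would appear as a larger fitted Emax with an unchanged EC50, i.e., greater maximal ion flux with no change in agonist potency:

```python
# Hill-equation fit to a synthetic nicotine concentration-response curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    """Response as a function of agonist concentration (Hill equation)."""
    return emax * conc**n / (ec50**n + conc**n)

conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])  # uM
efflux = hill(conc, emax=100.0, ec50=3.0, n=1.2)        # synthetic 86Rb+ efflux
efflux += np.random.default_rng(1).normal(0, 3, conc.size)

(emax, ec50, n), _ = curve_fit(hill, conc, efflux, p0=[100.0, 1.0, 1.0])
print(f"Emax={emax:.1f}, EC50={ec50:.2f} uM, Hill n={n:.2f}")
```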

    Visual Estimation of Fingertip Pressure on Diverse Surfaces using Easily Captured Data

    People often use their hands to make contact with the world and apply pressure. Machine perception of this important human activity could be widely applied. Prior research has shown that deep models can estimate hand pressure based on a single RGB image. Yet evaluations have been limited to controlled settings, since performance relies on training data with high-resolution pressure measurements that are difficult to obtain. We present a novel approach that enables diverse data to be captured with only an RGB camera and a cooperative participant. Our key insight is that people can be prompted to perform actions that correspond with categorical labels describing contact pressure (contact labels), and that the resulting weakly labeled data can be used to train models that perform well under varied conditions. We demonstrate the effectiveness of our approach by training on a novel dataset with 51 participants making fingertip contact with instrumented and uninstrumented objects. Our network, ContactLabelNet, dramatically outperforms prior work, performs well under diverse conditions, and matches or exceeds the performance of human annotators.
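
    The prompting idea above can be sketched as a mixed-supervision objective (the names, label vocabulary, and shapes below are assumptions, not the released ContactLabelNet): instrumented frames carry full pressure maps, while uninstrumented frames carry only the prompted categorical contact label.

```python
# Mixed full/weak supervision sketch for contact-pressure estimation.
import torch
import torch.nn.functional as F

# Hypothetical contact-label vocabulary: 0 = no contact, 1 = light, 2 = firm.
def mixed_contact_loss(pred_pressure, pred_logits,
                       pressure_gt=None, contact_label=None):
    """pred_pressure: (B, 1, H, W) regressed map; pred_logits: (B, 3) scores."""
    loss = torch.zeros((), device=pred_logits.device)
    if pressure_gt is not None:        # instrumented object: dense pressure map
        loss = loss + F.mse_loss(pred_pressure, pressure_gt)
    if contact_label is not None:      # uninstrumented: prompted category only
        loss = loss + F.cross_entropy(pred_logits, contact_label)
    return loss
```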