
    Holdable Haptic Device for 4-DOF Motion Guidance

    Hand-held haptic devices can allow for greater freedom of motion and larger workspaces than traditional grounded haptic devices. They can also provide more compelling haptic sensations to the users' fingertips than many wearable haptic devices, because reaction forces can be distributed over a larger area of skin, far from the stimulation site. This paper presents a hand-held kinesthetic gripper that provides guidance cues in four degrees of freedom (DOF). 2-DOF tangential forces on the thumb and index finger combine to create cues to translate or rotate the hand. We demonstrate the device's capabilities in a three-part user study. First, users moved their hands in response to haptic cues before receiving instruction or training. Then, they trained on cues in eight directions in a forced-choice task. Finally, they repeated the first part, now knowing what each cue was intended to convey. Users were able to discriminate each cue over 90% of the time. Users moved correctly in response to the guidance cues both before and after the training and indicated that the cues were easy to follow. The results show promise for holdable kinesthetic devices in haptic feedback and guidance for applications such as virtual reality, medical training, and teleoperation.
    Comment: Submitted to IEEE World Haptics Conference 201
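
    The abstract does not spell out how a commanded hand motion maps to the two fingertip forces. A minimal sketch of one plausible scheme, assuming each fingertip tactor can exert a 2-DOF tangential force and that common-mode forces cue translation while differential forces cue rotation, is:

```python
import numpy as np

# Hypothetical sketch, not the paper's mapping: combine a 2-DOF translation
# cue and a 2-DOF rotation cue into tangential forces on the thumb and
# index finger. Common-mode force -> translation; differential -> rotation.

def cue_forces(translate_xy, rotate_xy, k_t=1.0, k_r=1.0):
    """Return (thumb_force, index_force), each a 2-vector in N."""
    t = k_t * np.asarray(translate_xy, dtype=float)  # common-mode component
    r = k_r * np.asarray(rotate_xy, dtype=float)     # differential component
    return t + r, t - r

# Example: a pure rotation cue produces equal and opposite fingertip forces.
thumb, index = cue_forces([0.0, 0.0], [0.5, 0.0])
print(thumb, index)  # [0.5 0.] [-0.5 -0.]
```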

    Configuration and Fabrication of Preformed Vine Robots

    Vine robots are a class of soft continuum robots that grow via tip eversion, allowing them to move their tips without relying on reaction forces from the environment. Constructed from compliant materials such as fabric and thin, flexible plastic, these robots can grow to many times their original length under fluidic pressure. They can be mechanically programmed, or preformed, to follow a desired path during growth by changing the structure of their body prior to deployment. We present a model for fabricating preformed vine robots with discrete bends. We apply this model across combinations of three fabrication methods and two materials. One fabrication method, taping folds into the robot body, is from the literature. The other two methods, welding folds and connecting fasteners embedded in the robot body, are novel. Measurements show the ability of the resulting vine robots to follow a desired path and show that the choice of fabrication method has a significant impact. Results include bend-angle errors as small as 0.12 degrees and segment-length errors as small as 0.36 mm. The required growth pressure and average growth speed of these preformed vine robots ranged from 11.5 to 23.7 kPa and 3.75 to 10 cm/s, respectively. These results validate the use of preformed vine robots for deployment along known paths and serve as a guide for choosing a fabrication method and material combination based on the specific needs of the task.
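
    The paper's fabrication model is not reproduced in the abstract. A hypothetical back-of-the-envelope geometric estimate of a single discrete bend, assuming a fold (tape, weld, or fastener) of width w effectively shortens one side of an inflated tube of diameter d, is:

```python
import math

# Hypothetical estimate, not the paper's model: if a fold removes length w
# from one side of an inflated tube of diameter d, and the tube kinks toward
# that side with the outer wall unchanged, the bend angle is roughly
# w / d radians.

def bend_angle_deg(fold_width_mm, tube_diameter_mm):
    """Estimated discrete bend angle (degrees) for a single fold."""
    return math.degrees(fold_width_mm / tube_diameter_mm)

# Example: a 10 mm fold on a 50 mm diameter tube -> about 11.5 degrees.
print(f"{bend_angle_deg(10, 50):.1f} deg")
```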

    Quantifying perception of nonlinear elastic tissue models using multidimensional scaling

    Simplified soft tissue models used in surgical simulations cannot perfectly reproduce all material behaviors. In particular, many tissues exhibit the Poynting effect, which produces normal forces during shearing of tissue and is observed only in nonlinear elastic material models. To investigate and quantify the role of the Poynting effect in material discrimination, we performed a multidimensional scaling (MDS) study. Participants were presented with several pairs of shear and normal forces generated by a haptic device during interaction with virtual soft objects and were asked to rate the similarity between the forces felt. The selection of the material parameters, and thus the magnitude of the shear and normal forces, was based on a pilot study conducted prior to the MDS experiment. For nonlinear elastic tissue models exhibiting the Poynting effect, the MDS analysis indicated that both shear and normal forces affect user perception.
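
    As a sketch of the analysis step, pairwise similarity ratings like these are commonly embedded with metric MDS. The following uses scikit-learn on an illustrative similarity matrix, not the study's actual data:

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative similarity ratings (1 = identical, 0 = very different) for
# four hypothetical virtual materials, converted to dissimilarities for MDS.
similarity = np.array([
    [1.0, 0.8, 0.3, 0.2],
    [0.8, 1.0, 0.4, 0.3],
    [0.3, 0.4, 1.0, 0.7],
    [0.2, 0.3, 0.7, 1.0],
])
dissimilarity = 1.0 - similarity

# Embed into 2-D so candidate perceptual dimensions (e.g., shear force and
# normal force magnitude) can be inspected as axes of the recovered layout.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)
```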

    Observations and models for needle-tissue interactions

    The asymmetry of a bevel-tip needle causes the needle to bend naturally when it is inserted into soft tissue. In this study we present a mechanics-based model that calculates the deflection of a needle embedded in an elastic medium. The model design was guided by microscopic observations of several needle-gel interactions, which were used to characterize the interactions at the bevel tip and along the needle shaft. The energy-based model formulation incorporates tissue-specific parameters such as rupture toughness, nonlinear material elasticity, and interaction stiffness, as well as the needle's geometric and material properties. Simulation results follow trends in deflection and radius of curvature similar to those observed in macroscopic experimental studies of a robot-driven needle interacting with different kinds of gels. These results contribute to a mechanics-based model of robotic needle steering, extending previous work on kinematic models.
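
    For context, the kinematic picture that this mechanics-based model extends treats the inserted needle as following a circular arc of constant curvature. A minimal sketch, with the radius of curvature taken as a given tissue- and needle-specific parameter rather than predicted from mechanics, is:

```python
import math

# Sketch of the constant-curvature kinematic approximation: a bevel-tip
# needle inserted a distance L is assumed to travel along a circular arc of
# radius r, so the lateral tip deflection has a closed form.

def tip_deflection(insertion_depth, radius_of_curvature):
    """Lateral deflection of the needle tip on a circular arc (same units)."""
    L, r = insertion_depth, radius_of_curvature
    theta = L / r                       # arc angle swept by the needle
    return r * (1.0 - math.cos(theta))  # lateral offset of the tip

# Example: 100 mm insertion with a 500 mm radius of curvature -> ~10 mm.
print(f"{tip_deflection(100.0, 500.0):.1f} mm")
```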

    Toward Force Estimation in Robot-Assisted Surgery using Deep Learning with Vision and Robot State

    Knowledge of interaction forces during teleoperated robot-assisted surgery could be used to enable force feedback to human operators and to evaluate tissue handling skill. However, direct force sensing at the end-effector is challenging because it requires biocompatible, sterilizable, and cost-effective sensors. Vision-based deep learning using convolutional neural networks is a promising approach for providing useful force estimates, though questions remain about generalization to new scenarios and real-time inference. We present a force estimation neural network that uses RGB images and robot state as inputs. Using a self-collected dataset, we compared the network to variants that included only a single input type and evaluated how they generalized to new viewpoints, workspace positions, materials, and tools. We found that vision-based networks were sensitive to shifts in viewpoint, while state-only networks were robust to changes in workspace position. The network with both state and vision inputs had the highest accuracy for an unseen tool and was moderately robust to changes in viewpoint. Through feature removal studies, we found that using only position features as input produced better accuracy than using only force features. The network with both state and vision inputs outperformed a physics-based baseline model in accuracy. It showed accuracy comparable to, but computation times faster than, a baseline recurrent neural network, making it better suited for real-time applications.
    Comment: 7 pages, 6 figures, submitted to ICRA 202
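
    The network architecture is not detailed in the abstract. A minimal PyTorch sketch of a two-branch vision-plus-state force estimator, with illustrative layer sizes rather than the paper's, is:

```python
import torch
import torch.nn as nn

class VisionStateForceNet(nn.Module):
    """Sketch of a two-branch force estimator: a small CNN encodes the RGB
    image, an MLP encodes the robot state vector, and the concatenated
    features regress a 3-axis force. Sizes are illustrative assumptions."""

    def __init__(self, state_dim=14):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 32 features
        )
        self.state_mlp = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),             # -> 32 features
        )
        self.head = nn.Linear(32 + 32, 3)             # predict (fx, fy, fz)

    def forward(self, image, state):
        features = torch.cat([self.cnn(image), self.state_mlp(state)], dim=1)
        return self.head(features)

# Example forward pass with a batch of 4 images and state vectors.
net = VisionStateForceNet()
forces = net(torch.randn(4, 3, 224, 224), torch.randn(4, 14))
print(forces.shape)  # torch.Size([4, 3])
```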

    Finite Element Modeling of Pneumatic Bending Actuators for Inflated-Beam Robots

    Inflated-beam soft robots, such as tip-everting vine robots, can control their curvature by contracting one side of the beam via pneumatic actuation. This work develops a general finite element modeling approach to characterize their bending. The model is validated across four pneumatic actuator types (series, compression, embedded, and fabric pneumatic artificial muscles) and can be extended to other designs. These actuators employ two bending mechanisms: geometry-based contraction and material-based contraction. The model accounts for the intricate nonlinear effects of buckling and anisotropy. Experimental validation includes three working pressures (10, 20, and 30 kPa) for each actuator type. Geometry-based contraction yields significant deformation (92.1% accuracy) once the buckling pattern forms, with accuracy reducing slightly to 80.7% at lower pressures due to stress singularities during buckling. Material-based contraction achieves smaller bending angles but remains at least 96.7% accurate. The open-source models, available at http://www.vinerobots.org, support the design of inflated-beam robots such as tip-everting vine robots and contribute to waste reduction by letting designers optimize for effective bending and stress management based on material properties and stress distribution.
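
    Setting the finite element model aside, a hypothetical kinematic estimate illustrates how one-sided contraction bends an inflated beam. Assuming the actuator shortens one side by strain eps over an actuated length L of a beam of diameter d, and the beam bends with constant curvature:

```python
import math

# Hypothetical kinematic estimate, not the paper's FEM: one-sided
# contraction by strain eps over length L of a beam of diameter d bends
# the beam through roughly eps * L / d radians.

def bend_angle_deg(contraction_strain, actuated_length_mm, beam_diameter_mm):
    """Estimated bend angle (degrees) under constant-curvature bending."""
    return math.degrees(contraction_strain * actuated_length_mm / beam_diameter_mm)

# Example: 20% contraction over 300 mm of a 70 mm diameter beam -> ~49 deg.
print(f"{bend_angle_deg(0.20, 300, 70):.0f} deg")
```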

    Effects of Visual and Proprioceptive Motion Feedback on Human Control of Targeted Movement

    This research seeks to ascertain the relative value of visual and proprioceptive motion feedback during force-based control of a non-self entity such as a powered prosthesis. Accurately controlling such a device is very difficult when the operator cannot see or feel the movement that results from applied forces. As an analogy to prosthesis use, we tested the relative importance of visual and proprioceptive motion feedback during targeted force-based movement. Thirteen human subjects performed a virtual finger-pointing task in which the virtual finger's velocity was always programmed to be directly proportional to the metacarpophalangeal (MCP) joint torque applied by the subject's right index finger. During successive repetitions of the pointing task, the system conveyed the virtual finger's motion to the user through four combinations of graphical display (vision) and finger movement (proprioception). Success rate, speed, and qualitative ease of use were recorded; visual motion feedback was found to increase all three performance measures. Proprioceptive motion feedback significantly improved success rate and ease of use, but it yielded slower motions. The results indicate that proprioceptive motion feedback improves human control of targeted movement in both sighted and unsighted conditions, supporting the pursuit of artificial proprioception for prosthetics and underscoring the importance of motion feedback for other force-controlled human-machine systems, such as interactive virtual environments and teleoperators.
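
    The control law described in the abstract (virtual finger velocity directly proportional to applied joint torque) can be sketched directly; the gain and timestep values below are illustrative, not the study's:

```python
import numpy as np

# Sketch of the abstract's control law: velocity = K * torque, so the
# virtual finger's position is the time integral of the scaled torque.
K = 2.0    # velocity per unit torque (deg/s per N*m), assumed value
DT = 0.01  # control period (s), assumed value

def simulate(torques):
    """Integrate sampled MCP torques into a virtual finger trajectory (deg)."""
    angle = 0.0
    trajectory = []
    for tau in torques:
        angle += K * tau * DT   # velocity proportional to applied torque
        trajectory.append(angle)
    return np.array(trajectory)

# Example: a constant 0.5 N*m torque held for one second moves ~1 degree.
print(simulate(np.full(100, 0.5))[-1])
```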