
    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing the palpation action, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide novice physicians with a safe learning platform before they examine real patients. This paper reviews, for the first time, state-of-the-art medical simulators for this training, with a focus on providing multimodal feedback so that trainees can learn as many manual examination techniques as possible. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different types of feedback modalities. Opportunities arising from developments in pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators

    An Abdominal Phantom with Tunable Stiffness Nodules and Force Sensing Capability for Palpation Training

    Robotic phantoms enable advanced physical examination training before using human patients. In this paper, we present an abdominal phantom for palpation training with controllable-stiffness liver nodules that can also sense palpation forces. The coupled sensing and actuation approach is achieved by pneumatic control of positive-granular-jammed nodules for tunable stiffness. Soft sensing is done using the variation of the internal pressure of the nodules under external forces. This paper makes original contributions to extend the linear region of the neo-Hookean characteristic of the mechanical behavior of the nodules by 140% compared to no-jamming conditions and to propose a method that uses the organ-level controllable nodules as sensors to estimate palpation position and force with root-mean-square errors (RMSE) of 4% and 6.5%, respectively. Compared to conventional soft sensors, the method allows the phantom to sense without interfering with the simulated physiological conditions while providing quantified feedback to trainees, and enables training that follows current bare-hand examination protocols without the need to wear data gloves to collect data. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) MOTION grant EP/N03211X/2 and EP/N03208X/1, and EPSRC RoboPatient grant EP/T00603X/
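
    The coupled sensing approach above maps changes in nodule internal pressure to palpation position and force. The sketch below only illustrates that idea under stated assumptions (four hypothetical nodules, synthetic readings, a plain linear regression); it is not the authors' calibration procedure.

```python
# Minimal sketch (assumption): map internal-pressure deviations of pneumatically
# jammed nodules to palpation position and force with a plain linear regression.
# The phantom's actual calibration pipeline is not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic training data: each sample is the pressure deviation (kPa) of 4
# hypothetical nodules; targets are palpation position (x, y in mm) and force (N).
n_samples, n_nodules = 500, 4
pressure_dev = rng.normal(size=(n_samples, n_nodules))  # stand-in sensor readings
targets = rng.normal(size=(n_samples, 3))               # [x_mm, y_mm, force_N]

model = LinearRegression().fit(pressure_dev, targets)

# At run time, a new pressure reading is mapped to an estimate of where and how
# hard the trainee pressed.
new_reading = rng.normal(size=(1, n_nodules))
x_mm, y_mm, force_n = model.predict(new_reading)[0]
print(f"estimated position: ({x_mm:.1f}, {y_mm:.1f}) mm, force: {force_n:.2f} N")
```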

    Visio-Haptic Deformable Model for Haptic Dominant Palpation Simulator

    Vision and haptics are the two most important modalities in a medical simulation. While visual cues allow one to see one's actions when performing a medical procedure, haptic cues enable feeling the object being manipulated during the interaction. Despite their importance in a computer simulation, the combination of both modalities has not been adequately assessed, especially in a haptic-dominant environment. This results in poor emphasis on resource allocation management in terms of the effort spent rendering the two modalities for simulators with realistic real-time interactions. Addressing this problem requires an investigation of whether a single modality (haptic) or a combination of visual and haptic cues is better for learning skills in a haptic-dominant environment such as a palpation simulator. However, before such an investigation can take place, one main technical implementation issue in visio-haptic rendering needs to be addressed

    Haptics-Enabled Teleoperation for Robotics-Assisted Minimally Invasive Surgery

    The lack of force feedback (haptics) in robotic surgery can be considered a safety risk: excessive forces applied to tissue and vessels can lead to accidental tissue damage and punctured blood vessels, while insufficient applied force can cause inefficient control over the instruments. This project focuses on providing a satisfactory solution for introducing haptic feedback into robotics-assisted minimally invasive surgical (RAMIS) systems. The research addresses several key issues associated with the incorporation of haptics in a master-slave (teleoperated) robotic environment for minimally invasive surgery (MIS). In this project, we designed a haptics-enabled dual-arm (two masters - two slaves) robotic MIS testbed to investigate and validate various single-arm as well as dual-arm teleoperation scenarios. The most important feature of this setup is its capability to provide haptic feedback in all 7 degrees of freedom (DOF) required for RAMIS (3 translations, 3 rotations, and the pinch motion of the laparoscopic tool). The setup also enables evaluation of the effect of replacing haptic feedback with other sensory cues, such as visual representation of haptic information (sensory substitution), and of the hypothesis that surgical outcomes may be improved by substituting or augmenting haptic feedback with such sensory cues
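
    To make the force-reflection idea concrete, the sketch below shows a generic one-degree-of-freedom position-forward / force-feedback teleoperation loop; the gains, time step, and spring-contact slave model are assumptions for illustration, not the testbed's actual 7-DOF controller.

```python
# Minimal sketch (assumption): a generic position-forward / force-reflection
# teleoperation loop for one degree of freedom at a 1 kHz rate. All parameters
# and the simple spring-contact environment are illustrative placeholders.
import numpy as np

DT = 0.001          # 1 kHz control loop
KP, KD = 200.0, 2.0 # slave position-tracking gains
K_TISSUE = 500.0    # stiffness of a simulated tissue contact (N/m)
FORCE_SCALE = 0.5   # scaling of the reflected force sent back to the master

def slave_contact_force(slave_pos, contact_pos=0.02):
    """Spring-like contact force when the slave tool penetrates the tissue."""
    penetration = slave_pos - contact_pos
    return K_TISSUE * penetration if penetration > 0.0 else 0.0

slave_pos, slave_vel = 0.0, 0.0
f_feedback = 0.0
for step in range(2000):
    master_pos = 0.03 * np.sin(2 * np.pi * 0.5 * step * DT)  # operator motion (m)
    f_env = slave_contact_force(slave_pos)
    # Slave tracks the master; the environment force opposes penetration.
    accel = KP * (master_pos - slave_pos) - KD * slave_vel - f_env
    slave_vel += accel * DT
    slave_pos += slave_vel * DT
    f_feedback = FORCE_SCALE * f_env  # force rendered on the haptic master device

print(f"final reflected force: {f_feedback:.2f} N")
```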

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed a versatile and effective visuo-haptic surgical engine to handle a variety of surgical manipulations in real time. Soft tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room
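
    As a rough illustration of real-time physics-based deformation (not the study's continuum-mechanics models), the sketch below steps a one-dimensional mass-spring tissue strip with semi-implicit Euler integration at a haptics-compatible rate; all material parameters are placeholders.

```python
# Minimal sketch (assumption): explicit mass-spring time stepping as a stand-in
# for physics-based soft-tissue deformation; the study's actual continuum-
# mechanics models are not reproduced here.
import numpy as np

N = 10                       # nodes along a 1-D tissue strip
k, c, m, dt = 50.0, 0.5, 0.01, 1e-3
rest = 0.01                  # rest length between neighbouring nodes (m)

pos = np.arange(N) * rest    # node positions
vel = np.zeros(N)

for step in range(1000):
    force = np.zeros(N)
    # Spring forces between neighbouring nodes.
    for i in range(N - 1):
        stretch = (pos[i + 1] - pos[i]) - rest
        f = k * stretch
        force[i] += f
        force[i + 1] -= f
    force -= c * vel                           # viscous damping
    force[-1] += -0.2 if step < 500 else 0.0   # haptic tool pressing the last node
    # Semi-implicit Euler; node 0 is fixed as a boundary condition.
    vel[1:] += (force[1:] / m) * dt
    pos[1:] += vel[1:] * dt

print(pos - np.arange(N) * rest)  # displacement of each node (m)
```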

    Using Parametric CAD and FDM to Produce High Fidelity Anatomical Structures for Thoracentesis Training

    Currently available thoracentesis medical training simulators lack tactile realism and do not represent the physiological variations in patient characteristics, impeding optimal experiential learning. By systematically implementing advanced computer-aided design (CAD) techniques and additive manufacturing (AM) tools with a flexible design methodology, thoracic wall representations for a 2-year-old male, an 18-year-old female, and a 30-year-old male, with the complete skeletal structures necessary for palpation sequencing, were modelled. Models for the 2-year-old male and 18-year-old female were fabricated, complete with realistic tissues that accurately represent the various discrete tissue layers of the human thoracic cross section. Clavicular growth rates were used to develop factors with which to scale the skeletal models to represent a range of patient demographics. Parametrically modelled mould sets enable the modification of tissue thickness to account for the varying thoracic wall thicknesses observed in the thoracentesis demographic. Through the implementation of scaling factors based on skeletal growth rates from the literature to represent different patient groups, clavicle sizing accuracies of 0.4%-1.3% and intercostal space measurement accuracies of 0.7%-2.8% were achieved relative to target values from the literature. Improvements to the simulated tissue were observed, with gains of 28.54% in peak force, 20.17% in impulse, and 36.31% in pulse width when compared to the THM-30, a currently available and popular model
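
    The scaling idea can be illustrated as below; the clavicle lengths and model dimensions are hypothetical placeholders, not the growth-rate data used in the paper.

```python
# Minimal sketch (assumption): derive a uniform scale factor from clavicle length
# and apply it to skeletal model dimensions. The lengths below are illustrative
# placeholders, not the literature growth-rate data used in the paper.
reference_clavicle_mm = 150.0   # clavicle length of the base CAD model (hypothetical)
target_clavicle_mm = 95.0       # target length for a younger patient (hypothetical)

scale = target_clavicle_mm / reference_clavicle_mm

base_dimensions_mm = {"rib_length": 180.0, "intercostal_space": 18.0}
scaled = {name: value * scale for name, value in base_dimensions_mm.items()}

for name, value in scaled.items():
    print(f"{name}: {value:.1f} mm (scale factor {scale:.2f})")
```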

    An error estimator for real-time simulators based on model order reduction

    Model order reduction (MOR) is one of the most appealing choices for real-time simulation of non-linear solids. In this work a method is presented in which real-time performance is achieved by means of the off-line solution of a (high-dimensional) parametric problem that provides a sort of response surface or computational vademecum. This solution is then evaluated in real time at feedback rates compatible with, for instance, haptic devices (i.e., more than 1 kHz). This high-dimensional problem can be solved without the limitations imposed by the curse of dimensionality by employing Proper Generalized Decomposition (PGD) methods. Essentially, PGD assumes a separated representation for the essential field of the problem. Here, an error estimator is proposed for this type of solution that takes into account the non-linear character of the studied problems. This error estimator allows computing the number of modes needed to obtain an approximation to the solution within a prescribed error tolerance in a given quantity of interest
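
    For reference, the separated representation that PGD assumes can be written in a generic form (the paper's specific parametrization and estimator are not reproduced here):

```latex
% Generic PGD separated representation (illustrative): space x and the problem
% parameters mu_1,...,mu_d are all treated as coordinates of the solution.
\begin{equation}
  u(x,\mu_1,\dots,\mu_d) \;\approx\; u^{N}(x,\mu_1,\dots,\mu_d)
    = \sum_{i=1}^{N} F_i^{0}(x)\,\prod_{j=1}^{d} F_i^{j}(\mu_j)
\end{equation}
% The error estimator then selects the smallest number of modes N such that the
% error in a chosen quantity of interest Q stays below a prescribed tolerance:
\begin{equation}
  \bigl|\,Q(u) - Q(u^{N})\,\bigr| \;\le\; \varepsilon
\end{equation}
```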

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators reported to date. Facial expression rendering approaches in other domains are also summarized to bring the knowledge from those works into the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, present accurate facial expression rendering, and can simulate patients of different age, gender, and ethnicity groups, which makes them more versatile than purely virtual or physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing such facial expression rendering systems for medical training simulators. This work was supported by the Robopatient project funded by the EPSRC Grant No EP/T00519X/

    Emerging Challenges in Technology-based Support for Surgical Training

    This paper stipulates several technological research and development thrusts that can assist modern approaches to simulated training for minimally invasive laparoscopic and robotic surgery. Basic tenets of such training are explained, and specific areas of research are enumerated. Specifically, augmented and mixed reality are proposed as a means of improving perceptual and clinical decision-making skills, and haptics is proposed as a mechanism not only to provide force feedback and guidance but also to convey the tactile feel of surgery in simulated training scenarios. Learning optimization is discussed as a way to fine-tune the difficulty levels of various exercises. All of the above elements can serve as the foundation for building computer-based virtual coaching environments that reduce training costs and provide broader access to learning highly complex, technology-driven surgical techniques

    A surrogate model based on a finite element model of abdomen for real-time visualisation of tissue stress during physical examination training

    Robotic patients show great potential to improve medical palpation training as they can provide feedback that cannot be obtained from a real patient. Providing information about internal organ deformation can significantly enhance palpation training by giving medical trainees visual insight based on their finger behaviours. This can be achieved by using computational models of abdomen mechanics. However, such models are computationally expensive and thus unable to provide real-time predictions. In this work, we proposed an innovative surrogate model of abdomen mechanics that uses machine learning (ML) and finite element (FE) modelling to virtually render internal tissue deformation in real time. We first developed a new high-fidelity FE model of abdomen mechanics from computerized tomography (CT) images. We performed palpation simulations to produce a large database of stress distributions on the liver edge, an area of interest in most examinations. We then used artificial neural networks (ANN) to develop the surrogate model and demonstrated its application in an experimental palpation platform. Our FE simulations took 1.5 hrs to predict the stress distribution for each palpation, whereas the surrogate model took only a fraction of a second. Our results show that the ANN achieves 92.6% accuracy. We also show that the surrogate model is able to use the experimental input of palpation location and force to provide real-time projections onto the robotics platform. This enhanced robotics platform has the potential to be used as a training simulator for trainees to hone their palpation skills
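
    The surrogate idea can be sketched as below, assuming a small multilayer perceptron that maps palpation location and force to a sampled stress profile along the liver edge; the array sizes, network architecture, and synthetic training data are illustrative stand-ins for the paper's FE-generated database.

```python
# Minimal sketch (assumption): a small neural-network surrogate that maps
# palpation location and force to a stress profile along the liver edge, trained
# on data that would come from off-line FE simulations. Shapes and the synthetic
# data below are illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Inputs: palpation location (x, y in mm) and force (N); outputs: stress (kPa)
# sampled at 50 points along the liver edge.
X = rng.uniform([0.0, 0.0, 0.0], [100.0, 100.0, 10.0], size=(2000, 3))
y = rng.normal(size=(2000, 50))   # placeholder for FE-computed stress profiles

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

# Real-time query: a fraction of a second instead of hours of FE computation.
stress_profile = surrogate.predict([[45.0, 60.0, 4.5]])[0]
print(stress_profile.shape)  # (50,)
```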