
    Shape Localization and Recognition using a Magnetorheological-fluid Haptic Display

    Smart materials such as magnetorheological fluids (MRF) offer an interesting technology for use in haptic displays, as changes in the magnetic field are rapid, reversible, and controllable. These interfaces have been evaluated in a number of medical and surgical simulators, where they can provide cues regarding the viscoelastic properties of tissues. The objective of the present set of experiments was to determine, first, whether a shape embedded in the MRF could be precisely localized and, second, whether 10 shapes rendered in an MRF haptic display could be accurately identified. It was also of interest to determine how the information transfer associated with this type of haptic display compares to that achieved using other haptic channels of communication. The overall performance of participants at identifying the shapes rendered in the MRF was good, with a mean score of 73 percent correct and an Information Transfer (IT) of 2.2 bits. Participants could also accurately localize a rigid object in the display. These findings indicate that this technology has potential for use in training manual palpation skills and in exploring haptic shape perception in dynamic environments.
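Information transfer in identification experiments of this kind is commonly estimated from the stimulus-response confusion matrix; with 10 shapes the ceiling is log2(10) ≈ 3.32 bits, so 2.2 bits corresponds to reliably distinguishing roughly four to five shapes. A minimal sketch of the standard maximum-likelihood IT estimate (function name and example data are illustrative, not from the study):

```python
import math

def information_transfer(confusion):
    """Maximum-likelihood estimate of information transfer (bits)
    from a stimulus-response confusion matrix (rows: stimuli,
    columns: responses, entries: counts)."""
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(row) for row in confusion]          # per-stimulus counts
    col_tot = [sum(col) for col in zip(*confusion)]    # per-response counts
    it = 0.0
    for i, row in enumerate(confusion):
        for j, nij in enumerate(row):
            if nij > 0:
                it += (nij / n) * math.log2(nij * n / (row_tot[i] * col_tot[j]))
    return it

# Perfect identification of 4 stimuli yields log2(4) = 2 bits
perfect = [[5, 0, 0, 0], [0, 5, 0, 0], [0, 0, 5, 0], [0, 0, 0, 5]]
print(information_transfer(perfect))  # 2.0
```

Errors off the diagonal shrink the estimate toward 0, which is why 73 percent correct over 10 alternatives maps to 2.2 bits rather than the 3.32-bit ceiling.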

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing a palpation action, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper reviews, for the first time, state-of-the-art medical simulators for such training with a consideration of providing multimodal feedback to learn as many manual examination techniques as possible. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different types of feedback modalities. Opportunities in the development of pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    HAPTIC AND VISUAL SIMULATION OF BONE DISSECTION

    Marco Agus. In bone dissection virtual simulation, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated using a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact force measuring apparatus. Psychophysical testing was also carried out to assess individual reactions to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone burring tasks. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.
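The burr-bone interaction described above can be caricatured as a penalty-force contact against a voxel volume, with overlapped voxels eroded on each haptic step. This is a minimal sketch under simple assumptions (spherical burr tip, unit voxel spacing, linear penalty and erosion), not the dissertation's calibrated contact model:

```python
import numpy as np

def burr_step(density, tip, radius, k_force, erosion_rate, dt):
    """One haptic step of a sphere-shaped burr against a voxel bone volume.
    Erodes `density` in place and returns the reaction force (3-vector).
    Schematic only: real models calibrate force and erosion experimentally."""
    idx = np.argwhere(density > 0)          # occupied voxel coordinates
    if idx.size == 0:
        return np.zeros(3)
    d = idx - tip                           # vectors from burr tip to voxels
    dist = np.linalg.norm(d, axis=1)
    inside = dist < radius                  # voxels overlapping the burr
    if not inside.any():
        return np.zeros(3)
    pen = radius - dist[inside]             # penetration depth per voxel
    # penalty forces push the burr out of the bone
    dirs = -d[inside] / np.maximum(dist[inside], 1e-9)[:, None]
    force = k_force * (dirs * pen[:, None]).sum(axis=0)
    # rotating burr removes material from the overlapped voxels
    ii = tuple(idx[inside].T)
    density[ii] = np.maximum(density[ii] - erosion_rate * dt, 0.0)
    return force
```

In a full simulator this step would run inside the ~1 kHz haptic loop, with the eroded density field driving both the debris rendering and the visual mesh update.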

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With the continuous advancement and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, the use of surgical simulation has been increasingly embraced for training and skill-transfer purposes. Some systems utilize haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic primary components of a corresponding procedure. Surgical simulation development for some minimally invasive procedures is still, however, suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE), a congenital chest wall deformity. This work aims to address this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform of the Nuss procedure and their implications in a training context. This work then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, as well as an augmentation of the haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for its training platform. Furthermore, this work carries out a user study to investigate the system's face, content, and construct validity and establish its faithfulness as a training platform.

    Machine learning and interactive real-time simulation for training on relevant total hip replacement skills.

    Virtual Reality simulators have proven to be an excellent tool in the medical sector, helping trainees master surgical abilities by providing them with unlimited training opportunities. Total Hip Replacement (THR) is a procedure that can benefit significantly from VR/AR training, given its non-reversible nature. Of all the steps required while performing a THR, doctors agree that correct fitting of the acetabular component of the implant has the highest relevance to ensure successful outcomes. Acetabular reaming is the step during which the acetabulum is resurfaced and prepared to receive the acetabular implant. The success of this step is directly related to the success of fitting the acetabular component. Therefore, this thesis focuses on developing digital tools that can be used to assist the training of acetabular reaming. Devices such as navigation systems and robotic arms have proven to improve the final accuracy of the procedure. However, surgeons must learn to adapt their instrument movements to be recognised by infrared cameras. When surgeons are initially introduced to these systems, surgical times can be extended by up to 20 minutes, maximising surgical risks. Training opportunities are sparse, given the high investment required to purchase these devices. As a cheaper alternative, we developed an Augmented Reality (AR) simulator for training on the calibration of imageless navigation systems (INS). At the time, there were no alternative simulators using head-mounted displays to train users in the steps to calibrate such systems. Our simulator replicates the presence of an infrared camera and its interaction with the reflective markers located on the surgical tools. A group of 6 hip surgeons was invited to test the simulator. All of them expressed their satisfaction with the ease of use and attractiveness of the simulator, as well as the similarity of its interaction to the real procedure.
The study confirmed that our simulator represents a cheaper and faster option for training multiple surgeons simultaneously in the use of Imageless Navigation Systems (INS) than learning exclusively in the surgical theatre. Current reviews on simulators for orthopaedic surgical procedures lack objective metrics of assessment given a standard set of design requirements; instead, most of them rely exclusively on the level of interaction and functionality provided. We propose a comparative assessment rubric based on three different evaluation criteria: immersion, interaction fidelity, and applied learning theories. After our assessment, we found that none of the simulators available for THR provides an accurate interactive representation of resurfacing procedures such as acetabular reaming based on force inputs exerted by the user. This feature is indispensable for an orthopaedics simulator, given that hand-eye coordination skills are essential skills to be trained before performing non-reversible bone removal on real patients. Based on the findings of our comparative assessment, we decided to develop a model to simulate the physically-based deformation expected during traditional acetabular reaming, given the user's interaction with a volumetric mesh. Current interactive deformation methods on high-resolution meshes are based on geometrical collision detection and do not consider the contribution of the materials' physical properties. By ignoring the effect of the material mechanics and the force exerted by the user, they become inadequate for training hand-eye coordination skills transferable to the surgical theatre. Volumetric meshes are preferred in surgical simulation over geometric ones, given that they are able to represent the internal evolution of deformable solids resulting from cutting and shearing operations.
Existing numerical methods for representing linear and corotational FEM cuts can only maintain interactive framerates at a low mesh resolution. Therefore, we decided to train a machine-learning model to learn the continuum mechanics laws relevant to acetabular reaming and predict deformations at interactive framerates. To the best of our knowledge, no research has previously been done on training a machine learning model on non-elastic FEM data to achieve results at interactive framerates. As training data, we used the results from XFEM simulations precomputed over 5000 frames for plastic deformations on tetrahedral meshes with 20406 elements each. We selected XFEM simulation as the physically-based deformation ground truth given its accuracy and fast convergence in representing cuts, discontinuities, and large strain rates. Our machine learning-based interactive model was trained using Graph Neural Network (GNN) blocks. GNNs were selected to learn on tetrahedral meshes because other supervised-learning architectures, such as the multilayer perceptron (MLP) and convolutional neural networks (CNN), are unable to learn the relationships between entities with an arbitrary number of neighbours. The learned simulator identifies the elements to be removed on each frame and describes the accumulated stress evolution in the whole machined piece. Using data generated from the results of XFEM allowed us to embed the effects of non-linearities in our interactive simulations without extra processing time. The trained model executed the prediction task on our tetrahedral mesh with unseen reamer orientations faster per frame than the time required to generate the training FEM dataset. Given an unseen orientation of the reamer, the trained GNN model updates the value of accumulated stress on each of the 20406 tetrahedral elements that constitute our mesh during the prediction task.
Once this value is updated, the tetrahedra to be removed from the mesh are identified using a threshold condition. After repeatedly using each single-frame output as input for the following prediction for up to 60 iterations, our model can maintain an accuracy of up to 90.8% in identifying the status of each element given its value of accumulated stress. Finally, we demonstrate how the developed estimator can easily be connected to any game engine and included in the development of a fully functional hip arthroplasty simulator.
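The prediction loop described above (update per-element accumulated stress, threshold to mark tetrahedra for removal, feed each frame's output back in as the next input) can be sketched as follows; `predict_increment` stands in for the trained GNN and is purely illustrative:

```python
import numpy as np

def rollout(stress0, predict_increment, threshold, steps):
    """Autoregressive rollout in the spirit of the learned simulator:
    each frame updates per-element accumulated stress, then a threshold
    decides which tetrahedra are removed. `predict_increment(stress,
    removed)` is a hypothetical placeholder for the trained GNN."""
    stress = stress0.copy()
    removed = np.zeros(stress.shape[0], dtype=bool)
    for _ in range(steps):
        # feed the previous frame's output back in as the next input
        stress = stress + predict_increment(stress, removed)
        removed |= stress > threshold   # removed elements stay removed
    return stress, removed

# example: constant per-element stress increments standing in for the GNN
inc = np.array([0.5, 0.1, 0.0, 0.3])
stress, removed = rollout(np.zeros(4), lambda s, r: inc, threshold=1.0, steps=3)
# removed -> [True, False, False, False]
```

Because each prediction consumes the previous output, per-element errors compound over the rollout, which is why the 90.8% figure is quoted at 60 iterations rather than per frame.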

    Haptic and visual simulation of bone dissection

    Doctoral thesis: Università degli Studi di Cagliari, Faculty of Engineering, Department of Mechanical Engineering, XV Doctoral Cycle in Mechanical Design. In bone dissection virtual simulation, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated using a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact force measuring apparatus. Psychophysical testing was also carried out to assess individual reactions to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone burring tasks. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.

    Soft volume simulation using a deformable surface model

    The aim of this research is to contribute to the modelling of deformable objects, such as soft tissues in medical simulation. Interactive simulation for medical training is a concept undergoing rapid growth, as the underlying technologies support increasingly realistic and functional training environments. The prominent issues in the deployment of such environments centre on a fine balance between the accuracy of the deformable model and real-time interactivity. Acknowledging the importance of interacting with non-rigid materials, such as the palpation of a breast during breast assessment, this thesis has explored physics-based modelling techniques for both volume and surface approaches. This thesis identified that the surface approach based on the mass spring system (MSS) has the benefits of rapid prototyping, reduced mesh complexity, computational efficiency, and support for large material deformation compared to the continuum approach. However, accuracy relative to real material properties is often overlooked in the configuration of the resulting model. This thesis has investigated the potential and feasibility of surface modelling for simulating soft objects regardless of the design of the mesh topology and the non-existence of internal volume discretisation. The assumptions of material parameters such as elasticity, homogeneity, and incompressibility allow a reduced set of material values to be implemented in order to establish the association with the surface configuration. A framework for a deformable surface model was generated in accordance with the issues of estimating properties and volume behaviour corresponding to the material parameters. The novel extension to the surface MSS enables the tensile properties of the material to be integrated into an enhanced configuration despite its lack of volume information.
The benefits of the reduced complexity of a surface model are now correlated with improved accuracy in the estimation of properties and volume behaviour. Despite the irregularity of the underlying mesh topology and the absence of volume, the model reflected the original material values and preserved volume with minimal deviations. The global deformation effect, which is essential to emulate the run-time behaviour of a real soft material upon interaction, such as the palpation of a generic breast, was also demonstrated, indicating the potential of this novel technique for soft tissue simulation.
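A surface mass-spring system of the kind discussed here advances node positions by accumulating Hooke spring forces along mesh edges and integrating explicitly. A generic single-step sketch (explicit Euler, simple velocity damping, uniform scalar node mass), not the thesis's enhanced configuration:

```python
import numpy as np

def mss_step(pos, vel, springs, rest, k, c, mass, f_ext, dt):
    """One explicit-Euler step of a surface mass-spring system.
    pos, vel, f_ext: (n,3) arrays; springs: (m,2) node index pairs;
    rest: (m,) rest lengths; k: stiffness; c: damping factor; mass:
    scalar node mass. Returns updated (pos, vel)."""
    force = f_ext.copy()
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1)
    dirs = d / np.maximum(length, 1e-12)[:, None]
    fs = (k * (length - rest))[:, None] * dirs   # Hooke force on node i
    np.add.at(force, i, fs)                      # accumulate per node
    np.add.at(force, j, -fs)                     # equal and opposite
    vel = (vel + dt * force / mass) * (1.0 - c * dt)  # damped update
    return pos + dt * vel, vel
```

Explicit integration keeps each step cheap, which is the source of the interactivity advantage cited over continuum approaches, at the cost of a stability limit on `dt` for stiff springs.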

    Patient Specific Systems for Computer Assisted Robotic Surgery Simulation, Planning, and Navigation

    The evolving scenario of surgery, from modern surgery through the birth of medical imaging and the introduction of minimally invasive techniques, has in recent years seen the advent of surgical robotics. These systems, which make it possible to overcome the difficulties of endoscopic surgery, allow improved surgical performance and a better quality of intervention. Information technology has contributed to this evolution since the beginning of the digital revolution, providing innovative medical imaging devices and computer assisted surgical systems. Afterwards, progress in computer graphics brought innovative visualization modalities for medical datasets, and later the birth of virtual reality paved the way for virtual surgery. Although many surgical simulators already exist, there are no patient specific solutions. This thesis presents the development of patient specific software systems for preoperative planning, simulation, and intraoperative assistance, designed for robotic surgery: in particular for bimanual robots that are becoming the future of single port interventions. The first software application is a virtual reality simulator for this kind of surgical robot. The system has been designed to validate the initial port placement and the operative workspace for the potential application of this surgical device. Given a bimanual robot with its own geometry and kinematics, and a patient specific 3D virtual anatomy, the surgical simulator allows the surgeon to choose the optimal positioning of the robot and the access port in the abdominal wall. Additionally, it makes it possible to evaluate in a virtual environment whether dexterous movability of the robot is achievable, avoiding unwanted collisions with the surrounding anatomy to prevent potential damage in the real surgical procedure.
Although the software has been designed for a specific bimanual surgical robot, it supports any open kinematic chain structure, as long as it can be described in our custom format. The robot's capability to accomplish specific tasks can be virtually tested using the deformable models: interacting directly with the target virtual organs, while avoiding unwanted collisions with the surrounding anatomy not involved in the intervention. Moreover, the surgical simulator has been enhanced with algorithms and data structures to integrate biomechanical parameters into virtual deformable models (based on a mass-spring-damper network) of target solid organs, in order to properly reproduce the physical behaviour of the patient anatomy during the interactions. The main biomechanical parameters (Young's modulus and density) have been integrated, allowing the automatic tuning of some model network elements, such as the node mass and the spring stiffness. The spring damping coefficient has been modeled using the Rayleigh approach. Furthermore, the developed method automatically detects the external layer, allowing the use of both surface and internal Young's moduli, in order to model the main parts of dense organs: the stroma and the parenchyma. Finally, the model can be manually tuned to represent lesions with specific biomechanical properties. Additionally, some software modules of the simulator have been extended for integration into a patient specific computer guidance system for intraoperative navigation and assistance in robotic single port interventions. This application provides guidance functionalities working in three different modalities: passive, as a surgical navigator; assistive, as a guide for the single port placement; and active, as a tutor preventing unwanted collisions during the intervention.
The simulation system has been tested by five surgeons: simulating the robot access port placement, and evaluating the robot movability and workspace inside the patient abdomen. The tested functionalities, rated by expert surgeons, have shown good quality and performance of the simulation. Moreover, the integration of biomechanical parameters into deformable models has been tested with various material samples. The results have shown good visual realism while ensuring the performance required by an interactive simulation. Finally, the intraoperative navigator has been tested by performing a cholecystectomy on a synthetic patient mannequin, in order to evaluate the intraoperative navigation accuracy, the network communication latency, and the overall usability of the system. The tests performed demonstrated the effectiveness and usability of the software systems developed, encouraging the introduction of the proposed solution into clinical practice and the implementation of further improvements. Surgical robotics will be enhanced by an advanced integration of medical images into software systems, allowing the detailed planning of surgical interventions by means of virtual surgery simulation based on patient specific biomechanical parameters. Furthermore, the advanced functionalities offered by these systems enable surgical robots to improve intraoperative surgical assistance, benefitting from knowledge of the virtual patient anatomy.
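The parameter tuning described above (node mass from density, spring stiffness from Young's modulus, damping via the Rayleigh approach) can be sketched as follows; the truss analogy k = E*A/L for edge stiffness and the illustrative tissue numbers are assumptions here, not the thesis's exact formulas:

```python
def tune_msd(E, rho, volume, n_nodes, edge_area, edge_len, alpha, beta):
    """Map biomechanical parameters to mass-spring-damper coefficients:
    node mass lumped from density, spring stiffness from Young's modulus
    via an assumed truss analogy (k = E*A/L), and Rayleigh damping
    c = alpha*m + beta*k. Returns (node_mass, spring_k, spring_c)."""
    node_mass = rho * volume / n_nodes      # lump organ mass onto nodes
    spring_k = E * edge_area / edge_len     # axial stiffness of one edge
    spring_c = alpha * node_mass + beta * spring_k
    return node_mass, spring_k, spring_c

# liver-like soft tissue (illustrative numbers, SI units)
m, k, c = tune_msd(E=5e3, rho=1060.0, volume=1.5e-3, n_nodes=1000,
                   edge_area=1e-5, edge_len=5e-3, alpha=0.1, beta=0.01)
```

The Rayleigh form is convenient here because it damps both rigid-body drift (the mass term) and high-frequency spring oscillation (the stiffness term) with just two scalars.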

    An Affordable Portable Obstetric Ultrasound Simulator for Synchronous and Asynchronous Scan Training

    The increasing use of Point of Care (POC) ultrasound presents a challenge in providing efficient training to new POC ultrasound users. In response to this need, we have developed an affordable, compact, laptop-based obstetric ultrasound training simulator. It offers freehand ultrasound scanning on an abdomen-sized scan surface with a 5-degree-of-freedom sham transducer and utilizes 3D ultrasound image volumes as training material. The simulator user interface renders a virtual torso, whose body surface models the abdomen of a particular pregnant scan subject. A virtual transducer scans the virtual torso by following the sham transducer's movements on the scan surface. The obstetric ultrasound training is self-paced and guided by the simulator using a set of tasks focused on three broad areas, referred to as modules: 1) medical ultrasound basics, 2) orientation to obstetric space, and 3) fetal biometry. A learner completes the scan training in three steps: (i) watching demonstration videos, (ii) practicing scan skills by sequentially completing the tasks in Modules 2 and 3, with scan evaluation feedback and help functions available, and (iii) a final scan exercise on new image volumes for assessing the acquired competency. After each training task has been completed, the simulator evaluates whether the task has been carried out correctly by comparing anatomical landmarks identified and/or measured by the learner to reference landmark bounds created by algorithms or pre-inserted by experienced sonographers. Based on the simulator, an ultrasound E-training system has been developed for medical practitioners for whom ultrasound training is not accessible locally. The system, composed of a dedicated server and multiple networked simulators, provides synchronous and asynchronous training modes and is able to operate at a very low bit rate.
The synchronous (or group-learning) mode allows all training participants to observe the same 2D image in real time, such as a demonstration by an instructor or the scanning of a chosen learner. The synchronization of 2D images on the different simulators is achieved by directly transmitting the position and orientation of the sham transducer, rather than the ultrasound image, resulting in system performance independent of network bandwidth. The asynchronous (or self-learning) mode is described in the previous paragraph. In addition, the E-training system allows all training participants to stay networked and communicate with each other via a text channel. To verify the simulator performance and training efficacy, we conducted several performance experiments and clinical evaluations. The performance experiment results indicated that the simulator was able to generate more than 30 2D ultrasound images per second with acceptable image quality on medium-priced computers. In our initial experiment investigating the simulator's training capability and feasibility, three experienced sonographers individually scanned two image volumes on the simulator. They agreed that the simulated images and the scan experience were adequately realistic for ultrasound training, and that the training procedure followed standard obstetric ultrasound protocol. They further noted that the simulator had the potential to become a good supplemental training tool for medical students and resident doctors. A clinical study investigating the simulator's training efficacy was integrated into the clerkship program of the Department of Obstetrics and Gynecology, University of Massachusetts Memorial Medical Center. A total of 24 3rd-year medical students were recruited, and each of them was directed to scan six image volumes on the simulator in two 2.5-hour sessions. The study results showed that the successful scan times for the training tasks significantly decreased as the training progressed.
A post-training survey answered by the students found that they considered the simulator-based training useful and suitable for medical students and resident doctors. The experiment to validate the performance of the E-training system showed that the average transmission bit rate was approximately 3-4 kB/s; the data loss was less than 1%, and no loss of 2D images was visually detected. The results also showed that the 2D images on all networked simulators could be considered synchronous even though inter-continental communication was involved.
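The bandwidth independence reported above follows from transmitting only the sham-transducer pose and rendering the 2D slice locally on each simulator. A sketch of such a pose packet (the binary layout is illustrative, not the system's actual wire protocol):

```python
import struct

POSE_FMT = "<d3f4f"  # timestamp + xyz position + quaternion orientation

def pack_pose(t, pos, quat):
    """Serialize a sham-transducer pose update. Each networked simulator
    renders the 2D ultrasound slice locally, so only this small packet,
    not the image, crosses the network."""
    return struct.pack(POSE_FMT, t, *pos, *quat)

def unpack_pose(buf):
    """Recover (timestamp, position, quaternion) from a pose packet."""
    vals = struct.unpack(POSE_FMT, buf)
    return vals[0], vals[1:4], vals[4:8]

pkt = pack_pose(0.5, (0.10, -0.02, 0.05), (1.0, 0.0, 0.0, 0.0))
print(len(pkt))  # 36 bytes: one 8-byte double + seven 4-byte floats
```

At 30 updates per second this is about 1.1 kB/s per stream, which is consistent in scale with the 3-4 kB/s total the system reported once text chat and protocol overhead are included.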