
    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that medical teaching must be reformed to meet today's high-volume training requirements. Virtual simulation offers a potential means of providing such training, and some current medical training simulators integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in a general surgery

    Virtual Reality Simulator for Training in Myringotomy with Tube Placement

    Myringotomy refers to a surgical incision in the eardrum, and it is often followed by ventilation tube placement to treat middle-ear infections. The procedure is difficult to learn; hence, the objectives of this work were to develop a virtual-reality training simulator, assess its face and content validity, and implement quantitative performance metrics and assess construct validity. A commercial digital gaming engine (Unity3D) was used to implement the simulator with support for 3D visualization of digital ear models and support for major surgical tasks. A haptic arm co-located with the stereo scene was used to manipulate virtual surgical tools and to provide force feedback. A questionnaire was developed with 14 face validity questions focusing on realism and 6 content validity questions focusing on training potential. Twelve participants from the Department of Otolaryngology were recruited for the study. Responses to 12 of the 14 face validity questions were positive. One concern was with contact modeling related to tube insertion into the eardrum, and the second was with movement of the blade and forceps. The former could be resolved by using a higher resolution digital model for the eardrum to improve contact localization. The latter could be resolved by using a higher fidelity haptic device. With regard to content validity, 64% of the responses were positive, 21% were neutral, and 15% were negative. In the final phase of this work, automated performance metrics were programmed and a construct validity study was conducted with 11 participants: 4 senior Otolaryngology consultants and 7 junior Otolaryngology residents. Each participant performed 10 procedures on the simulator and metrics were automatically collected. Senior Otolaryngologists took significantly less time to completion compared to junior residents. Junior residents had 2.8 times more errors as compared to experienced surgeons. 
The senior surgeons also had significantly longer incision lengths, more accurate incision angles, and lower magnification, keeping both the umbo and annulus in view. All metrics were able to discriminate senior Otolaryngologists from junior residents with a significance of p < 0.002. The simulator has sufficient realism, training potential and performance-discrimination ability to warrant a more resource-intensive skills transference study
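The construct-validity claim above rests on a rank-based comparison of the two groups. As a minimal sketch (the completion times below are invented for illustration, not the study's measurements), a Mann-Whitney U statistic separating senior from resident performance can be computed in a few lines:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U: count of (a, b) pairs with a < b (0.5 credit for ties).
    U equal to len(group_a) * len(group_b) means perfect group separation."""
    return sum(
        0.5 if a == b else (1.0 if a < b else 0.0)
        for a in group_a
        for b in group_b
    )

# Hypothetical completion times in seconds (4 seniors, 7 junior residents).
senior_times = [210, 225, 240, 250]
resident_times = [310, 330, 345, 360, 380, 400, 415]

u = mann_whitney_u(senior_times, resident_times)  # 28.0: every senior faster
```

A metric discriminates well when U is close to its maximum (here 4 x 7 = 28); values near half the maximum indicate heavily overlapping groups.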

    A virtual training simulator for learning cataract surgery with phacoemulsification

    Author name used in this publication: Fu-Lai Chung

    Assessment of a novel patient-specific 3D printed multi-material simulator for endoscopic sinus surgery

    Background: Three-dimensional (3D) printing is an emerging tool in the creation of anatomical models for surgical training. Its use in endoscopic sinus surgery (ESS) has been limited because of the difficulty of replicating the anatomical details. Aim: To describe the development of a patient-specific 3D printed multi-material simulator for use in ESS, and to validate it as a training tool among a group of residents and experts in ear-nose-throat (ENT) surgery. Methods: Advanced material-jetting 3D printing technology was used to produce both the soft tissues and the bony structures of the simulator to increase the anatomical realism and tactile feedback of the model. A total of 3 ENT residents and 9 ENT specialists were recruited to perform both non-destructive tasks and ESS steps on the model. The anatomical fidelity and the usefulness of the simulator in ESS training were evaluated through specific questionnaires. Results: The tasks were accomplished by 100% of participants, and the survey showed overall high scores for both anatomical fidelity and usefulness in training. Dacryocystorhinostomy, medial antrostomy, and turbinectomy were rated as accurately replicable on the simulator by 75% of participants. Positive scores were also obtained for ethmoidectomy and DRAF procedures, while the replication of sphenoidotomy received neutral ratings from half of the participants. Conclusion: This study demonstrates that a 3D printed multi-material model of the sino-nasal anatomy can be generated with a high level of anatomical accuracy and haptic response. This technology has the potential to be useful in surgical training as an alternative or complementary tool to cadaveric dissection

    GPU Implementation of extended total Lagrangian explicit (gpuXTLED) for Surgical Incision Application

    An extended total Lagrangian explicit dynamics (XTLED) method is presented as a potential numerical method for simulating interactive, physics-based surgical incisions of soft tissues. The simulation of surgical incision is vital to the integrity of virtual-reality simulators used for immersive surgical training. However, most existing numerical methods sacrifice computational speed for accuracy, or vice versa. This is due to the challenge of modelling the nonlinear behaviour of soft tissues, incorporating an incision, and subsequently updating the topology to account for it. To tackle these challenges, the XTLED method, which combines the extended finite element method (XFEM) in a total Lagrangian formulation with explicit time integration, was developed. The algorithm was implemented, and deformations of 3D geometries under tension were simulated. An attempt was made to validate the XTLED method using silicone samples with different incision configurations, and a comparison was made between XTLED and FEM. Results show that XTLED could potentially be used to simulate interactive soft-tissue incision. However, further quantitative verification and validation are required. In addition, the numerical analyses conducted show that solutions may not be obtainable due to simulation errors; it is unclear whether these errors are inherent in the XTLED method or in the algorithm created for it in this thesis
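The explicit time integration at the heart of XTLED can be illustrated on a single degree of freedom. The sketch below is a generic damped central-difference scheme with an invented stiffening spring, not the thesis's actual formulation; it shows the key property that each step advances the displacement without solving any linear system:

```python
def integrate_explicit(f_int, f_ext, m, c, dt, steps, u0=0.0):
    """Central-difference explicit dynamics for one DOF:
    m * a = f_ext - f_int(u) - c * v, advanced with no linear solve."""
    u_prev = u = u0
    for _ in range(steps):
        v = (u - u_prev) / dt                     # backward-difference velocity
        a = (f_ext - f_int(u) - c * v) / m        # acceleration from force balance
        u_prev, u = u, 2.0 * u - u_prev + dt * dt * a
    return u

# Invented stiffening spring as a crude stand-in for nonlinear soft tissue.
f_int = lambda u: 100.0 * u + 5000.0 * u ** 3

# Explicit schemes are conditionally stable: dt must stay below the critical
# step 2 / omega_max (here roughly 0.02 s), hence the small time step.
u_eq = integrate_explicit(f_int, f_ext=5.0, m=0.01, c=1.0, dt=1e-4, steps=20000)
```

With damping, the solution settles to the static equilibrium where the internal force balances the applied load, which is a quick sanity check on the integrator.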

    Real-time simulation of surgery by Proper Generalized Decomposition techniques

    The real-time computer-based simulation of surgery has proven to be an appealing alternative to traditional surgical simulators. Amongst other advantages, computer-based simulators provide considerable savings in time and maintenance costs, and allow trainees to practise their surgical skills in a safe environment as often as necessary. However, in spite of current computer capabilities, computational surgery remains a challenging field of research. One of its major issues is the high speed at which complex problems in continuum mechanics must be solved so that haptic interfaces can render a realistic sense of touch (generally, feedback rates of 500-1000 Hz are required). This thesis introduces novel numerical methods for the interactive simulation of two common surgical procedures: cutting and tearing of soft tissues. The common framework of the presented methods is the use of the Proper Generalised Decomposition (PGD) for the generation of computational vademecums, i.e. general meta-solutions of parametric high-dimensional problems that can be evaluated at feedback rates compatible with haptic environments. In the case of cutting, computational vademecums are used jointly with XFEM-based techniques, and the computing workload is distributed into an off-line and an on-line stage. During the off-line stage, both a computational vademecum for any position of a load and the displacements produced by a set of cuts are pre-computed for the organ under consideration. During the on-line stage, the pre-computed results are combined to obtain, in real time, the response to the actions driven by the user. Concerning tearing, a computational vademecum is obtained from a parametric equation based on continuum damage mechanics. The complexity of the model is reduced by Proper Orthogonal Decomposition (POD) techniques, and the vademecum is incorporated into an explicit incremental formulation that can be viewed as a sort of time integrator. By way of example, the cutting method is applied to the simulation of a corneal refractive surgical procedure known as radial keratotomy, whereas the tearing method focuses on the simulation of laparoscopic cholecystectomy (i.e. the removal of the gallbladder). In both cases, the implemented methods offer excellent performance in terms of feedback rates and produce highly realistic simulations from both the visual and haptic points of view
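The on-line stage of a computational vademecum is cheap precisely because the parametric solution is stored in separated form, u(x, s) ≈ Σ F_i(x) G_i(s). A minimal sketch, with made-up analytic modes standing in for PGD-computed ones:

```python
import math

# Made-up separable modes: F_i over a spatial coordinate x, G_i over a
# load-position parameter s. Real PGD modes would be precomputed off-line
# by solving the parametric problem once for all load positions.
F = [lambda x: math.sin(math.pi * x), lambda x: math.sin(2.0 * math.pi * x)]
G = [lambda s: math.exp(-5.0 * (s - 0.5) ** 2), lambda s: s * (1.0 - s)]

def evaluate_vademecum(x, s):
    """On-line stage: a sum of mode products, O(number of modes) per query,
    cheap enough for the 500-1000 Hz rates haptic rendering demands."""
    return sum(Fi(x) * Gi(s) for Fi, Gi in zip(F, G))
```

The design choice is to pay the full cost of the high-dimensional solve off-line once, so that each haptic frame reduces to a handful of function evaluations and multiplications.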

    Development and Validation Methodology of the Nuss Procedure Surgical Planner

    Pectus excavatum (PE) is a congenital chest wall deformity characterized, in most cases, by a deep depression of the sternum. A minimally invasive technique for the repair of PE (MIRPE), often referred to as the Nuss procedure, has been proven to be more advantageous than many other PE treatment techniques. The Nuss procedure consists of the placement of a metal bar (or bars) underneath the sternum, thereby forcibly changing the geometry of the ribcage. Because of the prevalence of PE and the popularity of the Nuss procedure, the demand to perform this surgery is greater than ever. A Nuss procedure surgical planner would therefore be an invaluable planning tool for ensuring an optimal physiological and aesthetic outcome. In this dissertation, the development and validation of the Nuss procedure planner is investigated. First, a generic model of the ribcage is developed to overcome the issue of missing cartilage when PE ribcages are segmented and to give the model the flexibility to accommodate a range of deformities. Then, CT data collected from actual patients with PE is used to create a set of patient-specific finite element models. Based on finite element analyses performed on those models, a force-displacement data set is created. This data is used to train an artificial neural network to generalize the data set. To evaluate the planning process, a methodology is developed that compares the results of the Nuss procedure planner with an average shape of the chest. This method is based on a sample of normal chests obtained from the ODU male population using laser surface scanning, and it overcomes challenging issues such as hole-filling, scan registration and consistency. Additionally, the planning simulator is optimized so that it can be used for training purposes.
Haptic feedback and inertial tracking are implemented, and the force-displacement model is approximated using a neural-network approach and evaluated for real-time performance. The results show that it is possible to utilize this approximation of the force-displacement model for the Nuss procedure simulator. The detailed ribcage model achieves real-time performance
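Replacing the finite element solve with a trained network works because a forward pass is only a handful of multiply-adds. A minimal sketch of such a surrogate (the 1-3-1 architecture and the weights below are invented for illustration; the planner's actual network is not reproduced here):

```python
import math

# Invented weights for a tiny 1-3-1 network; in the planner these would come
# from training on the off-line FEM force-displacement data set.
W1, b1 = [0.8, 1.5, 0.3], [0.0, -0.2, 0.1]
W2, b2 = [1.2, 0.7, 2.0], 0.05

def predict_force(displacement):
    """One forward pass: a few multiply-adds and three tanh evaluations,
    microseconds per call versus seconds for a full FEM solve."""
    hidden = [math.tanh(w * displacement + b) for w, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

Because all weights here are positive and tanh is increasing, the surrogate is monotonic in the displacement, which is the qualitative behaviour one would expect from pushing a bar against the sternum.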

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at minimally invasive and atraumatic treatment are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatment offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitation of these approaches is that live tissue has different properties from dead tissue and that animal anatomy differs significantly from that of humans. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. This architecture can also be adapted for intra-operative purposes.
In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by giving him a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, which was devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed in our institute in previous work. Completely new software was developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality System for neurosurgery. All the components are open source or at least based on a GPL license