
    Expertise in medicine: using the expert performance approach to improve simulation training

    Context: We critically review how medical education can benefit from systematic use of the expert performance approach as a framework for measuring and enhancing clinical practice. We discuss how the expert performance approach can be used to better understand the mechanisms underpinning superior performance among health care providers, and how the framework can be applied to create simulated learning environments that offer increased opportunities to engage in deliberate practice.
    Expert performance approach: The expert performance approach is a systematic, evidence-based framework for measuring and analysing superior performance. It has been applied in a variety of domains but has so far been relatively neglected in medicine and health care. Here we outline the framework and demonstrate how it can be effectively applied to medical education.
    Deliberate practice: Deliberate practice is defined as a structured, reflective activity designed to develop a critical aspect of performance. It provides opportunities for error detection and correction, repetition, and access to feedback, and it requires maximal effort, complete concentration and full attention. We provide guidance on how to structure simulated learning environments to encourage the accumulation of deliberate practice.
    Conclusions: We highlight the role of simulation-based training, in conjunction with deliberate practice activities such as reflection, rehearsal, trial-and-error learning and feedback, in improving the quality of patient care. We argue that the development of expertise in health care is directly related to the systematic identification and improvement of quantifiable performance metrics. To optimise the training of expert health care providers, advances in simulation technology need to be coupled with effective instructional systems design, with the latter strongly guided by empirical research from the learning and cognitive sciences.

    Virtual Reality Based Simulation of Hysteroscopic Interventions

    Virtual reality based simulation is an appealing option for supplementing traditional clinical education. However, training simulators have yet to be formally integrated into the medical curriculum; in particular, the lack of a reasonable level of realism is thought to hinder the widespread use of this technology. We therefore address this situation with a reference surgical simulator of the highest possible fidelity for procedural training. This overview describes all of the elements combined in our training system, as well as first results of simulator validation. Our framework allows the rehearsal of several aspects of hysteroscopy, for instance correct fluid management, handling of excessive bleeding, appropriate removal of intrauterine tumors, and the use of the surgical instrument.

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at minimally invasive and non-traumatic treatments are required, since intra-operative false movements can be devastating and may result in patient death. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatment offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope, and the tactile appreciation of the different consistency of the tumour compared with normal brain requires considerable experience on the part of the neurosurgeon; this is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system for neurosurgery using VR, haptics and a real microscope. This architecture can also be adapted for intra-operative purposes.
In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by giving him a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed in our institute in a previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality System for neurosurgery. All components are open source or at least based on a GPL license.
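The AR overlay described above rests on registering the patient dataset to the tracked microscope pose. As a minimal sketch of that transform chain (the function and transform names here are hypothetical, not from the thesis, and the projection through the calibrated microscope optics is omitted):

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def patient_to_microscope(p_patient, T_world_patient, T_world_scope):
    """Map a point from patient-image space into microscope-camera space.

    T_world_patient registers the patient dataset to the tracker's world
    frame; T_world_scope is the tracked pose of the microscope. Both are
    assumed rigid 4x4 transforms.
    """
    p = np.append(np.asarray(p_patient, float), 1.0)   # homogeneous point
    p_world = T_world_patient @ p                      # patient -> world
    p_scope = np.linalg.inv(T_world_scope) @ p_world   # world -> microscope
    return p_scope[:3]
```

Rendering the virtual anatomy with this composed transform is what keeps the overlay aligned with the optics as the tracked microscope moves.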

    Patient Specific Systems for Computer Assisted Robotic Surgery Simulation, Planning, and Navigation

    The evolving scenario of surgery, from the birth of medical imaging through the introduction of minimally invasive techniques, has in recent years seen the advent of surgical robotics. These systems make it possible to overcome the difficulties of endoscopic surgery, allowing improved surgical performance and a better quality of intervention. Information technology has contributed to this evolution since the beginning of the digital revolution, providing innovative medical imaging devices and computer assisted surgical systems. Progress in computer graphics then brought innovative visualization modalities for medical datasets, and the birth of virtual reality later paved the way for virtual surgery. Although many surgical simulators already exist, there are no patient specific solutions. This thesis presents the development of patient specific software systems for preoperative planning, simulation and intraoperative assistance, designed for robotic surgery, in particular for the bimanual robots that are becoming the future of single port interventions. The first software application is a virtual reality simulator for this kind of surgical robot. The system has been designed to validate the initial port placement and the operative workspace for the potential application of this surgical device. Given a bimanual robot with its own geometry and kinematics, and a patient specific 3D virtual anatomy, the surgical simulator allows the surgeon to choose the optimal positioning of the robot and of the access port in the abdominal wall. Additionally, it makes it possible to evaluate in a virtual environment whether dexterous movability of the robot is achievable, avoiding unwanted collisions with the surrounding anatomy that could cause damage in the real surgical procedure.
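The port-placement validation above can be illustrated with a deliberately simplified sketch. The assumptions here are mine, not the thesis's: the reachable workspace is approximated by a sphere around the port and collisions by a minimum clearance to sampled anatomy points, whereas the real simulator uses the robot's full kinematics and mesh-based collision detection.

```python
import numpy as np

def evaluate_port_placement(port, targets, reach, anatomy_points, clearance):
    """Accept a candidate access-port position if every surgical target is
    reachable and the port keeps a safety margin from anatomy that is not
    involved in the intervention (simplified spherical-workspace model)."""
    port = np.asarray(port, float)
    # All targets must lie inside the approximated workspace.
    reachable = all(np.linalg.norm(np.asarray(t, float) - port) <= reach
                    for t in targets)
    # Minimum distance to surrounding anatomy must respect the clearance.
    dists = np.linalg.norm(np.asarray(anatomy_points, float) - port, axis=1)
    safe = bool(dists.min() >= clearance)
    return reachable and safe
```

A surgeon-facing simulator would iterate such a test over candidate ports on the abdominal wall and highlight the feasible ones.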
    Although the software has been designed for a specific bimanual surgical robot, it supports any open kinematic chain structure, as long as it can be described in our custom format. The robot's capability to accomplish specific tasks can be tested virtually using deformable models, interacting directly with the target virtual organs while avoiding unwanted collisions with surrounding anatomy not involved in the intervention. Moreover, the surgical simulator has been enhanced with algorithms and data structures that integrate biomechanical parameters into virtual deformable models (based on a mass-spring-damper network) of the target solid organs, in order to properly reproduce the physical behaviour of the patient anatomy during interactions. The main biomechanical parameters (Young's modulus and density) have been integrated, allowing the automatic tuning of model network elements such as node mass and spring stiffness. The spring damping coefficient has been modeled using the Rayleigh approach. Furthermore, the developed method automatically detects the external layer, allowing the use of both surface and internal Young's moduli in order to model the main parts of dense organs: the stroma and the parenchyma. Finally, the model can be manually tuned to represent lesions with specific biomechanical properties. Some software modules of the simulator have also been extended for integration into a patient specific computer guidance system for intraoperative navigation and assistance in robotic single port interventions. This application provides guidance functionalities in three different modalities: passive, as a surgical navigator; assistive, as a guide for single port placement; and active, as a tutor preventing unwanted collisions during the intervention.
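The parameter mapping described above can be sketched as follows. The concrete formulas are my assumptions for illustration (axial-rod stiffness k = E·A/L, per-node mass from density and a volume share, Rayleigh damping c = α·m + β·k); the thesis's exact tuning rules may differ.

```python
import numpy as np

def tune_mass_spring_damper(density, young_surface, young_internal,
                            node_volumes, edges, edge_lengths, edge_areas,
                            surface_nodes, alpha=0.1, beta=0.02):
    """Map biomechanical parameters onto a mass-spring-damper network.

    density        tissue density (kg/m^3), sets node masses
    young_*        Young's moduli for the external layer (stroma) and
                   the interior (parenchyma)
    surface_nodes  set of node indices detected on the external layer
    alpha, beta    Rayleigh damping coefficients (assumed values)
    """
    # Node mass: density times the volume share assigned to each node.
    masses = density * np.asarray(node_volumes, float)

    stiffness, damping = [], []
    for (i, j), L, A in zip(edges, edge_lengths, edge_areas):
        # Use the surface modulus only when both endpoints lie on the
        # automatically detected external layer.
        E = (young_surface if (i in surface_nodes and j in surface_nodes)
             else young_internal)
        k = E * A / L                        # axial-rod stiffness analogue
        m = 0.5 * (masses[i] + masses[j])    # effective mass on the edge
        stiffness.append(k)
        damping.append(alpha * m + beta * k)  # Rayleigh damping c = a*m + b*k
    return masses, np.array(stiffness), np.array(damping)
```

Tuning per-edge moduli this way is also what allows a lesion with distinct biomechanical properties to be represented by overriding E on the affected edges.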
    The simulation system has been tested by five surgeons, simulating the robot access port placement and evaluating the robot's movability and workspace inside the patient's abdomen. The tested functionalities, rated by expert surgeons, have shown good quality and performance of the simulation. Moreover, the integration of biomechanical parameters into deformable models has been tested with various material samples; the results have shown good visual realism while ensuring the performance required by an interactive simulation. Finally, the intraoperative navigator has been tested by performing a cholecystectomy on a synthetic patient mannequin, in order to evaluate the intraoperative navigation accuracy, the network communication latency and the overall usability of the system. The tests performed demonstrated the effectiveness and usability of the software systems developed, encouraging the introduction of the proposed solution into clinical practice and the implementation of further improvements. Surgical robotics will be enhanced by an advanced integration of medical images into software systems, allowing the detailed planning of surgical interventions by means of virtual surgery simulation based on patient specific biomechanical parameters. Furthermore, the advanced functionalities offered by these systems enable surgical robots to improve intraoperative surgical assistance, benefiting from the knowledge of the virtual patient anatomy.

    A Virtual University Infrastructure For Orthopaedic Surgical Training With Integrated Simulation

    This thesis pivots around the fulcrum of surgical, educational and technological factors. Whilst no single conclusion is drawn, it is a multidisciplinary thesis exploring the juxtaposition of different academic domains that have a significant influence upon each other. The relationship centres on the engineering and computer science factors in learning technologies for surgery. Following a brief introduction to previous efforts in developing surgical simulation, this thesis considers education and learning in orthopaedics, and the design and building of a simulator for shoulder surgery. It then considers the assessment of such tools and their embedding into a virtual learning environment, and explains how the experiments performed clarified issues and their actual significance. This leads to a discussion of the work, and conclusions are drawn regarding the progress of integrating distributed simulation within the healthcare environment, suggesting how future work can proceed.

    A LAYERED FRAMEWORK FOR SURGICAL SIMULATION DEVELOPMENT

    The field of surgical simulation is still in its infancy, and a number of projects are attempting to take the next step towards becoming the de facto standard for surgical simulation development, an ambition shared by the framework described here. Dubbed AutoMan, this framework has four main goals: a) to provide a common interface to simulation subsystems; b) to allow the replacement of these underlying technologies; c) to encourage collaboration between independent research projects; and d) to expand on the targeted user base of similar frameworks. AutoMan's layered structure provides an abstraction from implementation details, supplying the common user interface. Being highly modular and built on SOFA, the framework is highly extensible, allowing algorithms and modules to be replaced or modified easily. This extensibility encourages collaboration, as newly developed modules can be incorporated, allowing the framework itself to grow and evolve with the industry. In addition, making the programming interface easy to use caters to casual developers who are likely to add functionality to the system.
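The "common interface to replaceable subsystems" goal can be sketched as an abstract layer over a physics backend. The class and method names below are hypothetical illustrations, not AutoMan's actual API; the point is only that modules built on the layer never touch the concrete engine, so a SOFA-based solver could be swapped for another implementation.

```python
from abc import ABC, abstractmethod

class PhysicsBackend(ABC):
    """Common interface to a simulation subsystem (hypothetical API)."""
    @abstractmethod
    def step(self, dt: float) -> None: ...
    @abstractmethod
    def node_positions(self) -> list: ...

class MassSpringBackend(PhysicsBackend):
    """Toy replaceable backend: a single node falling under gravity."""
    def __init__(self):
        self.y, self.vy = 1.0, 0.0
    def step(self, dt):
        self.vy -= 9.81 * dt          # explicit Euler integration
        self.y += self.vy * dt
    def node_positions(self):
        return [self.y]

def run(backend: PhysicsBackend, steps: int, dt: float = 0.01):
    # Higher layers call only the common interface, never the engine.
    for _ in range(steps):
        backend.step(dt)
    return backend.node_positions()
```

Swapping in a different `PhysicsBackend` subclass leaves `run` and everything above it unchanged, which is the collaboration and extensibility argument in a nutshell.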

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally-invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and to design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was therefore motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions.
The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationships between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly that of novices, in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments in surgeons' non-technical skills in performing the vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disrupts performance, emphasizing the need for realistic simulation environments as part of the training curriculum.