
    Evaluating Human Performance for Image-Guided Surgical Tasks

    The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile augmented reality device for visualization and guidance. In both tasks, a human performance methodology was applied that respects the trade-off between speed and accuracy as users conduct a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods for these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts’ Law methodology can be applied to evaluate tasks performed in each surgical scenario and was sensitive enough to differentiate performance across a range spanning novice and experienced users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide insight into the underlying characteristics of the devices and how they can be optimized with respect to human performance.
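
    The abstract does not give the exact formulation used; as a brief illustration of the quantities a Fitts’ Law analysis typically works with, a minimal sketch (with made-up distances and times) of the Shannon index of difficulty and throughput is:

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1)

def throughput(distance_mm: float, width_mm: float, movement_time_s: float) -> float:
    """Throughput in bits/s: index of difficulty divided by movement time."""
    return index_of_difficulty(distance_mm, width_mm) / movement_time_s

# Illustrative trial: a 60 mm reach to a 10 mm target completed in 1.2 s
print(index_of_difficulty(60, 10))   # ~2.81 bits
print(throughput(60, 10, 1.2))       # ~2.34 bits/s
```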

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing the anatomical variations that present clinically, failing to challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation – an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcomes. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.

    Design and Evaluation of Neurosurgical Training Simulator

    Surgical simulators are becoming more important in surgical training. Consumer smartphone technology has improved enough to allow deployment of VR applications and is now being targeted for medical training simulators. A surgical simulator was designed using a smartphone, Google Cardboard 3D glasses, and the Leap Motion (LM) hand controller. Two expert and 16 novice users were tasked with completing the same pointing tasks using both the LM and the medical simulator NeuroTouch. The novice users had an accuracy of 0.2717 bits (SD 0.3899) and the experts had an accuracy of 0.0925 bits (SD 0.1210) while using the NeuroTouch. Novices and experts improved their accuracy to 0.3585 bits (SD 0.4474) and 0.4581 bits (SD 0.3501), respectively, while using the LM. There were some tracking problems with the AR display and the LM. Users were intrigued by the AR display, and most preferred the LM, as they found it to have better usability.

    Design and evaluation of an augmented reality simulator using leap motion

    Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator and compare it with the existing NeuroTouch in terms of usability and training effectiveness. The simulator was developed using Unity, Vuforia and the Leap Motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The resulting speed and accuracy data from the novices and experts are compared, and the authors discuss the objective performance of training in terms of targeting speed and accuracy for each system.
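
    The abstract does not describe the trajectory log format; as a hedged sketch, assuming each trial yields a final tool pose and an intended target pose, positional and angular targeting error could be computed as below (all field names and values are hypothetical):

```python
import numpy as np

def targeting_errors(tool_pos, tool_dir, target_pos, target_dir):
    """Positional error (mm) and angular error (degrees) between the
    recorded tool pose and the intended target pose."""
    pos_err = np.linalg.norm(np.asarray(tool_pos) - np.asarray(target_pos))

    # Normalise the direction vectors before measuring the angle between them.
    u = np.asarray(tool_dir, dtype=float)
    v = np.asarray(target_dir, dtype=float)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    ang_err = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return pos_err, ang_err

# Hypothetical trial: logged tool tip and orientation vs. the assigned target
pos_err, ang_err = targeting_errors(
    tool_pos=[10.2, -3.1, 55.0], tool_dir=[0.0, 0.1, 1.0],
    target_pos=[9.0, -2.5, 54.0], target_dir=[0.0, 0.0, 1.0])
print(f"position error: {pos_err:.2f} mm, angle error: {ang_err:.2f} deg")
```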

    A Mobile Augmented Reality Application for Image Guidance of Neurosurgical Interventions

    Image guidance for complex surgical procedures is gaining popularity within operating rooms. Providing the appropriate contextual information to aid in navigation can reduce cognitive load on surgeons, thus reducing surgical error. To date, clinical implementations of image guidance have required extensive equipment, setup and technical expertise to operate, precluding their use when treating acute conditions in the intensive care unit. We present an application targeted at mobile platforms that utilizes augmented reality and image-based tracking in order to add preoperative contextual information, specifically spatial information, to neurosurgical procedures. A pilot evaluation was performed to examine the accuracy of the system. Initial results show increased accuracy for a targeting task with the aid of the visualization.
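
    The paper does not detail its tracking implementation; as a minimal sketch of how image-based tracking of a planar marker is commonly done (an assumption for illustration, not a description of the authors' method), OpenCV's solvePnP can recover the camera pose used to anchor a preoperative overlay:

```python
import numpy as np
import cv2

# Assumed calibrated pinhole camera: focal lengths and principal point in pixels.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# Corners of a 60 mm planar marker, expressed in the marker's own coordinates.
marker_3d = np.array([[-30, -30, 0], [30, -30, 0],
                      [30, 30, 0], [-30, 30, 0]], dtype=np.float64)

# Hypothetical corner locations detected in the camera image (pixels).
marker_2d = np.array([[300, 220], [340, 222],
                      [338, 262], [298, 260]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # marker orientation in the camera frame
    print("marker position (mm):", tvec.ravel())
```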

    Mobile and Low-cost Hardware Integration in Neurosurgical Image-Guidance

    It is estimated that 13.8 million patients per year require neurosurgical interventions worldwide, be it for cerebrovascular disease, stroke, tumour resection, or epilepsy treatment, among others. These procedures involve navigating through and around complex anatomy in an organ where damage to eloquent healthy tissue must be minimized. Neurosurgery thus has very specific constraints compared to most other domains of surgical care. These constraints have made neurosurgery particularly suitable for integrating new technologies. Any new method that has the potential to improve surgical outcomes is worth pursuing, as it can not only save and prolong patients' lives, but also increase quality of life post-treatment. In this thesis, novel neurosurgical image-guidance methods are developed using currently available, low-cost, off-the-shelf components. In particular, a mobile device (e.g. a smartphone or tablet) is integrated into a neuronavigation framework to explore new augmented reality visualization paradigms and novel, intuitive interaction methods. The developed tools aim to make image guidance more intuitive and easier to use through augmented reality. Further, we use gestures on the mobile device to increase interactivity with the neuronavigation system, in order to provide solutions to the accuracy loss, or brain shift, that occurs during surgery. Lastly, we explore the effectiveness and accuracy of low-cost hardware components (i.e. tracking systems and ultrasound) that could replace the high-cost hardware currently integrated into commercial image-guided neurosurgery systems. The results of our work show the feasibility of using mobile devices to improve neurosurgical processes. Augmented reality enables surgeons to focus on the surgical field while receiving intuitive guidance information. Mobile devices also allow for easy interaction with the neuronavigation system, enabling surgeons to interact directly with systems in the operating room to improve accuracy and streamline procedures. Lastly, our results show that low-cost components can be integrated into a neurosurgical guidance system at a fraction of the cost, with a negligible impact on accuracy. The developed methods have the potential to improve surgical workflows, as well as to democratize access to higher-quality care worldwide.

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only with learning and training in medicine, but also with diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, from the unbiased and impersonal clustering results of the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during these two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus is heading. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as to provide medical practitioners with the latest technology advancements.
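
    The review's exact text-mining pipeline is not reproduced in the abstract; a minimal sketch of LDA topic extraction over paper abstracts, using scikit-learn with a placeholder corpus and topic count, might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus standing in for the 1403 reviewed abstracts.
abstracts = [
    "augmented reality surgical navigation and guidance",
    "virtual reality training simulator for laparoscopic surgery",
    "head mounted display for image guided intervention",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

# The paper identified 10 topics; a tiny corpus only supports a couple.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```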

    Application of mixed reality to ultrasound-guided femoral arterial cannulation during real-time practice in cardiac interventions

    Mixed reality opens interesting possibilities, as it allows physicians to interact with both the real physical environment and virtual computer-generated objects in a powerful way. A mixed reality system, based on the HoloLens 2 glasses, has been developed to assist cardiologists in a complex interventional procedure: ultrasound-guided femoral arterial cannulation during real-time practice in interventional cardiology. The system is divided into two modules: the transmitter module, responsible for sending medical images to the HoloLens 2 glasses, and the receiver module, hosted on the HoloLens 2, which renders those medical images and allows the practitioner to view and manipulate them in a 3D environment. The system was successfully used, between November 2021 and August 2022, in up to 9 interventions by 2 different practitioners in a large public hospital in central Spain. The practitioners using the system described it as easy to use, reliable, real-time, reachable, and cost-effective, allowing a reduction in operating times and better control of the typical errors associated with the interventional procedure, and opening the possibility of using the medical imagery produced for ubiquitous e-learning. These strengths and opportunities were nuanced only by the risk of potential medical complications arising from system malfunction or operator errors when using the system (e.g., an unexpected momentary lag). In summary, the proposed system can be taken as a realistic proof of concept of how mixed reality technologies can support practitioners performing interventional and surgical procedures in real-time daily practice.
    Funding: Junta de Castilla y León - Gerencia Regional de Salud (SACyL) (grant GRS 2275/A/2020); Instituto de Salud Carlos III (grant DTS21/00158). Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE) under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 of Castilla y León, action 20007-CL (Apoyo Consorcio BUCLE).
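
    The transmitter/receiver implementation is not published in the abstract; as a rough, hedged illustration of the general pattern (streaming length-prefixed image frames from a workstation to a rendering client), the following sketch shows one possible framing; the protocol and field layout are assumptions, not the authors' design:

```python
import socket
import struct

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    """Send one length-prefixed image frame to the rendering client."""
    sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed image frame."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```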

    Augmented Reality in Ventriculostomy

    Freehand ventriculostomy is one of the most common neurosurgical procedures, performed when cerebrospinal fluid builds up in the ventricular system. It is most often performed in the emergency room or intensive care unit, and thus without a navigation system to help surgeons locate the ventricles. Surgeons instead use anatomical landmarks on the face and skull to determine the best location for the burr hole and the trajectory for passing the catheter through the brain to the ventricles, in order to drain excess cerebrospinal fluid (CSF) and decrease intracranial pressure (ICP). Freehand ventriculostomy has an associated catheter misplacement rate of over 30%, which can lead to a number of complications, including morbidity and mortality. In this dissertation, we propose an augmented-reality pipeline for ventriculostomy using an optical see-through head-mounted device, the Microsoft HoloLens. Our system projects a constructed 3D model of the patient's skull and ventricles directly onto the patient's head to guide the surgeon in locating a target on the ventricle. As part of this pipeline, we implemented an API to send real-time tracking information from the optical tracker to the HoloLens, provided a manual gesture-based registration method, and developed a color-based depth visualization to help users understand the spatial relationship between the patient's ventricular anatomy and the surgical tool. In a study with 15 subjects, we found that the proposed gesture-based registration has an accuracy of 10.75 millimeters and a target hitting accuracy of 12.28 millimeters. In terms of usability, the developed system received a score of 74.5 on the System Usability Scale (SUS), indicating that the system is easy to use. Our preliminary results suggest that augmented-reality systems can be helpful for neuronavigation procedures that require target localization.
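
    The dissertation's registration is manual and gesture-based; for context only, and not as the authors' method, the kind of rigid alignment such a registration aims to achieve can be computed from paired landmarks with an SVD-based least-squares fit, sketched here with made-up coordinates:

```python
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Made-up skull landmarks in model space vs. tracked patient space (mm).
model = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0], [40, 30, 50]], dtype=float)
rotate_90_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
patient = model @ rotate_90_z.T + np.array([10.0, 20.0, 5.0])

R, t = rigid_registration(model, patient)
fre = np.linalg.norm(model @ R.T + t - patient, axis=1).mean()
print("mean fiducial registration error (mm):", round(fre, 3))
```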

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally-invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments on surgeons' non-technical skills in performing a vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.
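
    The thesis abstract does not specify the contour-enhancement algorithm; as a minimal, hedged sketch of the general idea (boosting edge contrast so vascular boundaries stand out), an unsharp-mask style filter with illustrative parameters could look like this:

```python
import numpy as np
import cv2

def enhance_contours(image: np.ndarray, sigma: float = 2.0, amount: float = 1.5) -> np.ndarray:
    """Unsharp-mask style enhancement: amplify high-frequency (edge) content."""
    img = image.astype(np.float32)
    blurred = cv2.GaussianBlur(img, ksize=(0, 0), sigmaX=sigma)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Hypothetical use on a grayscale angiogram slice (random data as a stand-in).
angiogram = (np.random.rand(256, 256) * 255).astype(np.uint8)
enhanced = enhance_contours(angiogram)
```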