11 research outputs found

    Intra-operative applications of augmented reality in glioma surgery: a systematic review

    Background: Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status, and future potential.
    Methods: A systematic review of the literature was conducted. This adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. The PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Leveraging the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework, study eligibility was evaluated in the qualitative synthesis. Data regarding AR workflow, surgical application, and associated outcomes were then extracted. The quality of evidence was additionally examined, using hierarchical classes of evidence in neurosurgery.
    Results: The search returned 77 articles. Forty were subject to title and abstract screening, while 25 proceeded to full-text screening. Of these, 22 articles met eligibility criteria and were included in the final review. During abstraction, studies were classified as “development” or “intervention” based on primary aims. Overall, AR was qualitatively advantageous, due to enhanced visualization of gliomas and critical structures, frequently aiding in maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Irrespective, there was high variance in registration methods and measurements, which considerably impacted projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results.
    Conclusions: AR has increasing potential for glioma surgery, with capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider the importance of consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.

    A Review and Selective Analysis of 3D Display Technologies for Anatomical Education

    The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays. This study analyzes educational performance, confidence, cognitive load, visual-spatial ability, and technology acceptance in participants using autostereoscopic 3D visualization (holograms), monoscopic 3D visualization (3DPDFs), and a control visualization (2D printed images). Participants were randomized into three treatment groups: holograms (n=60), 3DPDFs (n=60), and printed images (n=59). Participants completed a pre-test followed by a self-study period using the treatment visualization. Immediately following the study period, participants completed the NASA TLX cognitive load instrument, a technology acceptance instrument, visual-spatial ability instruments, a confidence instrument, and a post-test. Post-test results showed the hologram treatment group (Mdn=80.0) performed significantly better than both 3DPDF (Mdn=66.7, p=.008) and printed images (Mdn=66.7, p=.007). Participants in the hologram and 3DPDF treatment groups reported lower cognitive load compared to the printed image treatment (p < .01). Participants also responded more positively towards the holograms than printed images (p < .001). Overall, the holograms demonstrated significant learning improvement over printed images and monoscopic 3DPDF models. This finding suggests that additional depth cues from holographic visualization, notably head-motion parallax and stereopsis, provide substantial benefit towards understanding spatial anatomy. The reduction in cognitive load suggests that monoscopic and autostereoscopic 3D may utilize the visual system more efficiently than printed images, thereby reducing mental effort during the learning process. Finally, participants reported positive perceptions of holograms, suggesting implementation of holographic displays would be met with enthusiasm from student populations. These findings highlight the need for additional studies regarding the effect of novel 3D technologies on learning performance.
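    The abstract reports median post-test scores compared across groups with p-values. One plausible way such pairwise comparisons are obtained is a non-parametric test such as the Mann-Whitney U; the sketch below is purely illustrative, uses made-up scores, and is not the authors' analysis.

```python
# Illustrative only: a non-parametric pairwise comparison of post-test scores,
# one plausible route to the median-based p-values reported above.
# The score arrays are hypothetical; they are not the study's data.
from scipy.stats import mannwhitneyu

hologram_scores = [80.0, 86.7, 73.3, 80.0, 93.3, 66.7, 80.0, 86.7]  # hypothetical
printed_scores = [66.7, 60.0, 73.3, 66.7, 53.3, 66.7, 73.3, 60.0]   # hypothetical

stat, p_value = mannwhitneyu(hologram_scores, printed_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```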

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally-invasive surgery, where restricted access to the operative field in conjunction with a limited field of view necessitates a visualization medium to provide patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments in surgeons' non-technical skills for performing the vertebroplasty procedure is investigated. Our results suggest that while training surgeons may increase their technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Supporting multiple output devices on an ad-hoc basis in visualisation

    In recent years, new visualisation techniques and devices, such as remote visualisation and stereoscopic displays, have been developed to help researchers. In a remote visualisation environment the user may want to see the visualisation on a different device, such as a PDA or a stereo device, and in different circumstances. Each device needs to be configured correctly, otherwise it may lead to an incorrect rendering of the output. For end users, however, it can be difficult to configure each device without knowledge of the device's properties and of rendering. Therefore, in a multiple-user and multiple-display environment, obtaining the correct display for each device can be a challenge. This project focuses on investigating a solution that helps end users use different display devices easily. The proposed solution is to develop an application that can support the ad-hoc use of any display device without the system being preconfigured in advance, so that end users can obtain the correct visualisation output without any complex rendering configuration. We developed a client-server approach to this problem: the client application detects the properties of a device, and the server application uses these properties to configure the rendering software to generate the correct image for subsequent display on the device. The approach has been evaluated through many tests, and the results show that the application is useful in helping end users use different display devices in visualisation.
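    The abstract describes a client-server design in which the client reports device properties and the server uses them to configure rendering. The sketch below is a minimal, hypothetical illustration of that exchange; the property names, message format, and configuration rules are assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch of the client-server exchange described above: the client
# detects display properties and sends them to the server, which derives a
# rendering configuration. Property names and the JSON protocol are assumptions.
import json
import socket


def describe_device() -> dict:
    # In a real client these values would be queried from the device/OS.
    return {"width": 1024, "height": 768, "stereo": False, "device_type": "PDA"}


def choose_render_config(props: dict) -> dict:
    # Server side: pick rendering settings from the reported properties.
    return {
        "resolution": [props["width"], props["height"]],
        "stereo_pair": props["stereo"],  # render left/right views only if supported
        "image_format": "jpeg" if props["device_type"] == "PDA" else "png",
    }


def send_properties(host: str, port: int) -> dict:
    # Client side: report properties and receive the server's chosen configuration.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(describe_device()).encode("utf-8"))
        return json.loads(sock.recv(4096).decode("utf-8"))
```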

    Exploration and Implementation of Augmented Reality for External Beam Radiotherapy

    We have explored applications of Augmented Reality (AR) for external beam radiotherapy to assist with treatment planning, patient education, and treatment delivery. We created an AR development framework for applications in radiotherapy (RADiotherapy Augmented Reality, RAD-AR) for AR-ready consumer electronics such as tablet computers and head-mounted devices (HMDs). In RAD-AR we implemented three tools to assist radiotherapy practitioners with treatment plan evaluation, patient pre-treatment information/education, and treatment delivery. We estimated the accuracy and precision of the patient setup tool and the underlying self-tracking technology, and the fidelity of AR content geometric representation, on the Apple iPad tablet computer and the Microsoft HoloLens HMD. Results showed that the technology could already be applied for detection of large treatment setup errors, and could become applicable to other aspects of treatment delivery subject to technological improvements that can be expected in the near future. We performed user feedback studies of the patient education and plan evaluation tools. Results indicated an overall positive user evaluation of AR technology compared to conventional tools for the radiotherapy elements implemented. We conclude that AR will become a useful tool in radiotherapy, bringing real benefits to both clinicians and patients and contributing to successful treatment outcomes.
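    The abstract reports estimating the accuracy and precision of the patient setup tool and its self-tracking. One conventional way to summarize such measurements is to treat accuracy as the mean displacement error and precision as its spread over repeated setups; the sketch below illustrates that computation with hypothetical error vectors, not the study's data.

```python
# Illustrative summary of setup error measurements: accuracy as the mean 3D
# displacement magnitude, precision as its standard deviation across repeated
# setups. The error vectors below are hypothetical.
import numpy as np

# Each row: measured (x, y, z) offset in mm between the AR overlay and the
# reference position for one repeated patient setup (made-up values).
setup_errors_mm = np.array([
    [1.2, -0.8, 0.5],
    [0.9, 1.1, -0.4],
    [-0.6, 0.7, 1.3],
    [1.5, -0.2, 0.6],
])

magnitudes = np.linalg.norm(setup_errors_mm, axis=1)
accuracy_mm = magnitudes.mean()        # mean displacement error
precision_mm = magnitudes.std(ddof=1)  # spread across repeated setups
print(f"accuracy ~ {accuracy_mm:.2f} mm, precision ~ {precision_mm:.2f} mm")
```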

    Immersive Visualization in Biomedical Computational Fluid Dynamics and Didactic Teaching and Learning

    Virtual reality (VR) can stimulate active learning, critical thinking, decision making, and improved performance. It requires a medium to show virtual content, called a virtual environment (VE). The MARquette Visualization Lab (MARVL) is an example of a VE. Robust processes and workflows that allow for the creation of content for use within MARVL further increase the user base for this valuable resource. A workflow was created to display biomedical computational fluid dynamics (CFD) results and complementary data in a wide range of VEs. This allows a researcher to study the simulation in its natural three-dimensional (3D) morphology. In addition, it is an exciting way to extract more information from CFD results by taking advantage of improved depth cues, a larger display canvas, custom interactivity, and an immersive approach that surrounds the researcher. The CFD-to-VR workflow was designed to be basic enough for a novice user. It is also used as a tool to foster collaboration between engineers and clinicians. The workflow aimed to support results from common CFD software packages and across clinical research areas. ParaView, Blender, and Unity were used in the workflow to take standard CFD files and process them for viewing in VR. Designated scripts were written to automate the steps implemented in each software package. The workflow was successfully completed across multiple biomedical vessels, scales, and applications, including the aorta with application to congenital cardiovascular disease, the Circle of Willis with respect to cerebral aneurysms, and the airway for surgical treatment planning. The workflow was completed by novice users in approximately an hour. Bringing VR further into didactic teaching within academia allows students to be fully immersed in their respective subject matter, thereby increasing the students’ sense of presence, understanding, and enthusiasm. MARVL is a space for collaborative learning that also offers an immersive, virtual experience. A workflow was created to view PowerPoint presentations in 3D using MARVL. The resulting Immersive PowerPoint workflow uses PowerPoint, Unity, and other open-source software packages to display the presentations in 3D. The Immersive PowerPoint workflow can be completed in under thirty minutes.
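    The abstract notes that designated scripts automate the ParaView, Blender, and Unity steps. As a hedged example of what the ParaView stage of such a pipeline can look like, the sketch below uses ParaView's paraview.simple Python module to extract a surface from a CFD result and save it for downstream import; the file names and output format are assumptions, and this is not the authors' actual script.

```python
# Hypothetical example of the ParaView stage of a CFD-to-VR pipeline: load a
# CFD result, extract its outer surface, and save a mesh that a downstream tool
# (e.g. Blender) can import. Run with ParaView's pvpython interpreter.
# File names and the chosen output format are assumptions for illustration.
from paraview.simple import OpenDataFile, ExtractSurface, SaveData

cfd_result = OpenDataFile("aorta_simulation.vtu")  # hypothetical CFD output file
surface = ExtractSurface(Input=cfd_result)         # polygonal surface suitable for VR
SaveData("aorta_surface.ply", proxy=surface)       # mesh for Blender/Unity import
```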

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective, and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing anatomical variations that present clinically, failing to challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects performed abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation – an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcome. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.
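    The targeting experiments in the fourth chapter relate accuracy to target depth estimation. One simple way to quantify this, sketched below with hypothetical coordinates, is to split the targeting error into along-axis (depth) and in-plane components relative to the approach direction; this illustrates the idea only and is not the thesis's exact metric.

```python
# Hypothetical decomposition of a targeting error into a depth component (along
# the approach axis) and an in-plane component, as one way to relate targeting
# accuracy to depth estimation. All coordinates are made up.
import numpy as np

target = np.array([10.0, 25.0, 40.0])   # intended target position (mm)
attempt = np.array([11.5, 24.0, 46.0])  # where the probe actually ended up (mm)
entry = np.array([0.0, 0.0, 0.0])       # entry point defining the approach axis

axis = (target - entry) / np.linalg.norm(target - entry)
error = attempt - target
depth_error = float(np.dot(error, axis))                          # over/undershoot along the path
in_plane_error = float(np.linalg.norm(error - depth_error * axis))  # lateral deviation
print(f"depth error {depth_error:.1f} mm, in-plane error {in_plane_error:.1f} mm")
```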