
    Virtual and Augmented Reality in Neurosurgery: The Evolution of its Application and Study Designs.

    BACKGROUND: As the art of neurosurgery evolves in the 21st century, more emphasis is placed on minimally invasive techniques, which require technical precision. Simultaneously, the reduction in training hours continues, and teachers of neurosurgery face double jeopardy: harder skills to teach and less time to teach them. Mixed reality appears as the neurosurgical educators' natural ally: virtual reality facilitates the learning of spatial relationships and permits rehearsal of skills, while augmented reality can make procedures safer and more efficient. Little wonder, then, that the body of literature on mixed reality in neurosurgery has grown exponentially. METHODS: Publications involving virtual and augmented reality in neurosurgery were examined. A total of 414 papers were included, categorized according to study design, and analyzed. RESULTS: Half of the papers were published within the last 3 years alone. Whereas in the earlier half most of the publications involved experiments in virtual reality simulation and the efficacy of skills acquisition, many of the more recent publications are proof-of-concept studies. This attests to the evolution of mixed reality in neurosurgery. As the technology advances, neurosurgeons are finding more applications, both in training and in clinical practice. CONCLUSIONS: With parallel advancement in Internet speed and artificial intelligence, the utilization of mixed reality will permeate neurosurgery. From solving staffing problems in global neurosurgery, to mitigating the deleterious effects of duty-hour reductions, to improving individual operations, mixed reality will have a positive effect on many aspects of neurosurgery.

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally invasive surgery, where restricted access to the operative field in conjunction with a limited field of view necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training for planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully immersive simulation environments in surgeons' non-technical skills for performing the vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.
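The contour-enhancement idea in the second chapter can be illustrated with a minimal sketch. The thesis's actual algorithm is not detailed in this abstract, so the gradient-magnitude overlay below is a hypothetical stand-in, and `weight` is an assumed tuning parameter:

```python
def enhance_contours(img, weight=0.5):
    """Boost boundaries by adding the local gradient magnitude back
    onto the image (a hypothetical stand-in for the thesis's method).
    img is a 2-D list of floats in [0, 1]; returns a new 2-D list."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Central differences with edge clamping.
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            mag = (gx * gx + gy * gy) ** 0.5
            out[y][x] = min(1.0, img[y][x] + weight * mag)
    return out
```

On a synthetic step edge, pixels at the boundary are brightened while flat regions pass through unchanged, which is the behaviour one would want when delineating vessel boundaries.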

    Augmented and virtual reality in surgery—the digital surgical environment: applications, limitations and legal pitfalls

    The continuing enhancement of the surgical environment in the digital age has led to a number of innovations being highlighted as potential disruptive technologies in the surgical workplace. Augmented reality (AR) and virtual reality (VR) are rapidly becoming more available, accessible and, importantly, affordable, hence their application in healthcare to enhance the medical use of data is certain. Whether it relates to anatomy, intraoperative surgery, or post-operative rehabilitation, applications are already being investigated for their role in the surgeon's armamentarium. Here we provide an introduction to the technology and the potential areas of development in the surgical arena.

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased and impersonal clustering results from the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during the two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus is heading. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as to provide medical practitioners with the latest technology advancements.
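The trend-analysis step described above can be sketched minimally: once each paper has been assigned a dominant topic (by an LDA model upstream, not shown here), counting each topic's share of publications per year yields the trend curves. The topic labels and data below are invented for illustration:

```python
from collections import Counter, defaultdict

def topic_trends(papers):
    """papers: iterable of (year, topic_id) pairs, where topic_id is the
    dominant topic already assigned to each paper (e.g. by LDA).
    Returns {topic_id: {year: share}}, each topic's fraction of that
    year's publications."""
    per_year = defaultdict(Counter)
    for year, topic in papers:
        per_year[year][topic] += 1
    trends = defaultdict(dict)
    for year, counts in per_year.items():
        total = sum(counts.values())
        for topic, n in counts.items():
            trends[topic][year] = n / total
    return trends

# Illustrative data only, not the paper's corpus.
papers = [(1995, "simulation"), (1995, "simulation"), (1995, "guidance"),
          (2015, "guidance"), (2015, "guidance")]
trends = topic_trends(papers)
```

Plotting each topic's per-year share over 1995-2015 is what reveals which research directions are growing and which are fading.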

    Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping

    The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.
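The reported sensitivity and specificity follow the standard confusion-matrix definitions; the counts below are illustrative only, not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of true tumors detected.
    Specificity = TN / (TN + FP): fraction of tumor-free sites
    correctly identified as clear."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 50 tumors all detected, 1 false positive among
# 50 tumor-free sites.
sens, spec = sensitivity_specificity(tp=50, fn=0, tn=49, fp=1)
```

With these made-up counts, sensitivity is 1.0 (100%) and specificity 0.98 (98%), matching the form of the figures quoted in the abstract.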

    Goggle Augmented Imaging and Navigation System for Fluorescence-Guided Surgery

    Surgery remains the only curative option for most solid tumors. The standard of care usually involves tumor resection and sentinel lymph node biopsy for cancer staging. Surgeons rely on their vision and touch to distinguish healthy from cancerous tissue during surgery, often leading to incomplete tumor resection that necessitates repeat surgery. Sentinel lymph node biopsy by conventional radioactive tracking exposes patients and caregivers to ionizing radiation, while blue-dye tracking stains the tissue, highlighting only superficial lymph nodes. Improper identification of sentinel lymph nodes may lead to misdiagnosis of the cancer's stage. Therefore, there is a clinical need for accurate intraoperative tumor and sentinel lymph node visualization. Conventional imaging modalities such as x-ray computed tomography, positron emission tomography, magnetic resonance imaging, and ultrasound are excellent for preoperative cancer diagnosis and surgical planning. However, they are not suitable for intraoperative use, due to bulky, complicated hardware, high cost, non-real-time imaging, severe restrictions to the surgical workflow, and lack of sufficient resolution for tumor boundary assessment. This has propelled interest in fluorescence-guided surgery, owing to the availability of simple hardware that can achieve real-time, high-resolution and sensitive imaging. Near-infrared fluorescence imaging is of particular interest due to low background absorbance by photoactive biomolecules, enabling thick-tissue assessment. As a result, several near-infrared fluorescence-guided surgery systems have been developed. However, they are limited by bulky hardware, disruptive information display, and a field of view that does not match the user's. To address these limitations, we have developed a compact, light-weight and wearable goggle augmented imaging and navigation system (GAINS). It detects the near-infrared fluorescence from a tumor-accumulated contrast agent along with the normal color view, and displays accurately aligned color-fluorescence images in real-time via a head-mounted display worn by the surgeon. GAINS is a platform technology capable of very sensitive fluorescence detection. Image display options include both video see-through and optical see-through head-mounted displays, for high-contrast image guidance as well as direct visual access to the surgical bed. Image capture options, from a large-field-of-view camera as well as a high-magnification handheld microscope, ensure macroscopic as well as microscopic assessment of the tumor bed. Aided by tumor-targeted near-infrared contrast agents, GAINS guided complete tumor resection in subcutaneous, metastatic and spontaneous mouse models of cancer with high sensitivity and specificity, in real-time. Using a clinically approved near-infrared contrast agent, GAINS provided real-time image guidance for accurate visualization of lymph nodes in a porcine model and of sentinel lymph nodes in human breast cancer and melanoma patients with high sensitivity. This work has addressed issues that have limited clinical adoption of fluorescence-guided surgery and paved the way for research into developing this approach towards a standard-of-care practice that can potentially improve surgical outcomes in cancer.
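The core display step, blending the pseudo-colored NIR fluorescence signal into the natural color view, can be sketched per pixel as follows. The threshold, tint color, and signal-strength blending rule are assumptions for illustration, not GAINS's published parameters:

```python
def overlay_fluorescence(color_px, nir_value, threshold=0.2, tint=(0, 255, 0)):
    """Blend a pseudo-colored fluorescence signal onto one RGB pixel.
    color_px: (r, g, b) in 0-255; nir_value: normalized NIR intensity
    in [0, 1]. Pixels below threshold pass through unchanged."""
    if nir_value < threshold:
        return color_px
    a = nir_value  # use signal strength as the blend weight (assumed rule)
    return tuple(round((1 - a) * c + a * t) for c, t in zip(color_px, tint))
```

Applied to every aligned pixel pair from the color and NIR cameras, this produces the composite the surgeon sees: normal anatomy everywhere, with fluorescent tumor regions tinted in proportion to signal strength.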

    Vision-Aided Indoor Pedestrian Dead Reckoning

    Vision-aided inertial navigation has recently become a more popular method for indoor positioning. This popularity is largely due to the development of lightweight and low-cost Micro Electro-Mechanical Systems (MEMS) as well as the advancement and availability of CCD cameras in public indoor areas. While inertial sensors are limited by drift accumulation and cameras by line-of-sight object detection, the integration of these two sensors can compensate for their respective drawbacks and provide more accurate positioning solutions. This study builds upon earlier research on a "Vision-Aided Indoor Pedestrian Tracking System" to address the challenges of indoor positioning by providing more accurate and seamless solutions. The study improves the overall design and implementation of inertial sensor fusion for indoor applications. In this regard, genuine indoor maps and geographical information, i.e. digitized floor plans, are used for the visual tracking application in the pilot study. Both the inertial positioning and visual tracking components can work stand-alone with additional location information from the maps. In addition, while the visual tracking component can help calibrate pedestrian dead reckoning and provide better accuracy, the inertial sensing module can alternatively be used for positioning and tracking when the user cannot be detected by the camera, until the user appears in the video again. The mean accuracy of this positioning system was 10.98% higher than uncalibrated inertial positioning in the experiments.
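The fusion scheme described above, dead reckoning between camera detections with a correction whenever the camera sees the user, can be sketched as follows. The constant-step-length model and the fixed blending gain are simplifying assumptions for illustration, not the paper's actual filter:

```python
import math

def pdr_step(pos, heading_rad, step_len):
    """Pedestrian dead-reckoning update: advance one detected step
    along the current heading (constant step length assumed)."""
    x, y = pos
    return (x + step_len * math.cos(heading_rad),
            y + step_len * math.sin(heading_rad))

def fuse_with_camera(pdr_pos, cam_pos, gain=0.8):
    """When the camera detects the pedestrian, pull the drifting PDR
    estimate toward the visual fix; gain is an assumed tuning constant."""
    return tuple((1 - gain) * p + gain * c for p, c in zip(pdr_pos, cam_pos))
```

Between detections the estimate evolves by `pdr_step` alone and drift accumulates; each visual fix then resets most of that drift, which is the calibration effect the abstract credits for the accuracy gain.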

    Immersive 360° video for forensic education

    Training in forensic crime scene investigation is a vital part of the overall training process within police academies and forensic programs throughout the world. However, the exposure of trainee forensic officers to real-life scenes by instructors is minimal, due to the delicate nature of the information presented within them and the overall difficulty of forensic investigations. Virtual reality (VR) is computer technology utilising headsets to produce lifelike imagery, sounds and perceptions, simulating physical presence inside a virtual setting. The user is able to look around the virtual world and often interact with virtual landscapes or objects. VR headsets are head-mounted goggles with a screen in front of the eyes (Burdea & Coiffet 2003). The use of VR varies widely, from personal gaming to classroom learning; uses also include computerised tools that are used solely online. Within forensic science, VR is currently used in several capacities, including the training and examination of new forensic officers. However, there is minimal review and validation of the efficacy of VR for teaching forensic investigation. This is surprising, as the VR field has expanded rapidly in the education of many varying fields over the past few years. Even though VR could enhance forensic training by offering another, perhaps more versatile and engaging, way of learning, no dedicated VR application has yet been commercially implemented for forensic examination education. Research into VR is a fairly young field; however, the technology and its use are still growing rapidly, and the improvement of interactive tools is inevitably having an impact on all facets of learning and teaching.