
    Apple Vision Pro for Healthcare: "The Ultimate Display"? -- Entering the Wonderland of Precision Medicine

    At the Worldwide Developers Conference (WWDC) in June 2023, Apple introduced the Vision Pro. The Vision Pro is a Mixed Reality (MR) headset; more specifically, it is a Virtual Reality (VR) device with an additional Video See-Through (VST) capability. The VST capability also turns the Vision Pro into an Augmented Reality (AR) device. The AR feature is enabled by streaming the real world via cameras to the (VR) screens in front of the user's eyes. This is, of course, not unique, and is similar to other devices, like the Varjo XR-3. Nevertheless, the Vision Pro has some interesting features, like an inside-out screen that can show the headset wearer's eyes to "outsiders", or a button on the top, called the "Digital Crown", that allows you to seamlessly blend digital content with your physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile compared to the Varjo XR-3. This could actually come closer to the "Ultimate Display", which Ivan Sutherland had already sketched in 1965. Since the Vision Pro, like the Ultimate Display, is not yet available to the public, in this perspective we take a look into the crystal ball to see whether it can overcome some of the clinical challenges that AR, especially, still faces in the medical domain, but also go beyond that and discuss whether the Vision Pro could support clinicians in essential tasks so that they can spend more time with their patients. Comment: This is a Preprint under CC BY. This work was supported by NIH/NIAID R01AI172875, NIH/NCATS UL1 TR001427, the REACT-EU project KITE and enFaced 2.0 (FWF KLI 1044). B. Puladi was funded by the Medical Faculty of the RWTH Aachen University as part of the Clinician Scientist Program. C. Gsaxner was funded by the Advanced Research Opportunities Program from the RWTH Aachen University.

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased and impersonal clustering results from the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during these two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus will go. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as to provide medical practitioners with the latest technology advancements.
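The two-stage idea the abstract describes, assigning each paper a weight over discovered topics and then aggregating those weights per year, can be sketched roughly as follows. This is a toy illustration, not the paper's pipeline: the keyword-matching "topics" stand in for the LDA model, and all papers, years, and keywords are invented.

```python
# Toy sketch of topic-trend analysis over a paper corpus.
# Keyword matching stands in for the LDA step; all data are invented.
from collections import defaultdict

TOPICS = {
    "tracking": ["tracking", "registration", "navigation"],
    "training": ["training", "simulation", "education"],
}

papers = [
    (1996, "augmented reality surgical navigation tracking"),
    (2005, "image guided registration and tracking"),
    (2014, "head mounted display training simulation"),
    (2015, "surgical education and training with AR"),
]

def topic_weights(text):
    """Normalised keyword hits per topic -- a crude stand-in for LDA inference."""
    words = text.split()
    hits = {t: sum(w in words for w in kws) for t, kws in TOPICS.items()}
    total = sum(hits.values()) or 1
    return {t: h / total for t, h in hits.items()}

# Aggregate topic weight by publication year to expose trends over time.
trend = defaultdict(lambda: defaultdict(float))
for year, abstract in papers:
    for topic, w in topic_weights(abstract).items():
        trend[year][topic] += w
```

Plotting each topic's aggregated weight against year then gives the kind of trend curves the review uses to show where the field's focus has moved.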

    Multimedia Application: Virtual Reality with 3D Graphics for Interactive Environment in Medical Education

    Technology evolution and the need to modernise teaching have led to the design of virtual reality applications in medical education. The current study aims to create an interactive environment using three-dimensional (3D) models of the human arterial system. The 3D arterial models allow undergraduate medical students to more easily memorise the main arterial branching pattern after intra-arterial navigation. Using the application, students can interact enjoyably during navigation and can continue or repeat the intra-arterial navigation as part of the learning process. The study compares two groups of students according to whether or not they had followed the arterial system anatomy course and passed its examination. The results showed no difference between the two groups in their evaluation of the virtual reality application, and no gender differences. Digital applications, although complex, offer great advantages, such as learning without jeopardizing the human body and the possibility of multiple repetitions, which allows students to fully understand the educational subject.

    Mobile and Low-cost Hardware Integration in Neurosurgical Image-Guidance

    It is estimated that 13.8 million patients per year require neurosurgical interventions worldwide, be it for cerebrovascular disease, stroke, tumour resection, or epilepsy treatment, among others. These procedures involve navigating through and around complex anatomy in an organ where damage to eloquent healthy tissue must be minimized. Neurosurgery thus has very specific constraints compared to most other domains of surgical care, and these constraints have made it particularly suitable for integrating new technologies. Any new method that has the potential to improve surgical outcomes is worth pursuing, as it can not only save and prolong patients' lives, but also increase their quality of life post-treatment. In this thesis, novel neurosurgical image-guidance methods are developed, making use of currently available, low-cost off-the-shelf components. In particular, a mobile device (e.g. a smartphone or tablet) is integrated into a neuronavigation framework to explore new augmented reality visualization paradigms and novel intuitive interaction methods. The developed tools aim at making image guidance more intuitive and easier to use through augmented reality. Further, we use gestures on the mobile device to increase interactivity with the neuronavigation system and to provide solutions to the accuracy loss, or brain shift, that occurs during surgery. Lastly, we explore the effectiveness and accuracy of low-cost hardware components (i.e. tracking systems and ultrasound) that could replace the high-cost hardware integrated into current commercial image-guided neurosurgery systems. The results of our work show the feasibility of using mobile devices to improve neurosurgical processes. Augmented reality enables surgeons to focus on the surgical field while getting intuitive guidance information.
    Mobile devices also allow for easy interaction with the neuronavigation system, enabling surgeons to interact directly with systems in the operating room to improve accuracy and streamline procedures. Lastly, our results show that low-cost components can be integrated into a neurosurgical guidance system at a fraction of the cost, while having a negligible impact on accuracy. The developed methods have the potential to improve surgical workflows, as well as to democratize access to higher quality care worldwide.

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research typically produces a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability.
    Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run-time, but they require different coding implementations for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device.
    Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization, so the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features were identical; the same fMRI dataset therefore resulted in the same 3D visualization independent of the platform.
    Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine if they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics computer cluster built on NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets. Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously.
    However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
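The accumulation step at the heart of the raycasting approach described above can be illustrated in a few lines. This is a minimal CPU sketch under toy assumptions (a random volume, a trivial linear transfer function, rays marched along one array axis); a real GPU raycaster samples along arbitrary ray directions and blends the anatomical and functional volumes.

```python
# Minimal sketch of front-to-back alpha compositing, the core accumulation
# step of volume raycasting. Volume and transfer function are toy assumptions.
import numpy as np

def composite_rays(volume, opacity_scale=0.5):
    """Front-to-back compositing along axis 0 (one ray per (y, x) pixel)."""
    color = np.zeros(volume.shape[1:])
    alpha = np.zeros(volume.shape[1:])
    for sample in volume:                              # march front to back
        a = np.clip(sample * opacity_scale, 0.0, 1.0)  # trivial transfer function
        color += (1.0 - alpha) * a * sample            # weighted intensity
        alpha += (1.0 - alpha) * a                     # accumulated opacity
        if np.all(alpha > 0.99):                       # early ray termination
            break
    return color, alpha

rng = np.random.default_rng(0)
vol = rng.random((16, 8, 8))       # depth x height x width
img, a = composite_rays(vol)       # img is an 8x8 rendered "image"
```

The early-ray-termination test is one of the standard optimizations that makes real-time raycasting feasible on constrained mobile GPUs.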

    Computerized Evaluation of Microsurgery Skills Training

    The style of imparting medical training has evolved over the years. The traditional methods of teaching and practicing basic surgical skills under the apprenticeship model no longer occupy first place in modern, technically demanding surgical disciplines like neurosurgery. Furthermore, legal and ethical concerns for patient safety, as well as cost-effectiveness, have forced neurosurgeons to master the necessary microsurgical techniques to accomplish the desired results. This has led to an increased emphasis on the assessment of the clinical and surgical techniques of neurosurgeons. However, the subjective assessment of microsurgical techniques like micro-suturing under the apprenticeship model cannot be completely unbiased. A few initiatives using computer-based techniques have been made to introduce objective evaluation of surgical skills. This thesis presents a novel approach involving computerized evaluation of different components of micro-suturing techniques, to eliminate the bias of subjective assessment. The work involved acquisition of cine clips of micro-suturing activity on synthetic material. Image processing and computer vision techniques were then applied to these videos to assess different characteristics of micro-suturing, viz. speed, dexterity and effectualness. In parallel, subjective grading was done by a senior neurosurgeon. A correlation and comparative study of both assessments was then carried out to analyze the efficacy of objective versus subjective evaluation.
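One of the video-derived characteristics mentioned above, speed, can be approximated very simply from inter-frame change. The sketch below is an illustration only, not the thesis's method: real pipelines would track the instruments or compute optical flow, and the "frames" here are synthetic arrays.

```python
# Hedged sketch of an objective motion metric from surgical video:
# mean absolute inter-frame difference as a crude proxy for instrument speed.
# Frames are synthetic toy arrays, not real micro-suturing footage.
import numpy as np

def motion_energy(frames):
    """Per-step mean absolute difference between consecutive frames."""
    return [float(np.abs(b - a).mean()) for a, b in zip(frames, frames[1:])]

still = [np.full((32, 32), 0.5)] * 3                         # no movement
moving = [np.roll(np.eye(32), k, axis=1) for k in range(3)]  # shifting pattern

print(motion_energy(still))   # zeros: nothing moves
print(motion_energy(moving))  # positive: the pattern shifts each frame
```

Summary statistics of such a signal over a suturing clip (mean, variance, smoothness) could then be correlated against an expert's subjective grades, as the thesis does with its richer metrics.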

    Interactive 3D Digital Models for Anatomy and Medical Education

    This chapter explores the creation and use of interactive, three-dimensional (3D), digital models for anatomy and medical education. Firstly, it looks back over the history and development of virtual 3D anatomy resources before outlining some of the current means of their creation, including photogrammetry, CT and surface scanning, and digital modelling, with advantages and disadvantages for each. Various means of distribution are explored, including virtual learning environments, websites, interactive PDFs, virtual and augmented reality, bespoke applications, and 3D printing, with a particular focus on the level of interactivity each method offers. Finally, and perhaps most importantly, the use of such models for education is discussed. Questions addressed include: How can such models best be used to enhance student learning? How can they be used in the classroom? How can they be used for self-directed study? The chapter also explores whether they could one day replace human specimens, and how they complement the rise of online and e-learning.

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work was funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.

    Exploration and Implementation of Augmented Reality for External Beam Radiotherapy

    We have explored applications of Augmented Reality (AR) for external beam radiotherapy to assist with treatment planning, patient education, and treatment delivery. We created an AR development framework for applications in radiotherapy (RADiotherapy Augmented Reality, RAD-AR) for AR-ready consumer electronics such as tablet computers and head-mounted devices (HMDs). In RAD-AR we implemented three tools to assist radiotherapy practitioners with treatment plan evaluation, patient pre-treatment information/education, and treatment delivery. We estimated the accuracy and precision of the patient setup tool and the underlying self-tracking technology, and the fidelity of AR content geometric representation, on the Apple iPad tablet computer and the Microsoft HoloLens HMD. Results showed that the technology can already be applied to the detection of large treatment setup errors, and could become applicable to other aspects of treatment delivery subject to technological improvements that can be expected in the near future. We performed user feedback studies of the patient education and plan evaluation tools. Results indicated an overall positive user evaluation of AR technology compared to conventional tools for the radiotherapy elements implemented. We conclude that AR will become a useful tool in radiotherapy, bringing real benefits for both clinicians and patients and contributing to successful treatment outcomes.
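One plausible way to quantify the setup errors mentioned above is a root-mean-square distance between planned and observed marker positions after the AR overlay. This is an illustrative sketch only, not RAD-AR's actual implementation; the marker coordinates and offsets are invented.

```python
# Illustrative sketch (not RAD-AR's code) of quantifying patient-setup error:
# RMS Euclidean distance between planned and observed 3-D marker positions.
import numpy as np

def setup_rmse(planned, observed):
    """RMS distance (same units as input, e.g. mm) over corresponding markers."""
    d = np.linalg.norm(planned - observed, axis=1)
    return float(np.sqrt((d ** 2).mean()))

planned = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0]])      # toy markers, mm
observed = planned + np.array([[1.0, 0, 0], [0, 2, 0], [0, 0, 2]])
err = setup_rmse(planned, observed)   # a large value flags a gross setup error
```

Comparing such a score against a clinical tolerance is consistent with the abstract's finding that the technology can already detect large setup errors, even before sub-millimetre tracking accuracy is available.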

    Physical and statistical shape modelling in craniomaxillofacial surgery: a personalised approach for outcome prediction

    Orthognathic surgery involves repositioning of the jaw bones to restore facial function and shape for patients who require an operation as a result of a syndrome, growth disturbances in childhood, or trauma. As part of the preoperative assessment, three-dimensional medical imaging and computer-assisted surgical planning help to improve outcomes and save time and cost. Computer-assisted surgical planning involves visualisation and manipulation of the patient anatomy and can be used to aid objective diagnosis, patient communication, outcome evaluation, and surgical simulation. Despite the benefits, the adoption of three-dimensional tools has remained limited beyond specialised hospitals, and traditional two-dimensional cephalometric analysis is still the gold standard. This thesis presents a multidisciplinary approach to innovative surgical simulation involving clinical patient data, medical image analysis, engineering principles, and state-of-the-art machine learning and computer vision algorithms. Two novel three-dimensional computational models were developed to overcome the limitations of current computer-assisted surgical planning tools. First, a physical modelling approach – based on a probabilistic finite element model – provided patient-specific simulations and, through training and validation, population-specific parameters. The probabilistic model was equally accurate compared to two commercial programs whilst giving additional information regarding uncertainties relating to the material properties and the mismatch in bone position between planning and surgery. Second, a statistical modelling approach was developed that presents a paradigm shift in its modelling formulation and use. Specifically, a 3D morphable model was constructed from 5,000 non-patient and orthognathic patient faces for fully-automated diagnosis and surgical planning.
    Contrary to traditional physical models that are limited to a finite number of tests, the statistical model employs machine learning algorithms to provide the surgeon with a goal-driven, patient-specific surgical plan. The findings in this thesis provide markers for future translational research and may accelerate the adoption of next-generation surgical planning tools to further supplement the clinical decision-making process and ultimately improve patients’ quality of life.
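The statistical-shape-model idea behind a 3D morphable model can be sketched with plain PCA: vectorise each training face mesh, compute the mean shape, and take the leading modes of variation; new faces are then the mean plus a weighted sum of modes. The sketch below uses random toy "meshes", not the thesis's 5,000-face dataset or its algorithms.

```python
# Toy PCA-based shape model, the core construction behind 3D morphable models.
# "Meshes" here are random vectors (10 vertices x xyz), purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
shapes = rng.random((50, 30))            # 50 training faces, 30 coords each

mean = shapes.mean(axis=0)               # mean face shape
centered = shapes - mean
# SVD of the centred data: rows of Vt are the principal shape modes.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

def synthesize(coeffs, n_modes=3):
    """New face = mean shape + weighted sum of the leading shape modes."""
    return mean + coeffs @ Vt[:n_modes]

face = synthesize(np.array([0.5, -0.2, 0.1]))   # one synthetic face vector
```

Fitting such a model to a patient scan and projecting onto the non-patient part of the shape space is one way a goal-driven target face, and hence a surgical plan, could be derived; the thesis's actual formulation is more sophisticated.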