240 research outputs found

    The HoloLens in Medicine: A Systematic Review and Taxonomy

    The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality display, is the main driver of the recent surge in medical augmented reality research. In medical settings, the HoloLens enables the physician to obtain immediate insight into patient information, directly overlaid with their view of the clinical scenario; the medical student to gain a better understanding of complex anatomies or procedures; and even the patient to execute therapeutic tasks with improved, immersive guidance. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021, when attention shifted towards its successor, the HoloLens 2. We identified 171 relevant publications through a systematic search of the PubMed and Scopus databases. We analyze these publications with regard to their intended use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation. We find that, although the feasibility of using the HoloLens in various medical scenarios has been demonstrated, increased efforts in the areas of precision, reliability, usability, workflow and perception are necessary to establish AR in clinical practice.
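
    The review's registration-and-tracking axis refers to techniques for aligning preoperative data with the patient as seen through the display. As a hedged illustration of the kind of computation involved (not a method taken from the review), the sketch below fits a rigid transform to matched fiducial points with a Kabsch/SVD approach; the function name and point sets are hypothetical.

```python
# Hedged illustration (not from the review): point-based rigid registration,
# the kind of step needed to align preoperative data with tracked fiducials
# before overlaying it on the clinician's view. Function and data are hypothetical.
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Kabsch-style fit of R, t so that R @ source_i + t ~= target_i.

    source, target: (N, 3) arrays of corresponding fiducial points.
    Returns (R, t): a 3x3 rotation matrix and a translation vector.
    """
    src_centroid, tgt_centroid = source.mean(axis=0), target.mean(axis=0)
    src_c, tgt_c = source - src_centroid, target - tgt_centroid

    # Cross-covariance + SVD gives the least-squares optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Toy usage: recover a known pose from six slightly noisy fiducials.
rng = np.random.default_rng(0)
fiducials = rng.uniform(-0.1, 0.1, size=(6, 3))       # image-space points (metres)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.02, 0.10])
observed = fiducials @ R_true.T + t_true + rng.normal(scale=1e-4, size=(6, 3))

R_est, t_est = rigid_register(fiducials, observed)
fre = np.linalg.norm(fiducials @ R_est.T + t_est - observed, axis=1).mean()
print(f"mean fiducial registration error: {fre * 1e3:.3f} mm")
```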

    VR-CHEM: Developing a virtual reality interface for molecular modelling

    VR-CHEM is a prototype virtual reality molecular modelling program with a modern 3D user interface. In this thesis, the author discusses the research behind the development of the prototype, provides a detailed description of the program and its features, and reports on the user tests. The research includes a review of previous programs of a similar category that have appeared in the literature; some are related to chemistry and molecular modelling, while others focus on 3D input techniques. The prototype thus contributes by exploring the design of the user interface and how it can affect productivity in this category of programs. The prototype is subjected to a pilot user test to evaluate what further development is required. Based on this, the thesis proposes that 3D interfaces, while capable of several unique tasks, have yet to overcome significant drawbacks such as limited accuracy and precision. It also suggests that virtual reality can aid spatial understanding, but that virtual hands and controllers are far inferior to real hands for even basic tasks due to a lack of tactile feedback.

    Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens

    Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advances in devices, along with improvements in surgical technique, have turned cryosurgery from an experimental into an established option for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient's preoperative images. Several works have aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet key requirements for routine clinical use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and preoperative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits the 3D printing and augmented reality capabilities of the Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, make surgical planning easier than the traditional method, and reinforce memorization of the surgical plan.
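
    The planning platform builds on heat transfer simulations of cryoablation. As a rough, hedged sketch of what such a simulation involves (not the platform's solver, which would model perfusion and phase change, e.g. via the Pennes bioheat equation), the snippet below diffuses heat away from a single cryoprobe tip on a 2D grid with an explicit finite-difference scheme and reports the approximate radius of the -40 °C lethal isotherm; all parameters are illustrative.

```python
# Hedged sketch only (not the platform's solver): explicit finite-difference heat
# diffusion around a single cryoprobe tip on a 2D tissue grid. A realistic model
# would include perfusion and phase change; the parameters below are illustrative.
import numpy as np

nx = ny = 101                         # grid points
dx = 1e-3                             # 1 mm spacing
alpha = 1.3e-7                        # soft-tissue thermal diffusivity, m^2/s (approx.)
dt = 0.2 * dx**2 / alpha              # time step within the explicit stability limit
steps = 400                           # roughly a 10-minute freeze cycle

T = np.full((ny, nx), 37.0)           # start at body temperature everywhere
probe = (ny // 2, nx // 2)            # cryoprobe tip at the grid centre

for _ in range(steps):
    T[probe] = -150.0                 # probe held at cryogenic temperature
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T = T + alpha * dt * lap
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 37.0    # far-field boundary

# Treat the -40 degC isotherm as the lethal zone and report its equivalent radius.
lethal_radius_mm = np.sqrt((T < -40.0).sum() / np.pi) * dx * 1e3
print(f"approx. lethal-isotherm radius after {steps * dt:.0f} s: {lethal_radius_mm:.1f} mm")
```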

    Using machine learning to support better and intelligent visualisation for genomic data

    Massive amounts of genomic data are being created with the advent of Next Generation Sequencing technologies. Great technological advances in methods of characterising human diseases, including their genetic and environmental factors, present an opportunity to understand these diseases and to find new diagnoses and treatments. Translating medical data is becoming ever richer and more challenging. Visualisation can greatly aid the processing and integration of complex data. Genomic data visual analytics is rapidly evolving alongside advances in high-throughput technologies such as Artificial Intelligence (AI) and Virtual Reality (VR). Personalised medicine requires new genomic visualisation tools that can efficiently extract knowledge from genomic data and speed up expert decisions about the best treatment for an individual patient's needs. However, meaningful visual analysis of such large genomic data remains a serious challenge: visualising these complex data requires not only plotting them but should also lead to better decisions. Machine learning can make predictions and aid decision-making. Machine learning and visualisation are both effective ways to deal with big data, but they serve different purposes: machine learning applies statistical learning techniques to automatically identify patterns in data and make highly accurate predictions, while visualisation leverages the human perceptual system to interpret and uncover hidden patterns in big data. Clinicians, experts and researchers intend to use both visualisation and machine learning to analyse their complex genomic data, but it remains a serious challenge for them to understand and trust machine learning models in the medical domain. The main goal of this thesis is to study the feasibility of intelligent, interactive visualisation combined with machine learning algorithms for medical data analysis. A prototype has also been developed to illustrate the concept that visualising genomic data from childhood cancers in meaningful and dynamic ways could lead to better decisions. Machine learning algorithms are used and illustrated while visualising the cancer genomic data in order to provide highly accurate predictions. This research could open a new and exciting path to discovery for disease diagnostics and therapies.
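
    To make the combination of machine learning and visualisation concrete, here is a minimal, hedged sketch (not the thesis prototype): a classifier trained on a synthetic stand-in for an expression matrix supplies predictions and feature importances, and a plot exposes those importances so a viewer can judge which "genes" drive the model's calls. It assumes scikit-learn and matplotlib; all data and names are illustrative.

```python
# Conceptual sketch only (not the thesis prototype): couple an ML model's
# predictions and feature importances with a plot for visual inspection.
# Assumes scikit-learn and matplotlib; the data here are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an expression matrix: 300 samples x 50 "genes".
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Visualise what the model relies on: rank features by importance.
order = np.argsort(model.feature_importances_)[::-1][:15]
plt.bar(range(len(order)), model.feature_importances_[order])
plt.xticks(range(len(order)), [f"gene_{i}" for i in order], rotation=90)
plt.ylabel("feature importance")
plt.title("Top features driving the classifier's predictions")
plt.tight_layout()
plt.show()
```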

    THE FUTURE OF DIGITAL WORK - USE CASES FOR AUGMENTED REALITY GLASSES

    Microsoft’s HoloLens enables true augmented reality (AR) by placing virtual objects within the real world. This paper aims at presenting trades (based on ISIC) that can benefit from AR, as well as possible use cases. Firstly, the authors conducted a systematic literature search to identify relevant papers. Six databases (including EBSCOhost, ScienceDirect and SpringerLink) were scanned for the term “HoloLens”. Out of 680 results, two researchers identified 150 articles as thematically relevant. Secondly, these papers were analysed using qualitative content analysis. Findings reveal 26 trades where AR glasses are in use for practice or research purposes; the most frequent are human health, education and research. In addition, we provide a catalogue of 7 main use cases, such as Process Guidance or Data Access and Visualisation, as well as 27 sub use cases addressing the corresponding functionalities in more detail. The results of this paper are trades and application scenarios for AR glasses. Thus, this article contributes to research in the field of service systems design, especially AR glasses-based service systems, and provides evidence for the future of digital work.

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data and a crucial part of building and debugging analysis pipelines; it is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and their associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies in which scenery provides tangible benefit in developmental and systems biology. With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
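
    The core idea behind Bionic Tracking, as described above, is to follow a cell by repeatedly casting the viewer's gaze ray through the volume. The sketch below is a hedged, simplified illustration of that idea in plain NumPy (it is not scenery's API): sample the volume along a gaze ray and take the brightest sample as the tracked cell's position; the function and data are hypothetical.

```python
# Hedged sketch of gaze-based cell tracking (not scenery's API): sample the
# volume along the viewer's gaze ray and pick the brightest sample as the
# tracked cell's position for that timepoint.
import numpy as np

def track_along_gaze(volume: np.ndarray, origin: np.ndarray, direction: np.ndarray,
                     n_samples: int = 256):
    """Return the voxel coordinate of the brightest sample on the gaze ray.

    volume:    3D intensity array (z, y, x)
    origin:    ray origin in voxel coordinates
    direction: ray direction (need not be normalised)
    """
    direction = direction / np.linalg.norm(direction)
    max_extent = np.linalg.norm(volume.shape)
    ts = np.linspace(0.0, max_extent, n_samples)
    points = origin[None, :] + ts[:, None] * direction[None, :]

    # Keep only samples inside the volume, then read their intensities.
    inside = np.all((points >= 0) & (points <= np.array(volume.shape) - 1), axis=1)
    points = points[inside].round().astype(int)
    if len(points) == 0:
        return None
    intensities = volume[points[:, 0], points[:, 1], points[:, 2]]
    return points[np.argmax(intensities)]

# Toy example: a bright "cell" at (40, 25, 30) in a noisy 64^3 volume.
rng = np.random.default_rng(1)
vol = rng.normal(0.0, 0.05, size=(64, 64, 64))
vol[40, 25, 30] = 1.0
hit = track_along_gaze(vol, origin=np.array([0.0, 0.0, 0.0]),
                       direction=np.array([40.0, 25.0, 30.0]))
print("estimated cell position:", hit)
```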

    Review of innovative immersive technologies for healthcare applications

    Immersive technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), can connect people through enhanced data visualizations that better involve stakeholders as integral members of the process. Immersive technologies have started to change research on multidimensional genomic data analysis for disease diagnostics and treatment. They are highlighted in research addressing health and clinical needs, especially precision medicine innovation, and their use for genomic data analysis has recently received attention from the research community. Genomic data analytics research seeks to integrate immersive technologies to build more natural human-computer interactions that allow better perceptual engagement. Immersive technologies, especially VR, help humans perceive the digital world as real and yield learning outcomes with lower performance errors and higher accuracy. However, there are few reviews of immersive technologies used in healthcare and genomic data analysis with specific digital health applications. This paper contributes a comprehensive review of the use of immersive technologies for digital health applications, including patient-centric applications, medical education, and data analysis, especially genomic data visual analytics. We highlight the evolution of visual analysis using VR as a case study of how immersive technologies can, step by step, move into the genomic data analysis domain. The discussion and conclusion summarize the usability and innovation of current immersive technology applications and future work in the healthcare domain and digital health data visual analytics.