23 research outputs found

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised over the last two decades. Based on the tracked pose of the target bone, anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance to reach the goal with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which risks additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic optical see-through (OST) head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented with a synthetic dataset; it outperforms the state of the art in occlusion robustness, device-agnostic behaviour, and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids viewed through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial for locationally coherent surgical guidance, yet current calibration methods are either prone to human error or hardly applicable to commercial devices. To this end, we propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods prove superior to similar state-of-the-art (SOTA) methods in calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verify the usability of the designed OST system in a cadaver study with an experienced orthopaedic surgeon. The test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
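
    The registration stage above belongs to the Iterative Closest Point (ICP) family. As a rough illustration only, the Python/NumPy sketch below implements plain point-to-point ICP; it is not the thesis's Bounded ICP, whose bounding constraints are not described in the abstract, and all names are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iters=30):
    """Point-to-point ICP aligning `source` (Nx3) to `target` (Mx3) point clouds."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # Correspondence step: brute-force nearest neighbour (fine for small clouds).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        # Alignment step: best rigid motion for the current correspondences.
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t  # compose with running estimate
    return R_total, t_total
```

    A practical implementation would swap the brute-force search for a k-d tree and add outlier handling; the bounds that give BICP its robustness are part of the thesis's contribution and are not reproduced here.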

    Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

    This dissertation examines the developments and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the widest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and a canonical description of AR and OST display developments are provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the prevailing evaluation metrics and practices employed within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach and provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware. Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead into future extensions and paths that continued calibration research should explore.
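
    For context, manual OST calibration in the SPAAM lineage boils down to estimating a 3x4 projection matrix from 2D–3D correspondences that the user collects by aligning an on-screen reticle with a tracked world point. Below is a minimal sketch of that Direct Linear Transform (DLT) step, assuming NumPy; this is the textbook formulation, not any specific method evaluated in the dissertation.

```python
import numpy as np

def estimate_projection(points_3d, points_2d):
    """DLT: solve the 3x4 matrix P with x ~ P X from >= 6 point correspondences.

    points_3d: (N, 3) tracked world points the user aligned a reticle with.
    points_2d: (N, 2) corresponding on-screen pixel positions.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear constraints on the 12 entries of P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Homogeneous least squares: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)
```

    In practice the correspondences are normalised before the SVD for numerical stability, and well-spread alignment points matter more than their count, which is one reason such procedures are sensitive to user error.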

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements in two-way interaction with extended realities (XR) are contributing to a qualitative leap toward XR ecosystems that are efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.
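
    As a concrete hint of what an EMG-based ExG input pipeline involves, a common first step is turning the raw myoelectric signal into a smooth amplitude envelope before gesture classification. A minimal NumPy/SciPy sketch follows; the sampling rate and cut-off frequencies are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0):
    """Toy EMG pre-processing: band-pass, rectify, then low-pass into an envelope.

    raw: 1-D array of EMG samples; fs: sampling rate in Hz (illustrative).
    """
    # Keep the band where surface EMG carries most of its power (roughly 20-450 Hz).
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype="band")
    band = filtfilt(b, a, raw)
    rect = np.abs(band)                      # full-wave rectification
    # A ~5 Hz low-pass yields a slowly varying envelope usable for thresholding gestures.
    b_lp, a_lp = butter(2, 5.0 / (fs / 2))
    return filtfilt(b_lp, a_lp, rect)
```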

    Merging the Real and the Virtual: An Exploration of Interaction Methods to Blend Realities

    We investigate, build, and design interaction methods to merge the real with the virtual. An initial investigation looks at spatial augmented reality (SAR) and its effects on pointing with a real mobile phone. A study reveals a set of trade-offs between the raycast, viewport, and direct pointing techniques. To further investigate the manipulation of virtual content within a SAR environment, we design an interaction technique that utilizes the distance at which a user holds a mobile phone away from their body. Our technique enables pushing virtual content from a mobile phone to an external SAR environment, interacting with that content, rotating, scaling, and translating it, and pulling it back into the mobile phone, all in a way that ensures seamless transitions between the real environment of the mobile phone and the virtual SAR environment. To investigate the issues that occur when the physical environment is hidden by a fully immersive virtual reality (VR) HMD, we design and investigate a system that merges a real-time 3D reconstruction of the real world with a virtual environment. This allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical reality without losing their sense of immersion or presence inside a virtual world. A study with VR users demonstrates the affordances provided by the system and how it can be used to enhance current VR experiences. We then move to AR, to investigate the limitations of optical see-through HMDs and the problem of communicating the internal state of the virtual world to unaugmented users. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we propose a system that combines an optical see-through HMD with a wearable SAR projector. Demonstrations showcase ways to utilize the projected and head-mounted displays together, such as expanding the field of view, distributing content across depth surfaces, and enabling bystander collaboration. We then turn to video games to investigate how spectatorship of these virtual environments can be enhanced through expanded video rendering techniques. We extract and combine additional data to form a cumulative 3D representation of the live game environment for spectators, which enables each spectator to individually control a personal view into the stream while in VR. A study shows that users prefer spectating in VR over a comparable desktop rendering.
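
    Of the pointing techniques compared, raycasting is the simplest to state formally: the phone's tracked pose defines a ray, and the selected SAR content is whatever that ray hits on the projection surface. Below is a minimal ray–plane intersection sketch in Python (illustrative names; the study's actual implementation is not given in the abstract).

```python
import numpy as np

def raycast_point(origin, direction, plane_point, plane_normal):
    """Intersect the phone's pointing ray with a planar SAR surface.

    origin, direction: ray from the tracked phone pose (direction need not be unit).
    plane_point, plane_normal: any point on the surface and its normal.
    Returns the 3-D hit point, or None if the ray misses (parallel or behind).
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                          # ray runs parallel to the surface
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t >= 0 else None
```

    Viewport and direct pointing substitute different mappings from phone pose to target, which is where the trade-offs the study reports come from.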

    Serious Games and Mixed Reality Applications for Healthcare

    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and the efficiency of services for professionals and patients alike. The best-known examples of VR–AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose main goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of their many benefits: motivation, interactivity, adaptation to user competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments where physical and digital objects not only coexist but are also capable of interacting with each other in real time, encompassing both VR and AR applications. This Special Issue aims to gather and publish original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR–AR and MR applications in healthcare.

    Virtual Guidance using Mixed Reality in Historical Places and Museums

    Mixed Reality (MR) is one of the most disruptive technologies, showing potential in many application domains, particularly the tourism and cultural heritage sector. MR delivered through the latest, most capable headsets introduces a new visual platform that can change people's visual experience. This thesis introduces a HoloLens-based mixed reality guidance system for museums and historical places. This new form of guidance encompasses the necessary and optimised functionalities, visual and audio guiding abilities, the essential roles of a guide, and the related social interactions in real time. A mixed reality guide, dubbed 'MuseumEye', was designed and developed for the Egyptian Museum in Cairo to overcome challenges currently facing the museum, e.g. a lack of guiding methods, limited information signposted on the exhibits, and low visitor engagement resulting in less time spent in the museum compared with other museums of similar capacity and significance. These problems motivated the researcher to conduct an exploratory study of the museum environment and guiding methods by interviewing 10 participants and observing 20 visitors. MuseumEye was built on a literature review of immersive systems in museums and the findings of this exploratory study, which revealed visitor behaviours and the nature of guidance in the museum. The project increased levels of engagement and the length of time visitors spend in museums, the Egyptian Museum in Cairo in particular, using mixed reality technology that provides visitors with additional visual and audio information and computer-generated images at various levels of detail and via different media. This research introduces guidelines for designing immersive guide applications, covering spatial mapping, multimedia and UI design, and the design of interactions for exploratory purposes. The main contributions of this study include three theoretical contributions: 1) a new form of guidance that enhances the museum experience through a mixed reality system; 2) a theoretical framework that assesses mixed reality guidance systems in terms of perceived usefulness, ease of use, enjoyment, interactivity, the roles of a guide, and the likelihood of future use; and 3) the Ambient Information Visualisation concept for increasing visitor engagement by better presenting information and enhancing communication and interaction between visitors and exhibits; as well as a practical contribution: a mixed reality guidance system that reshapes the museum space, enhances visitors' experience, and significantly increases the length of time they spend in the museum. The evaluation comprised quantitative surveys (171 participants and 9 experts) and qualitative observation of 51 participants using MuseumEye on their tours. The results showed positive responses for all measured aspects, which were compared with similar studies. The observations showed that visitors using MuseumEye spent four times as long in front of exhibited items as visitors without guides or with human guides. The quantitative results showed significant correlations between the measured constructs (perceived usefulness, ease of use, enjoyment, multimedia and UI, and interactivity) and the likelihood of future use when the roles of a guide mediate these relations. Moreover, perceived guidance is the construct most influential on the likelihood of future use of MuseumEye. The results also revealed a high likelihood of future use, which supports the sustainability of adopting mixed reality technology in museums. This thesis shows the potential of mixed reality guides to reshape the museum space, offering endless possibilities for museums and heritage sites.
