562 research outputs found

    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices in the healthcare sector as well. However, technological and human-factor limitations still hinder their routine use in clinical practice. Chief among these drawbacks are the general-purpose nature of such devices and the lack of a standardized framework suited to medical applications and free of platform-dependent tracking techniques and/or complex calibration procedures. To overcome these limitations, in this paper we present a software framework designed to support the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable and computationally efficient, and it allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated surgical incision task: users were asked to perform a digital incision task with and without the aid of the augmented reality headset. Task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.

    Key Ergonomics Requirements and Possible Mechanical Solutions for Augmented Reality Head-Mounted Displays in Surgery

    In the context of a European project, we identified over 150 requirements for the development of an augmented reality (AR) head-mounted display (HMD) specifically tailored to support highly challenging manual surgical procedures. The requirements were established by surgeons from different specialties and by industrial players working in the surgical field with strong commitments to the exploitation of this technology. Some of these requirements were specific to the project, while others can be seen as key requirements for any efficient and reliable AR headset intended to support manual activities in the peripersonal space. The aim of this work is to describe the ergonomic requirements that impact the mechanical design of such HMDs, possible innovative solutions to these requirements, and how these solutions have been used to implement the AR headset for surgical navigation. We also report the results of a preliminary qualitative evaluation of the AR headset by three surgeons.

    Wearable augmented reality platform for aiding complex 3D trajectory tracing

    Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite this, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we report the results of a user study aimed at qualitatively and quantitatively validating a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the traced trajectories was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform can boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
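
    The accuracy metric described above (the share of a traced trajectory that stays within a 1 mm margin of the planned path) can be sketched numerically. The snippet below is an illustrative reconstruction, not the platform's actual evaluation code; the function name and the toy data are assumptions:

```python
import numpy as np

def fraction_within_tolerance(traced, planned, tol_mm=1.0):
    """For each traced point, find the distance to the nearest planned
    point and return the fraction that falls within the tolerance."""
    # Pairwise distances between traced (N, 3) and planned (M, 3) points
    d = np.linalg.norm(traced[:, None, :] - planned[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # per-point deviation from the planned curve
    return float(np.mean(nearest <= tol_mm))

# Toy data: a planned straight segment and a traced curve with small noise
planned = np.column_stack(
    [np.linspace(0, 50, 500), np.zeros(500), np.zeros(500)])
rng = np.random.default_rng(0)
traced = planned[::5] + rng.normal(scale=0.3, size=(100, 3))

print(f"{fraction_within_tolerance(traced, planned):.1%} of points within 1 mm")
```

    In practice the papers compare traces against physical templates rather than point clouds, but the per-point nearest-distance idea is the same.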

    Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays

    In recent years, there has been increasing interest in augmented reality as applied to the surgical field. We conducted a systematic review of the literature classifying augmented reality applications in oral and cranio-maxillofacial surgery (OCMS), in order to pave the way for future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery', were searched in the PubMed database. For the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication, and country of research; then a more specific breakdown was provided according to technical features of the AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique, and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery and a minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of the 30 studies, the AR systems were based on a head-mounted approach using smart glasses or headsets; in most of these cases (7) a video see-through mode was implemented, while only 1 study described an optical see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6), and projectors (5); in 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience is the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that guarantee the accuracy required for surgical tasks and optimal ergonomics.

    A Testing and Experimenting Environment for Microscopic Traffic Simulation Utilizing Virtual Reality and Augmented Reality

    Microscopic traffic simulation (MTS) is the emulation of real-world traffic movements in a virtual environment with various traffic entities. Typically, the movements of the vehicles in MTS follow predefined algorithms, e.g., car-following models, lane-changing models, etc. Moreover, existing MTS models provide only limited two- and/or three-dimensional displays that often restrict the user's viewpoint to a flat screen. Their downscaled scenes neither provide a realistic representation of the environment nor allow different users to simultaneously experience or interact with the simulation model from different perspectives. These limitations prevent traffic engineers from effectively disseminating their ideas to stakeholders of different backgrounds, and deny analysts realistic data about vehicle or pedestrian movements. This dissertation alleviates those issues by creating a framework and a prototype for a testing environment where MTS can take inputs from user-controlled vehicles and pedestrians to improve its traffic-entity movement algorithms, and where the simulation can be experienced through an immersive M3 (multi-mode, multi-perspective, multi-user) visualization using Virtual Reality (VR) and Augmented Reality (AR) technologies. The VR environments are created using highly realistic 3D models; with modern game engines and the hardware available on the market, these VR applications can provide a highly realistic and immersive experience for the user. Experiments performed by real users in this study show that utilizing VR technology for traffic-related experiments generated much more favorable results than traditional displays. Moreover, using AR technologies for pedestrian studies is a novel approach that allows a user to walk in the real world and the simulation world at a one-to-one scale, opening a whole new avenue of user-experiment possibilities.
    In addition, the in-environment communication chat system allows researchers to perform different Advanced Driver Assistance System (ADAS) studies without ever needing to leave the simulation environment. Last but not least, the distributed nature of the framework enables users to participate from different geographic locations with their choice of display device (desktop, smartphone, VR, or AR). The prototype developed for this dissertation is readily available on a test webpage, and a user can easily download the prototype application without needing to install anything; the user can also run the remote MTS server and then connect the client application to the server.

    Spatial Augmented Reality Using Structured Light Illumination

    Spatial augmented reality is a particular kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, structured light illumination, a means of 3D shape measurement, also makes use of a projector as part of its system: the projector generates the cues needed to establish the correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionalities of both spatial augmented reality and structured light illumination. In this dissertation, we present the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system in which two synchronized projectors operate simultaneously; consequently, it outperforms the traditional single-projector structured light 3D scanning system in terms of the quality of the 3D reconstructions. Second, we introduce a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. Third, we propose an augmented reality face-paint system that detects a human face in the scene and paints the face with arbitrary colors by projection; additionally, the system incorporates a second camera to realize 3D position tracking by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine vision camera is presented as future work. So far, the standalone camera has been completed from a bare CMOS sensor; with this customized camera, we can achieve high-dynamic-range imaging and better synchronization between the camera and the projector. The full-blown system, including the HDMI transmitter, structured light pattern generator, and synchronization logic, has yet to be completed due to the lack of a well-designed high-speed PCB.
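
    The correspondence step mentioned above, in which projected patterns serve as clues linking camera pixels to projector coordinates, is commonly implemented with Gray-code bit planes. The sketch below is a generic illustration of that decoding idea, not the dissertation's code; all names are assumptions:

```python
import numpy as np

def gray_to_binary(g):
    """Convert Gray-code values to plain binary (vectorized prefix-XOR)."""
    b = g.copy()
    shift = 1
    while (b >> shift).any():
        b ^= b >> shift
        shift <<= 1
    return b

def decode_columns(captured_bits):
    """captured_bits: (num_patterns, H, W) array of thresholded 0/1 frames,
    one per projected Gray-code bit plane (most significant bit first).
    Returns the projector column index seen by each camera pixel."""
    n = captured_bits.shape[0]
    gray = np.zeros(captured_bits.shape[1:], dtype=np.int64)
    for k, plane in enumerate(captured_bits):
        gray |= plane.astype(np.int64) << (n - 1 - k)
    return gray_to_binary(gray)

# Demo: 8 projector columns, 3 Gray-code bit planes, a 1-pixel-tall "camera"
cols = np.arange(8)
gray = cols ^ (cols >> 1)  # binary-to-Gray encoding done by the projector
planes = np.stack([((gray >> (2 - k)) & 1)[None, :] for k in range(3)])
print(decode_columns(planes)[0])  # prints [0 1 2 3 4 5 6 7]
```

    With the per-pixel projector column recovered, triangulating against the calibrated projector-camera geometry yields the 3D point for each pixel.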

    Inclusion in Virtual Reality Technology: A Scoping Review

    Despite the significant growth in virtual reality applications and research, the notion of inclusion in virtual reality is not well studied. Inclusion refers to the active involvement of different groups of people in the adoption, use, design, and development of VR technology and applications. In this review, we provide a scoping analysis of the existing virtual reality research literature on inclusion. We categorize the literature by target group into ability, gender, and age, followed by studies of community-based design of VR experiences; in the latter group, we focus mainly on Indigenous Peoples as a particularly clear and important example. We also briefly review approaches to modeling and considering the role of users in technology adoption and design as a background for inclusion studies. We identify a series of generic barriers and research gaps, as well as some specific to each group, resulting in suggested directions for future research.

    Enhancing Pilot Training Through Virtual Reality: Recognizing and Mitigating Aviation Visual and Vestibular Illusions

    Aviation illusions, arising from sensory misinterpretations, can lead to critical pilot errors. This study aims to evaluate VR training's efficacy in recognizing and managing these illusions. Embry-Riddle Aeronautical University (ERAU) subject matter experts and the Extended Reality Lab developed the Virtual Reality Aviation Illusion Trainer (VRAIT) software program to provide users a complete VR experience and training on visual and vestibular illusions. This study investigated the effectiveness of integrating virtual reality (VR) technology into pilot training, focusing on the VRAIT motion-based visual and vestibular illusion training. Conducted with participants from Embry-Riddle Aeronautical University, the research assessed pre-training and post-training knowledge scores and self-efficacy. Motion-based VR training significantly improved both knowledge and self-efficacy scores. Pre-training knowledge scores (M = 64.36, SD = 12.71) increased to post-training scores (M = 79.41, SD = 15.02), indicating significant knowledge enhancement (t(214) = -12.433, p < .001). Similarly, pre-training self-efficacy scores (M = 5.50, SD = 2.01) increased significantly to post-training scores (M = 8.31, SD = 1.55), highlighting self-efficacy improvements (t(214) = -17.712, p < .001). Participants experienced minimal simulator sickness, suggesting a well-tolerated training duration and sequence, and reported a high level of enjoyment and technological satisfaction with the training. The study contributes to VR training methodologies, emphasizing the potential of motion-based VR training to enhance aviation education. This research demonstrated that motion-based VR training effectively enhanced pilot knowledge and self-efficacy in recognizing and managing aviation illusions. The findings underscore VR's potential in enhancing visual and vestibular illusion training outcomes.
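
    The statistics reported above are consistent with paired-samples t-tests (df = 214 implies 215 paired observations). The sketch below reproduces the form of that computation on synthetic scores; it is illustrative only and does not use the study's data:

```python
import numpy as np

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d = pre - post and df = n - 1. Negative t means post > pre."""
    d = np.asarray(pre, dtype=float) - np.asarray(post, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Synthetic pre/post scores shaped like the reported means (not real data)
rng = np.random.default_rng(1)
pre = rng.normal(64.4, 12.7, size=215)
post = pre + rng.normal(15.0, 14.0, size=215)  # simulated training gain

t, df = paired_t(pre, post)
print(f"t({df}) = {t:.3f}")  # negative t: post-training scores are higher
```

    Libraries such as scipy expose the same test as `scipy.stats.ttest_rel`, which additionally returns the p-value.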

    Revisiting the city, augmented with digital technologies: The SeeARch tool

    This paper presents a digital tool that enables city inhabitants or visitors to acquire, in real time, relevant information about several aspects of a given city, augmenting their real experience of the place. We used digital technologies, particularly an in-house developed Augmented Reality (AR) tool referred to as SeeARch, that augments the in situ city exploration experience. This tablet-based AR tool enables mobile users to recognize the facades of specific buildings and, in real time, superimpose relevant associated 3D and multimedia information while visiting the city. The aim of the app is to provide customized information to visitors suiting their own interests and the time they have to visit the city. User satisfaction evaluation tests were performed with a sample of potential users. The outcomes of these user studies showed that participants considered our approach to deliver more detailed knowledge about the city, and to be more informative, when compared with the normal sightseeing visits previously performed by the same participants.

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) the underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion on the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can introduce to the area of XR.