
    The role of camera convergence in stereoscopic video see-through augmented reality displays

    In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user's perception of the three-dimensional space. This paper puts forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among the possible perceptual artefacts, the most significant one is diplopia arising from reduced stereo overlap and excessive screen disparities. Two state-of-the-art solutions are reviewed. The first suggests a dynamic change, via software, of the virtual camera convergence, whereas the second suggests a matched hardware/software solution based on a series of predefined focus/vergence configurations. Potentialities and limits of both solutions are outlined so as to provide the AR community with a yardstick for developing new stereoscopic video see-through systems suitable for different working distances.
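    The diplopia issue can be conveyed with a small Python sketch (a simplified small-angle pinhole model, not the paper's analysis; the baseline, focal length, and convergence distance below are illustrative): with fixed camera convergence, on-screen disparity grows rapidly as the working distance moves away from the convergence distance.

    def screen_disparity_mm(obj_dist_m, conv_dist_m, baseline_m=0.064, focal_mm=35.0):
        # Small-angle approximation for a toed-in stereo pair converged at
        # conv_dist_m: disparity ~ f * b * (1/Z - 1/Zc) at the image plane.
        return focal_mm * baseline_m * (1.0 / obj_dist_m - 1.0 / conv_dist_m)

    # With convergence fixed at 1 m, near objects quickly exceed comfortable
    # screen disparities, which is what leads to diplopia.
    for z in (0.4, 0.7, 1.0, 2.0, 5.0):
        print(f"Z = {z:3.1f} m -> disparity ~ {screen_disparity_mm(z, 1.0):+.2f} mm")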

    Perspective Preserving Solution for Quasi-Orthoscopic Video See-Through HMDs

    In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by sources of spatial perception error. Solutions for parallax-free and orthoscopic VST HMDs have been devised to ensure proper space perception, but at the expense of increased bulkiness and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests aimed at assessing its efficacy in recovering natural depth perception in the space around the reference distance. The results show that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows a correct perception of the relative depths to be recovered. The perceived distortion of space around the reference plane proved to be not as severe as predicted by the mathematical models.
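    As a rough illustration of the image-warping step, the Python/NumPy/OpenCV sketch below builds a plane-induced homography H = K_disp (R - t n^T / d) K_cam^-1 referred to a reference plane at the chosen distance d; the intrinsics, eye-camera offset, and distance are placeholder values, not the paper's calibration data.

    import numpy as np
    import cv2

    def plane_induced_homography(K_cam, K_disp, R, t, n, d):
        # H = K_disp * (R - t * n^T / d) * K_cam^-1 maps camera pixels to the
        # display viewpoint for points lying on the reference plane at distance d.
        H = K_disp @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_cam)
        return H / H[2, 2]

    # Placeholder values: identity rotation, a small eye-camera offset (metres),
    # and a fronto-parallel reference plane 40 cm in front of the observer.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    H = plane_induced_homography(K, K, np.eye(3), np.array([0.0, 0.03, 0.01]),
                                 np.array([0.0, 0.0, 1.0]), 0.4)
    frame = np.zeros((480, 640, 3), np.uint8)           # stand-in camera frame
    warped = cv2.warpPerspective(frame, H, (640, 480))  # image to show on the display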

    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices also in the healthcare sector. However, technological and human-factor limitations still hinder their routine use in clinical practice. Among them, the major drawbacks are due to their general-purpose nature and to the lack of a standardized framework suited for medical applications and devoid of platform-dependent tracking techniques and/or complex calibration procedures. To overcome such limitations, in this paper we present a software framework designed to support the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable, computationally efficient, and allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through-based augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated surgical incision: the user was asked to perform a digital incision task, with and without the aid of the augmented reality headset, and the task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.
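    One simple way to score a traced incision against the plan, in the spirit of the similarity measure mentioned above (the exact metric used in the study is not detailed here; this is a hedged Python/NumPy sketch with synthetic data), is the mean distance from each traced sample to its nearest planned point:

    import numpy as np

    def mean_curve_error(traced, planned):
        # Average distance (same units as the input points) from each traced
        # sample to its nearest point on the planned curve.
        traced = np.asarray(traced, float)
        planned = np.asarray(planned, float)
        dists = np.linalg.norm(traced[:, None, :] - planned[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    planned = np.column_stack([np.linspace(0, 50, 200), np.zeros(200)])   # 50 mm planned line
    traced  = planned + np.random.normal(scale=0.4, size=planned.shape)   # synthetic noisy trace
    print(f"mean error ~ {mean_curve_error(traced, planned):.2f} mm")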

    Optical See-Through Head-Mounted Displays With Short Focal Distance: Conditions for Mitigating Parallax-Related Registration Error

    Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to profitably assist high-precision activities in the peripersonal space is still sub-optimal owing to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact on parallax-related AR misregistration of using optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the user's fixation point in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and validate it on an OST display with short focal distance. The tests demonstrate that, with our strategy, the parallax-related registration error is submillimetric provided that the scene under observation stays within a suitable view volume falling in a ±10 cm depth range around the focal plane of the display. This finding paves the way to the development of new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
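    The intuition behind the ±10 cm depth range can be conveyed with a simplified similar-triangles model (a Python sketch under stated assumptions, not the paper's full parameterization): a residual lateral viewpoint shift e produces a world-space misregistration of roughly e * |Z - d_f| / d_f for a target at depth Z and a display collimated at d_f, which stays submillimetric close to the focal plane.

    def parallax_error_mm(scene_depth_mm, focal_plane_mm, eye_shift_mm):
        # Simplified geometric model of parallax-related misregistration for an
        # uncorrected lateral viewpoint shift with respect to the calibrated eye.
        return eye_shift_mm * abs(scene_depth_mm - focal_plane_mm) / focal_plane_mm

    # Illustrative numbers: display collimated at 40 cm, 3 mm residual eye shift.
    for z in (300, 350, 400, 450, 500):
        print(f"Z = {z} mm -> error ~ {parallax_error_mm(z, 400, 3):.2f} mm")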

    Projected Augmented Reality to Drive Osteotomy Surgery: Implementation and Comparison With Video See-Through Technology

    In recent years, the spread of visual augmented reality as an effective tool in image-guided surgery has stimulated the research community to investigate the use of commercial augmented reality headsets for a broad range of potential applications. This has aroused enthusiasm among clinicians for the potential of augmented reality, but it has also revealed some technological and human-factor limitations that still hinder its routine adoption in the operating room. In this work, we propose an alternative to head-mounted displays based on projected augmented reality. Projected augmented reality completely preserves the surgeon's natural view of the operating field, because it requires no perspective conversion and/or optical mediation. We selected a cranio-maxillofacial surgery application as a benchmark to test the proposed system and compared its accuracy with that obtained with a video see-through system. The augmented reality overlay accuracy was evaluated by measuring the distance between a virtual osteotomy line and its real counterpart. The experimental tests showed that the accuracy of the two augmented reality modes is similar, with a median error discrepancy of about 0.3 mm for the projected augmented reality mode. The results suggest that projected augmented reality can be a valuable alternative to standard see-through head-mounted displays for supporting in situ visualization of medical imaging data as surgical guidance.
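    At its core, rendering the planned osteotomy line on the patient amounts to treating the calibrated projector as an inverse camera. The Python/OpenCV sketch below illustrates only this step, with placeholder intrinsics and pose rather than the calibration of the actual system.

    import numpy as np
    import cv2

    def project_osteotomy_line(line_pts_3d, K_proj, rvec, tvec, dist=None):
        # Map 3D points of the planned line (patient/tracker frame, mm) to
        # projector pixels using the projector's pinhole model.
        pts, _ = cv2.projectPoints(np.asarray(line_pts_3d, np.float32), rvec, tvec,
                                   K_proj, dist if dist is not None else np.zeros(5))
        return pts.reshape(-1, 2)

    # Illustrative numbers: a 60 mm planned line, projector about 1 m from the field.
    line = np.column_stack([np.linspace(-30, 30, 50), np.zeros(50), np.zeros(50)])
    K = np.array([[1800.0, 0, 960], [0, 1800.0, 540], [0, 0, 1]])
    pix = project_osteotomy_line(line, K, np.zeros(3), np.array([0.0, 0.0, 1000.0]))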

    Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays

    In recent years, the market entry of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative, technology-driven augmented reality applications and has encouraged the adoption of these devices also across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera, which replaces the user's eye, must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user's eye position. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
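    The core measurement of such a procedure is standard computer vision: a camera placed in the eye-motion-box observes a calibration pattern rendered on the see-through display, and the correspondences between the known on-screen corner positions and their detected image positions feed the estimation. The Python/OpenCV sketch below illustrates this step only; the pattern geometry, pixel layout, and file name are placeholders, and the subsequent estimation of the full display model is omitted.

    import numpy as np
    import cv2

    pattern = (9, 6)  # inner corners of the checkerboard rendered on the display
    # Display-pixel coordinates at which each corner was drawn (placeholder layout).
    screen_pts = np.array([[c * 40 + 200, r * 40 + 120]
                           for r in range(pattern[1]) for c in range(pattern[0])],
                          dtype=np.float32)

    img = cv2.imread("eye_box_view.png", cv2.IMREAD_GRAYSCALE)  # camera in the eye-motion-box
    assert img is not None, "placeholder image path"
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        # Homography from display pixels to camera pixels for this camera position;
        # it is one building block of the projection-parameter estimation.
        H, _ = cv2.findHomography(screen_pts, corners.reshape(-1, 2), cv2.RANSAC)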

    Hybrid Video/Optical See-Through HMD

    An old but still ongoing subject of debate among augmented reality (AR) experts is which see-through paradigm is best for wearable AR displays. The video see-through (VST) and optical see-through (OST) paradigms both have their own strengths and shortcomings with respect to technological and human-factor aspects. The major difference between these paradigms is in providing an aided (VST) or unaided (OST) view of the real world. In this work, we present a novel approach for the development of AR stereoscopic head-mounted displays (HMDs) that can provide both see-through mechanisms. Our idea is to dynamically modify the transparency of the display through a liquid crystal (LC)-based electro-optical shutter applied on top of a standard OST device, suitably modified to house a pair of external cameras. A plane-induced homography transformation is used to consistently warp the video images, hence reducing the parallax between cameras and displays. An externally applied drive voltage smoothly controls the light transmittance of the LC shutters so as to allow an easy transition between the unaided and the camera-mediated view of the real scene. Our tests have proven the efficacy of the proposed solution under worst-case lighting conditions.
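    A hedged Python sketch of the shutter-control idea (the voltage range and response curve are purely illustrative, not the characteristics of the actual LC shutter): the drive voltage is ramped smoothly so the view blends between the camera-mediated and the unaided mode.

    import math

    def drive_voltage(transmittance, v_min=0.0, v_max=5.0, gamma=2.2):
        # Map a target shutter transmittance in [0, 1] to a drive voltage through a
        # monotonic gamma-like response curve (illustrative; a real shutter needs
        # its measured voltage/transmittance characteristic).
        t = min(max(transmittance, 0.0), 1.0)
        return v_min + (v_max - v_min) * (1.0 - t) ** (1.0 / gamma)

    def vst_to_ost_ramp(duration_s=1.0, steps=30):
        # Cosine-eased transition from the camera-mediated (opaque) view to the
        # unaided optical view by progressively raising the transmittance.
        for k in range(steps + 1):
            t = 0.5 - 0.5 * math.cos(math.pi * k / steps)
            yield k * duration_s / steps, drive_voltage(t)

    for time_s, volts in vst_to_ost_ramp():
        pass  # here `volts` would be sent to the shutter driver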

    Augmented reality in open surgery

    Augmented reality (AR) has successfully provided surgeons with extensive visual information on the surgical anatomy to assist them throughout the procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of the anatomical details. However, open surgery presents its own challenges. This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, both in clinical and simulated settings. In this way, we aim to analyze the current trends and solutions so as to help developers and end-users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature updated to January 2018 using the terms (1) “augmented reality” AND “open surgery”, (2) “augmented reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”, (3) “mixed reality” AND “open surgery”, (4) “mixed reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”. The aspects evaluated were the following: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and screening abstracts, a total of 13 relevant studies were retained. In 1 of the 13 studies, in vitro experiments were performed, while the remaining studies were carried out in clinical settings including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room; however, some technological limitations need to be addressed before they can be implemented in routine practice.

    Augmented Reality-Assisted Craniotomy for Parasagittal and Convexity En Plaque Meningiomas and Custom-Made Cranio-Plasty: A Preliminary Laboratory Report

    Background: This report discusses the utility of a wearable augmented reality platform in neurosurgery for parasagittal and convexity en plaque meningiomas with bone flap removal and custom-made cranioplasty. Methods: A real patient with an en plaque cranial vault meningioma with diffuse and extensive dural involvement, extracranial extension into the calvarium, and homogeneous contrast enhancement on gadolinium-enhanced T1-weighted MRI was selected for this case study. A patient-specific manikin was designed, starting with the segmentation of the patient's preoperative MRI images, to simulate a craniotomy procedure. Surgical planning was performed according to the segmented anatomy, and customized bone flaps were designed accordingly. During the surgical simulation stage, the VOSTARS head-mounted display was used to accurately display the planned craniotomy trajectory over the manikin skull. The precision of the craniotomy was assessed based on the evaluation of previously prepared custom-made bone flaps. Results: A bone flap with a radius 0.5 mm smaller than that of an ideal craniotomy fitted perfectly over the performed craniotomy, demonstrating an error of less than ±1 mm in task execution. The results of this laboratory-based experiment suggest that the proposed augmented reality platform helps in simulating convexity en plaque meningioma resection and custom-made cranioplasty, as carefully planned in the preoperative phase. Conclusions: Augmented reality head-mounted displays have the potential to be a useful adjunct in surgical tumor resection, cranial vault lesion craniotomy, and also skull base surgery, but further studies with larger series are needed.

    Robust laparoscopic instruments tracking using colored strips

    Surgical simulators are used to assist surgeons in acquiring the skills required for the proper execution of laparoscopic procedures. During training with simulators, it is useful to provide a quantitative evaluation of surgical performance. Recent research has shown that such an evaluation can be obtained by tracking the laparoscopic instruments using only the images provided by the laparoscope and without hindering the surgical scene. In this work, the state-of-the-art method is improved so that robust tracking can run even against the noisy background provided by realistic simulators. The method was validated by comparison with the tracking of a “chess-board” pattern, and further tests were performed to check the robustness of the developed algorithm. Despite the noisy environment, the implemented method was able to track the tip of the surgical instrument with good accuracy compared to other studies in the literature.
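    A minimal Python/OpenCV sketch of the kind of color-marker segmentation such a method builds on (the HSV range and morphology parameters are illustrative, not those of the cited work):

    import numpy as np
    import cv2

    def track_colored_strip(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
        # Segment a colored strip (green range here, purely illustrative) in a
        # laparoscope frame and return the centroid of the largest blob.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels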