
    Augmented Reality Assistance for Surgical Interventions using Optical See-Through Head-Mounted Displays

    Augmented Reality (AR) offers an interactive user experience by enhancing the real-world environment with computer-generated visual cues and other perceptual information. It has been applied in many domains, e.g. manufacturing, entertainment, and healthcare, through different AR media. An Optical See-Through Head-Mounted Display (OST-HMD) is specialized AR hardware in which computer-generated graphics are overlaid directly onto the user's normal vision via optical combiners. Using OST-HMDs for surgical intervention has many potential perceptual advantages. As a novel concept, OST-HMD-based AR faces many technical and clinical challenges before it can be clinically useful, which motivates the work presented in this thesis. On the technical side, we first investigate the display calibration of OST-HMDs, an indispensable procedure for creating an accurate AR overlay. We propose various methods to reduce user-related error, improve the robustness of the calibration, and remodel the calibration as a 3D-3D registration problem. Secondly, we devise methods and develop a hardware prototype to increase the user's visual acuity for both real and virtual content seen through the OST-HMD, to aid tasks that require high visual acuity, e.g. dental procedures. Thirdly, we investigate the occlusion caused by the OST-HMD hardware, which limits the user's peripheral vision, and propose alternative indicators to alert the user to motion in the unattended environment. From the clinical perspective, we identified many clinical use cases where OST-HMD-based AR is potentially helpful, developed applications integrated with current clinical systems, and conducted proof-of-concept evaluations. We first present a "virtual monitor" for image-guided surgery. It can replace real radiology monitors in the operating room, offering easier user control and more flexibility in positioning. We evaluated the virtual monitor in simulated percutaneous spine procedures. Secondly, we developed ARssist, an application for the bedside assistant in robotic surgery. With ARssist, the assistant can see the robotic instruments and the endoscope within the patient's body. We evaluated the efficiency, safety, and ergonomics of the assistant during two typical tasks: instrument insertion and manipulation. ARssist significantly improved the performance of inexperienced users and significantly enhanced the confidence of experienced users. Lastly, we developed ARAMIS, which uses real-time 3D reconstruction and visualization to aid the laparoscopic surgeon, demonstrating the concept of "X-ray see-through" surgery. Our preliminary evaluation validated the application via a peg transfer task and showed a significant improvement in hand-eye coordination. Overall, we have demonstrated that OST-HMD-based AR applications provide ergonomic improvements, e.g. in hand-eye coordination. In challenging situations or for novice users, these ergonomic improvements lead to improved task performance. With continued effort as a community, optical see-through augmented reality technology will become a useful interventional aid in the near future.
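
    The abstract does not spell out the calibration math, but recasting OST-HMD display calibration as a 3D-3D registration problem typically reduces to aligning two corresponding 3D point sets with a rigid transform. The sketch below shows the standard closed-form solution (Kabsch/Umeyama via SVD) on synthetic points; all names and numbers are illustrative, not taken from the thesis.

```python
import numpy as np

def rigid_registration(P, Q):
    """Closed-form 3D-3D registration (Kabsch/Umeyama, no scaling).

    P, Q : (N, 3) arrays of corresponding points in two frames.
    Returns R (3x3 rotation) and t (3,) such that R @ p + t ~ q.
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - Pc).T @ (Q - Qc)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                 # proper rotation (det = +1)
    t = Qc - R @ Pc
    return R, t

# Example: recover a known pose from noisy correspondences
rng = np.random.default_rng(0)
P = rng.uniform(-0.5, 0.5, size=(20, 3))               # points in tracker frame (metres)
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.02, -0.01, 0.05])
Q = P @ R_true.T + t_true + rng.normal(0, 1e-4, P.shape)
R, t = rigid_registration(P, Q)
print(np.allclose(R, R_true, atol=1e-3), np.allclose(t, t_true, atol=1e-3))
```

    In an actual calibration, P would presumably hold 3D target points measured by the tracker and Q the same points expressed in the eye-display coordinate frame recovered from user alignments; the thesis's specific point-collection procedure is not reproduced here.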

    Ultrasound in augmented reality: a mixed-methods evaluation of head-mounted displays in image-guided interventions

    Purpose: Augmented reality (AR) and head-mounted displays (HMDs) in medical practice are current research topics. A commonly proposed use case of AR-HMDs is to display data in image-guided interventions. Although technical feasibility has been thoroughly shown, the effects of AR-HMDs on interventions are not yet well researched, which hampers clinical applicability. The goal of this study is therefore to better understand the benefits and limitations of this technology in ultrasound-guided interventions. Methods: We used an AR-HMD system (based on the first-generation Microsoft HoloLens) that overlays live ultrasound images, spatially registered, at the location of the ultrasound transducer. We chose ultrasound-guided needle placement as a representative task for image-guided interventions. To examine the effects of the AR-HMD, we used mixed methods and conducted two studies in a lab setting: (1) in a randomized crossover study, we asked participants to place needles into a training model and evaluated task duration and accuracy with the AR-HMD compared to the standard procedure without visual overlay, and (2) in a qualitative study, we analyzed the user experience with the AR-HMD using think-aloud protocols during ultrasound examinations and semi-structured interviews after the task. Results: Participants (n = 20) placed needles more accurately (mean error of 7.4 mm vs. 4.9 mm, p = 0.022) but not significantly faster (mean task duration of 74.4 s vs. 66.4 s, p = 0.211) with the AR-HMD. All participants in the qualitative study (n = 6) reported limitations of and unfamiliarity with the AR-HMD, yet all but one also clearly noted benefits and/or stated that they would like to test the technology in practice. Conclusion: We present additional, though still preliminary, evidence that AR-HMDs provide benefits in image-guided procedures. Our data also contribute insights into potential causes underlying these benefits, such as improved spatial perception. Still, more comprehensive studies are needed to ascertain benefits for clinical applications and to clarify the mechanisms underlying them.
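
    As a rough illustration of how the paired outcomes of such a crossover study might be compared, the sketch below runs a paired t-test on simulated accuracy values for 20 participants. The numbers are generated, not the study's data, and the abstract does not state which statistical test the authors actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements for n = 20 participants (one value per
# participant and condition); these are simulated stand-ins, not study data.
rng = np.random.default_rng(42)
error_standard = rng.normal(7.4, 2.5, size=20)   # placement error (mm), no overlay
error_ar_hmd   = rng.normal(4.9, 2.0, size=20)   # placement error (mm), AR-HMD

# Paired comparison across the two crossover conditions
t_stat, p_value = stats.ttest_rel(error_standard, error_ar_hmd)
print(f"mean error: {error_standard.mean():.1f} mm vs {error_ar_hmd.mean():.1f} mm, "
      f"paired t-test p = {p_value:.3f}")
```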

    Design and development of software and hardware components for an ophthalmic device

    The research carried out during this doctoral thesis took place within the OPERA joint laboratory (OPtique EmbaRquée Active) involving ESSILOR-LUXOTTICA and the CNRS. The aim is to contribute to the development of the "glasses of the future", which feature dimming, focusing, or display functions that continuously adapt to the scene and to the user's gaze. These new devices will be endowed with perception, decision, and action capabilities, and will have to respect constraints on size, weight, energy consumption, and processing time. They therefore have obvious connections with robotics. In this context, the research investigated the structure and construction of such systems in order to identify their issues and difficulties. To that end, the first task was to set up emulators of various types of active glasses, which enable efficient prototyping and evaluation of various functions. In this prototyping and testing phase, these emulators naturally rely on a modular software architecture typical of robotics. The second part of the thesis focused on the prototyping of a key component of the glasses of the future that carries an additional low-power constraint: the eye-tracking system, also known as a gaze tracker or oculometer. The principle of a photodiode array processed by a neural network was proposed. A simulator was developed, along with a study of the influence of the photodiode arrangement and of the network hyper-parameters on the performance of the eye tracker.
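
    The abstract describes a gaze tracker built from an assembly of photodiodes whose readings are decoded by a neural network. The sketch below illustrates that idea with a toy sensor model and a small multilayer perceptron; the diode geometry, response model, and network size are assumptions for illustration, not the thesis design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the photodiode/eye geometry: each of the 8 photodiodes
# responds most strongly when the gaze direction points toward it (a Gaussian
# lobe). The real sensor model studied in the thesis is not reproduced here.
rng = np.random.default_rng(1)
n_diodes = 8
diode_dirs = rng.normal(size=(n_diodes, 2))
diode_dirs /= np.linalg.norm(diode_dirs, axis=1, keepdims=True)

def photodiode_response(gaze):
    """gaze: (N, 2) horizontal/vertical gaze angles in degrees -> (N, n_diodes)."""
    d = gaze[:, None, :] - 15.0 * diode_dirs[None, :, :]        # offset per diode
    return np.exp(-np.sum(d ** 2, axis=-1) / (2 * 10.0 ** 2))   # Gaussian lobes

gaze_train = rng.uniform(-20, 20, size=(2000, 2))
X_train = photodiode_response(gaze_train) + rng.normal(0, 0.01, (2000, n_diodes))

# Small multilayer perceptron mapping raw photodiode readings to gaze angles
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, gaze_train)

gaze_test = rng.uniform(-20, 20, size=(200, 2))
pred = net.predict(photodiode_response(gaze_test))
print("mean absolute error (deg):", np.abs(pred - gaze_test).mean())
```

    A simulator of this kind also makes it straightforward to sweep the diode arrangement and the network hyper-parameters and observe their effect on tracking error, which is the study the abstract mentions.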

    Augmented reality fonts with enhanced out-of-focus text legibility

    In augmented reality, information is often distributed between real and virtual contexts and often appears at different distances from the viewer. This raises the issues of (1) context switching, when attention is switched between real and virtual contexts, (2) focal distance switching, when the eye accommodates to see information in sharp focus at a new distance, and (3) transient focal blur, when information is seen out of focus during the interval of focal distance switching. This dissertation research has quantified the impact of context switching, focal distance switching, and transient focal blur on human performance and eye fatigue in both monocular and binocular viewing conditions. Further, this research has developed a novel font that, when seen out of focus, looks sharper than standard fonts. This SharpView font promises to mitigate the effect of transient focal blur. Developing this font has required (1) mathematically modeling out-of-focus blur with Zernike polynomials, which model focal deficiencies of human vision, (2) developing a focus correction algorithm based on total variation optimization, which corrects out-of-focus blur, and (3) developing a novel algorithm for measuring font sharpness. Finally, this research has validated these fonts through simulation and optical camera-based measurement. This validation has shown that, when seen out of focus, SharpView fonts are as much as 40 to 50% sharper than standard fonts. This promises to improve font legibility in many applications of augmented reality.
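
    The dissertation's correction algorithm (total-variation optimization) and its sharpness metric are not reproduced here, but the forward model it builds on is standard Fourier optics: a Zernike defocus term defines a wavefront error over the pupil, and the Fourier transform of the pupil function gives the out-of-focus point-spread function. The sketch below blurs a toy glyph with such a PSF and compares a simple gradient-based sharpness proxy; all parameters are illustrative.

```python
import numpy as np

# Defocus wavefront from the Zernike defocus term Z_2^0 = sqrt(3) * (2*rho^2 - 1),
# turned into a point-spread function by Fourier optics (|FFT of pupil|^2).
# Pupil sampling and units are illustrative, not the dissertation's exact model.
N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho = np.hypot(x, y)
pupil_mask = rho <= 1.0

defocus_coeff_waves = 0.5                                   # defocus, in wavelengths
W = defocus_coeff_waves * np.sqrt(3) * (2 * rho**2 - 1)     # wavefront error (waves)
pupil = pupil_mask * np.exp(1j * 2 * np.pi * W)

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()

# Blur a toy "glyph" (a bright bar on a dark background) with the defocus PSF
glyph = np.zeros((N, N))
glyph[96:160, 120:136] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(glyph) * np.fft.fft2(np.fft.ifftshift(psf))))

# A simple sharpness proxy: mean gradient magnitude (higher = sharper edges)
def sharpness(img):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy).mean()

print(f"sharpness: original {sharpness(glyph):.4f}, defocused {sharpness(blurred):.4f}")
```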

    User-centered Virtual Environment Assessment And Design For Cognitive Rehabilitation Applications

    Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology using a user-centered approach should precede VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based on its unique system parameters. By applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error into the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near-Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.
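
    A small worked example of the display-limited benchmark the resolution visual acuity module addresses: given an HMD's field of view and microdisplay resolution, one can estimate the angular size of a pixel and the best Snellen acuity the display could possibly support. The field-of-view and pixel counts below are illustrative, not those of the HMD evaluated in the study.

```python
def hmd_acuity_ceiling(horizontal_fov_deg, horizontal_pixels):
    """Display-limited visual acuity estimate for an HMD.

    Assumes pixels are spread evenly across the field of view and that the
    smallest resolvable detail is one pixel (minimum angle of resolution, MAR).
    """
    arcmin_per_pixel = horizontal_fov_deg * 60.0 / horizontal_pixels
    mar_arcmin = arcmin_per_pixel                   # one pixel per detail (optimistic)
    snellen_denominator = 20.0 * mar_arcmin         # 20/20 letter detail = 1 arcmin
    return arcmin_per_pixel, f"20/{snellen_denominator:.0f}"

# Illustrative numbers only: a 40 degree horizontal field of view spread
# across an 800-pixel-wide microdisplay.
print(hmd_acuity_ceiling(40.0, 800))   # -> (3.0, '20/60')
```

    User-centered benchmarking as described in the abstract goes beyond such a ceiling estimate by measuring acuity and stereoacuity with actual observers, so that user factors (e.g., dark focus shifts) can be separated from display limitations.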