4 research outputs found

    A comparison of guiding techniques for out-of-view objects in full-coverage displays

    Full-coverage displays can place visual content anywhere on the interior surfaces of a room (e.g., a weather display near the coat stand). In these settings, digital artefacts can be located behind the user and out of their field of view, meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the relevant location in room environments. We designed five diverse attention-guiding techniques for full-coverage display rooms and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed the benefits of persistent visualisations that can be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.
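The distance-to-target cue that such guiding visualisations convey reduces to simple planar geometry. A minimal sketch, assuming a 2D room coordinate system; the function name and conventions are illustrative, not taken from the paper:

```python
import math

def guidance_cue(user_pos, user_heading_deg, target_pos):
    """Compute what a guiding visualisation must convey for an
    out-of-view target: the signed turn angle and the distance.

    user_pos, target_pos: (x, y) room coordinates in metres.
    user_heading_deg: direction the user faces, in degrees (0 = +x axis).
    Returns (offset_deg, distance_m); a positive offset means
    "turn counter-clockwise".
    """
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Normalise to (-180, 180] so the cue points the short way round.
    offset = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    distance = math.hypot(dx, dy)
    return offset, distance
```

A persistent visualisation would re-evaluate this each frame, so the cue can be followed all the way to the target as the user turns and moves.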

    Augmented Reality Assistance for Surgical Interventions using Optical See-Through Head-Mounted Displays

    Augmented Reality (AR) offers an interactive user experience by enhancing the real-world environment with computer-generated visual cues and other perceptual information. It has been applied in many domains, e.g. manufacturing, entertainment and healthcare, through different AR media. An Optical See-Through Head-Mounted Display (OST-HMD) is specialized hardware for AR, in which computer-generated graphics are overlaid directly onto the user's normal vision via optical combiners. Using an OST-HMD for surgical intervention has many potential perceptual advantages. As a novel concept, many technical and clinical challenges must be overcome for OST-HMD-based AR to be clinically useful, which motivates the work presented in this thesis. On the technical side, we first investigate the display calibration of the OST-HMD, an indispensable procedure for creating an accurate AR overlay. We propose various methods to reduce user-related error, improve the robustness of the calibration, and remodel the calibration as a 3D-3D registration problem. Secondly, we devise methods and develop a hardware prototype to increase the user's visual acuity for both real and virtual content through the OST-HMD, to aid tasks that require high visual acuity, e.g. dental procedures. Thirdly, we investigate the occlusion caused by the OST-HMD hardware, which limits the user's peripheral vision, and propose alternative indicators to alert the user to unattended motion in the environment. From the clinical perspective, we identified many clinical use cases where OST-HMD-based AR is potentially helpful, developed applications integrated with current clinical systems, and conducted proof-of-concept evaluations. We first present a "virtual monitor" for image-guided surgery, which can replace real radiology monitors in the operating room with easier user control and more flexibility in positioning. We evaluated the "virtual monitor" for simulated percutaneous spine procedures.
Secondly, we developed ARssist, an application for the bedside assistant in robotic surgery. With ARssist, the assistant can see the robotic instruments and endoscope inside the patient's body. We evaluated the efficiency, safety and ergonomics of the assistant during two typical tasks: instrument insertion and manipulation. ARssist significantly improved the performance of inexperienced users, and for experienced users it significantly enhanced their confidence. Lastly, we developed ARAMIS, which uses real-time 3D reconstruction and visualization to aid the laparoscopic surgeon, demonstrating the concept of "X-ray see-through" surgery. Our preliminary evaluation validated the application in a peg transfer task and showed a significant improvement in hand-eye coordination. Overall, we have demonstrated that OST-HMD-based AR applications provide ergonomic improvements, e.g. in hand-eye coordination. In challenging situations or for novice users, these ergonomic improvements translate into better task performance. With continued effort as a community, optical see-through augmented reality will become a useful interventional aid in the near future.
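The thesis recasts display calibration as a 3D-3D registration problem. The standard least-squares solution to such a problem is the SVD method of Arun et al., sketched below as a generic illustration (this is the classic algorithm, not the thesis's actual calibration pipeline):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    i.e. minimising sum_i ||q_i - (R p_i + t)||^2, via the SVD method
    of Arun et al. P, Q: (N, 3) arrays of corresponding 3D points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

For calibration, P would be points in one coordinate frame (e.g. eye or tracker space) and Q their correspondences in another; the closed form avoids iterative optimisation and is exact for noise-free correspondences.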

    Coupling advanced interaction techniques with interactive 3D virtual environments

    The work of this thesis sits at the boundary between two complementary research areas: the field of 3D Virtual Environments (3DVE), from Computer Graphics (CG) and Virtual Reality (VR), and the field of Human-Computer Interaction (HCI). It rests on three observations. First, 3DVEs are taking a growing place in our daily lives (video games, serious games, e-commerce, museums, on the web and on mobile devices). Second, HCI is becoming more complex with the emergence of advanced forms of interaction such as ambient computing, tangible interaction, and spatial and gestural interaction, an evolution accompanied by a diversification of input devices (the 3D mouse, the Wiimote, the Kinect, the Leap Motion). Third, the design of interaction techniques for 3DVEs raises different considerations in the 3DVE and HCI communities. Taking advantage of the most recent insights of the 3DVE community (metaphors, quality of 3D interaction) and of the HCI community (advanced forms of interaction) therefore requires developing the coupling between advanced interaction techniques and 3DVEs. In this context, the objective of this thesis is to contribute to the development of interactive 3D environments in multiple situations, including large-audience settings. Our approach builds a bridge between 3D and HCI design considerations to improve the coupling of advanced interaction techniques with interactive 3D virtual environments.
After analysing methods for designing interaction techniques for 3DVEs, our first contribution is a design framework for 3D interaction. The framework aggregates design issues stemming from 3D and from HCI, and helps the designer identify the elements involved in coupling an interaction technique with a 3DVE. It is based on an analysis of the links between user tasks and the elements of the 3DVE affected by those tasks. To characterise each link precisely, we introduce the 3DIM (3D Interaction Modality) notation, which describes the elements constituting a "3D interaction modality" for accomplishing a user's interaction task in a 3DVE. We group these elements into six blocks: the user, the physical actions, the physical objects, the input devices, the 3D behaviours, and the 3D interactive objects. We complete the framework with analytical properties that guide the designer, giving our conceptual model of advanced interaction techniques for 3DVEs descriptive, evaluative, and generative power.
In collaboration with the museum of the Pic du Midi observatory in France, we used the framework to design and implement tangible and smartphone-based interaction techniques. Museum visitors use these techniques in a 3DVE of the Telescope Bernard Lyot to explore it and understand how it works. We conducted three user studies exploring the design space of smartphone interaction with a 3DVE, using the smartphone as a touch device, as a tangible object, or as a support for mid-air gestural interaction around the device, to navigate, select, and manipulate 3D objects displayed on a large remote screen.
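The six-block decomposition of a 3D interaction modality can be illustrated as a small data model. Field names and the example values below are hypothetical illustrations, not the thesis's actual 3DIM notation:

```python
from dataclasses import dataclass, fields

@dataclass
class InteractionModality3D:
    """Sketch of the six 3DIM blocks: who acts, what they do physically,
    with which objects and devices, and which 3D behaviours and 3D
    objects are affected in the 3DVE."""
    user: str                       # who performs the interaction task
    physical_actions: list          # e.g. tilting, tapping
    physical_objects: list          # tangible artefacts manipulated
    input_devices: list             # sensing hardware
    behaviours_3d: list             # mappings applied inside the 3DVE
    interactive_objects_3d: list    # 3D elements affected by the task

# One museum technique described as a 3DIM link (values invented
# for illustration, loosely based on the smartphone-as-tangible case):
telescope_rotation = InteractionModality3D(
    user="museum visitor",
    physical_actions=["tilt the phone"],
    physical_objects=["smartphone body"],
    input_devices=["smartphone IMU"],
    behaviours_3d=["map tilt angle to model rotation"],
    interactive_objects_3d=["Telescope Bernard Lyot model"],
)
```

Making each block an explicit field is what gives such a notation its descriptive power: two techniques can be compared link by link, and a missing or overloaded block becomes visible at design time.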