    Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means of facilitating human-machine interaction, both enhancing intravehicular space operations and ensuring their correct sequence. Efforts to ease such operations have shown strong interest in novel forms of human-computer interaction such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, including the effect of altered gravity on handling such interfaces.

    The Effect of an Occluder on the Accuracy of Depth Perception in Optical See-Through Augmented Reality

    Three experiments were conducted to study the effect of an occluder on the accuracy of near-field depth perception in optical see-through augmented reality (AR). The first experiment replicated the experiment of Edwards et al. [2004]. We found more accurate results than Edwards et al. did, and found no main effect of the occluder, nor a two-way interaction between occluder and distance, on the accuracy of observers' depth matching. The second experiment was an updated version of the first, using a within-subject design and a more accurate calibration method. Errors ranged from –5 to 3 mm when the occluder was present and –3 to 2 mm when it was absent, and observers judged the virtual object to be closer after the occluder was presented. The third experiment was conducted on three subjects who were depth perception researchers; its results showed significant individual effects.

    Understanding Users’ Capability to Transfer Information between Mixed and Virtual Reality: Position Estimation across Modalities and Perspectives

    Mixed Reality systems combine physical and digital worlds and hold great potential for the future of HCI. By combining complementary technologies, it is possible to design systems that support flexible degrees of virtuality. For such systems to succeed, users must be able to form unified mental models out of heterogeneous representations. In this paper, we present two studies focusing on users' accuracy in heterogeneous systems that use Spatial Augmented Reality (SAR) and immersive Virtual Reality (VR) displays and that combine egocentric and exocentric viewpoints. The results show robust estimation capabilities across conditions and viewpoints.

    Natural freehand grasping of virtual objects for augmented reality

    Grasping is a primary form of interaction with the surrounding world and, owing to the highly complex structure of the human hand, an intuitive interaction technique by nature. Translating this versatile technique to Augmented Reality (AR) can give interaction designers more opportunities to implement intuitive and realistic AR applications. The work presented in this thesis uses quantifiable measures to evaluate the accuracy and usability of natural grasping of virtual objects in AR environments, and presents methods for improving this natural form of interaction. Following a review of physical grasping parameters and current methods of mediating grasping interactions in AR, a comprehensive analysis of natural freehand grasping of virtual objects in AR is presented to assess the accuracy, usability and transferability of this natural form of grasping to AR environments. The analysis comprises four independent user studies (120 participants, 30 per study, and 5760 grasping tasks in total), in which natural freehand grasping performance is assessed for a range of virtual object sizes, positions and types in terms of grasping accuracy, task completion time and overall system usability. Findings from the first user study highlighted two key problems for natural grasping in AR: inaccurate depth estimation and inaccurate size estimation of virtual objects. Following the quantification of these errors, three methods for mitigating user errors and assisting users during natural grasping were presented and analysed: dual view visual feedback, drop shadows, and additional visual feedback with user-based tolerances during interaction tasks. Dual view visual feedback significantly improved user depth estimation, but also significantly increased task completion time. Drop shadows provided an alternative, more usable solution, significantly improving depth estimation, task completion time and the overall usability of natural grasping. User-based tolerances negated the fundamental problem of inaccurate size estimation by enabling users to perform natural grasping without needing to be highly accurate, providing evidence that natural grasping can be usable in task-based AR environments. Finally, recommendations for enabling and further improving natural grasping interaction in AR are provided, along with guidelines for translating this form of grasping to other AR environments and user interfaces.
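
    As a minimal, hypothetical sketch of the user-based tolerance idea described above: a pinch grasp is accepted whenever the finger aperture falls within a margin of the virtual object's width, so users need not match it exactly. The function names, coordinate conventions, and the 2 cm tolerance are illustrative assumptions, not the thesis implementation.

```python
import math

def aperture(thumb_tip, index_tip):
    """Euclidean distance between thumb and index fingertip positions (metres)."""
    return math.dist(thumb_tip, index_tip)

def grasp_accepted(thumb_tip, index_tip, object_width, tolerance=0.02):
    """Accept a pinch grasp when the aperture is within +/- tolerance of the
    virtual object's width, rather than requiring an exact size match."""
    return abs(aperture(thumb_tip, index_tip) - object_width) <= tolerance

# Example: a 10 cm cube grasped with a 10.8 cm aperture still counts as a grasp.
print(grasp_accepted((0.0, 0.0, 0.0), (0.108, 0.0, 0.0), object_width=0.10))  # True
```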

    The influence of body orientation relative to gravity on egocentric distance estimates in immersive virtual environments

    Virtual reality head-mounted displays (VR-HMDs) are a flexible tool that can immerse individuals in a variety of virtual environments and can account for an individual's head orientation within those environments. Additionally, VR-HMDs allow participants to explore environments while maintaining different body positions (e.g., sitting or lying down). How these discrepancies between real-world body position and the virtual environment affect the perception of virtual space, and how a visual upright combined with incongruent changes in head orientation affects space perception within VR, have not been fully defined. In this study we sought to further understand how changes in body orientation (lying supine, lying prone, lying on the left side, and upright), while a steady visual upright is maintained (presented in the Oculus Rift DK1), affect the perception of distance. We used a new psychophysics perceptual-matching approach with two probe configurations (L-shaped and T-shaped) to extract distance perception thresholds in the four positions at egocentric distances of 4, 5, and 6 meters. Our results indicate that changes in orientation with respect to gravity affect the perception of distances within a virtual environment maintained at a visual upright. In particular, we found significant differences between perceived distances in the upright condition and those in the prone and lying-on-left-side positions. Additionally, distance perception results were affected by differences in probe configuration. Our results add to a body of work examining how changes in head and body orientation affect the perception of distance; however, more research is needed to fully understand how these changes with respect to gravity affect the perception of space within virtual environments.
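
    To illustrate the kind of quantity a perceptual-matching protocol yields, the toy sketch below aggregates hypothetical matched settings for a 5 m target by body orientation and reports the perceived-to-actual ratio; all numbers are invented for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical matched distances (metres) for a 5 m target, by body orientation.
matches = {
    "upright":   [4.9, 5.1, 5.0],
    "supine":    [4.6, 4.7, 4.8],
    "prone":     [4.3, 4.5, 4.4],
    "left side": [4.4, 4.3, 4.6],
}

TARGET_M = 5.0
for posture, settings in matches.items():
    ratio = mean(settings) / TARGET_M  # ratio < 1 means distance was underestimated
    print(f"{posture:10s} perceived/actual = {ratio:.2f}")
```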

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, the ability to comprehend location and object information when viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators' perceptual awareness of the task or environment. The effectiveness with which x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. It is therefore crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This approach requires narrowing, as x-ray vision suffers from inherent limitations analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. These actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures, leveraging established methodology to experimentally analyze each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.

    Immersive analytics for oncology patient cohorts

    This thesis proposes a novel interactive immersive analytics tool and methods to interrogate cancer patient cohorts in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform that includes a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms using a game engine; Unity3D has been used to implement the visualisation. The work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed in this thesis concerns understanding transcriptomics in cancer samples: it specifically investigates gene expression data to determine the biological similarity revealed by patients' tumour samples' transcriptomic profiles, which reveal the active genes in different patients. In summary, the thesis contributes: i) a novel immersive analytics platform for patient cohort data interrogation in a similarity space based on patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on usability studies of exocentric and egocentric visualisation and of audio and sound design; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the decision-making engine to support the analytics process, and an application of optimal transport theory to missing data imputation that preserves the data distribution; and v) case studies showcasing the real-world application of the visualisation and its effectiveness.
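
    As a minimal sketch of what a patient similarity space can look like in code (assuming a patients-by-genes expression matrix; this is standard NumPy/scikit-learn usage, not VROOM's actual pipeline): pairwise correlation distances between transcriptomic profiles are embedded into 3D coordinates that an immersive scene could then display.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy stand-in for a patients x genes expression matrix.
rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 200))

# Correlation distance between patients' transcriptomic profiles:
# 0 = identical profiles, 2 = perfectly anti-correlated.
dist = 1.0 - np.corrcoef(expression)

# Embed patients in 3D so that biologically similar profiles sit close
# together; these coordinates could drive point placement in the VR scene.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)  # (50, 3)
```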

    Rotational and Translational Velocity and Acceleration Thresholds for the Onset of Cybersickness in Virtual Reality

    This paper determined rotational and translational velocity and acceleration thresholds for the onset of cybersickness. Cybersickness causes discomfort and discourages the widespread use of virtual reality systems for both recreational and professional purposes. Visual motion, or optic flow, is known to be one of the main causes of cybersickness due to the sensory conflict it creates with the vestibular system. The aim of this experiment was to detect the rotational and translational velocity and acceleration thresholds that cause the onset of cybersickness. Participants were exposed to a moving particle field in virtual reality for a few seconds per run. The field moved in different directions (longitudinal, lateral, roll, and yaw), with different velocity profiles (steady and accelerating) and different densities. Using a staircase procedure that controlled the speed or acceleration of the field, we detected the threshold at which participants started to feel temporary symptoms of cybersickness. The optic flow was quantified for each motion type, and by modifying the number of features we kept the amount of optic flow the same in each scene, allowing a direct comparison of the thresholds. The results show that the velocity and acceleration thresholds for rotational optic flow were significantly lower than those for translational optic flow. The thresholds tended to decrease with decreasing particle density of the scene. Finally, all the rotational and translational thresholds were found to correlate strongly with each other. While the mean threshold values could be used as guidelines for developing virtual reality applications, the high variability between individuals implies that individual tuning of motion controls would be more effective at reducing cybersickness while minimizing the impact on the experience of immersion.
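
    A minimal sketch of an adaptive staircase of the kind described above, assuming a simple 1-up/1-down rule: the field speed increases after each symptom-free run, decreases after each symptomatic one, and the onset threshold is estimated from the last few reversal points. The starting speed, step size, and reversal count are illustrative assumptions, not the paper's parameters.

```python
def run_staircase(get_response, start=2.0, step=0.25, n_reversals=8):
    """get_response(speed) -> True if the participant reports symptoms
    after a short exposure to the particle field at that speed."""
    speed = start
    reversals = []
    last_direction = None
    while len(reversals) < n_reversals:
        symptomatic = get_response(speed)
        direction = -1 if symptomatic else +1  # step down on symptoms, up otherwise
        if last_direction is not None and direction != last_direction:
            reversals.append(speed)  # a direction change brackets the threshold
        last_direction = direction
        speed = max(0.0, speed + direction * step)
    # Estimate the onset threshold as the mean of the final reversal speeds.
    return sum(reversals[-6:]) / len(reversals[-6:])

# Example with a deterministic fake observer whose true threshold is 3.0:
print(run_staircase(lambda s: s >= 3.0))  # converges near 3.0
```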

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how the distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, only four previous studies have specifically examined distance perception in AR within reaching distances; distance perception in augmented reality therefore remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. It found that real objects are perceived more accurately than virtual objects, and that matching is a more accurate distance measure than reaching. Experiment II compared the two protocols in real-world and augmented reality environments with improved proprioceptive and visual feedback, and found that reaching responses in the AR environment became more accurate with the improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments, finding nearly accurate distance responses in the consistent and midpoint conditions and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments and found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments help us understand how depth perception operates in augmented reality.