6,352 research outputs found

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed point of view user perspective rendering.
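    A rough sketch of the kind of lightweight optical-flow gating described above; the OpenCV pipeline, camera index and motion threshold are assumptions for illustration, not the authors' implementation. Coarse user motion is estimated from sparse feature tracks on front-camera frames and used to decide when to hand over to the more expensive head tracker.

```python
# Hypothetical sketch: estimate coarse user motion from front-camera frames with
# sparse optical flow and only start the (expensive) face/head tracker once the
# apparent motion exceeds a threshold. Camera index and threshold are assumptions.
import cv2
import numpy as np

MOTION_THRESHOLD_PX = 2.0  # assumed mean per-frame feature displacement that triggers head tracking

def mean_flow(prev_gray, gray):
    """Mean displacement of sparse features tracked with pyramidal Lucas-Kanade."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01, minDistance=8)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.flatten() == 1
    if not good.any():
        return 0.0
    return float(np.linalg.norm(nxt[good] - pts[good], axis=2).mean())

cap = cv2.VideoCapture(0)  # front camera; the device index is platform specific
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera frame available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
head_tracking_active = False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if not head_tracking_active and mean_flow(prev_gray, gray) > MOTION_THRESHOLD_PX:
        head_tracking_active = True  # hand over to the full face/head tracker here
    prev_gray = gray
```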

    [DC] Outdoor AR Tracking Evaluation and Tracking with Prior Map

    We are very interested in addressing the problem of building city-scale AR systems where users can travel anywhere at any time and see the correct graphics registered in the world around them. One crucial requirement for this is accurate tracking and localisation. In my work, I propose to tackle two themes. The first is to examine what good registration means in uncontrolled outdoor environments. The second is to explore how prior information can be used to support wide-area tracking efficiently and robustly.

    Handheld Augmented Reality: Effect of registration jitter on cursor-based pointing techniques

    Handheld Augmented Reality relies on the registration of digital content on physical objects. Yet the accuracy of this registration depends on environmental conditions. It is therefore important to study the impact of registration jitter on interaction, and in particular on pointing at augmented objects, where precision may be required. We present an experiment that compares the effect of registration jitter on the following two pointing techniques: (1) screen-centered crosshair pointing; and (2) relative pointing with a cursor bound to the physical object's frame of reference and controlled by indirect relative touch strokes on the screen. The experiment considered both tablet and smartphone form factors. Results indicate that relative pointing in the frame of the physical object is less error prone and less subject to registration jitter than screen-centered crosshair pointing.
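    As a toy illustration of how registration jitter might be modelled (an assumed model, not the paper's apparatus), the sketch below perturbs an augmented target with zero-mean Gaussian jitter and estimates how often a screen-centered crosshair selection still lands inside it.

```python
# Toy jitter model (an assumption, not the paper's apparatus): add zero-mean
# Gaussian registration jitter to an augmented target and measure how often a
# screen-centered crosshair selection still lands inside it.
import random

def hit_rate(target_radius_px, jitter_sigma_px, trials=10_000):
    """Fraction of selections that hit the target despite per-frame jitter."""
    hits = 0
    for _ in range(trials):
        dx = random.gauss(0.0, jitter_sigma_px)
        dy = random.gauss(0.0, jitter_sigma_px)
        if dx * dx + dy * dy <= target_radius_px ** 2:
            hits += 1
    return hits / trials

# Example: a 10 px target under 2 px versus 6 px of registration jitter
print(hit_rate(10, 2), hit_rate(10, 6))
```

    Under such a model, a cursor anchored to the object's frame of reference would not inherit the per-frame displacement in the same way, which is consistent with the advantage reported for relative pointing.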

    An Investigation of Skill Acquisition under Conditions of Augmented Reality

    Augmented reality is a virtual environment that integrates rendered content with the experience of the real world. There is evidence suggesting that augmented reality provides for important spatial constancy of objects relative to the real-world coordinate system and that this quality contributes to rapid skill acquisition. The qualities of simulation, through the use of augmented reality, may be incorporated into actual job activities to produce a condition of just-in-time learning. This may make possible the rapid acquisition of information and reliable completion of novel or infrequently performed tasks by individuals possessing a basic skill-set. The purpose of this research has been to investigate the degree to which the acquisition of a skill is enhanced through the use of an augmented reality training device.

    The Effectiveness of Monitor-Based Augmented Reality Paradigms for Learning Space-Related Technical Tasks

    Today there are many types of media that can help individuals learn and excel in the ongoing effort to acquire knowledge for a specific trait or function in a workplace, laboratory, or learning facility. Technology has advanced in the fields of transportation, information gathering, and education. The need for better recall of information is in demand in a wide variety of areas. Augmented reality (AR) is a technology that may help meet this demand. AR is a hybrid of reality and virtual reality (VR) that uses the three-dimensional location viewed through a video or optical see-through medium to capture the object's coordinates and superimpose virtual images, objects, or text on the scene (Azuma, 1997). The purpose of this research is to investigate four different modes of presentation and the effect of those modes on learning and recall of information using monitor-based Augmented Reality. The four modes of presentation are Select, Observe, Interact and Print modes. Each mode possesses different attributes that may affect learning and recall. The Select mode is a mode of presentation that allows movement of the work piece in front of the tracking camera. The Observe mode involves information presentation using a pre-recorded video scene presented with no interaction with the work piece. The Interact mode presents a pre-recorded video scene in which the user can point and click on components of the work piece with a computer mouse on the monitor. The Print mode consists of printed material for each work piece component. It was hypothesized that the Select mode would provide the user with the richest presentation of information due to information access capabilities helping to decrease work time, reduce the likelihood of errors during usage, enhance the user's motivation for learning tasks, and increase concurrent learning and performance due to recall and retention. It was predicted that the Select mode would result in trainees who would recall the greatest amount of information even after extended periods of time had elapsed. This hypothesis was not supported. No significant differences between the four groups were found.

    How Wrong Can You Be: Perception of Static Orientation Errors in Mixed Reality


    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness - a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of ambient IoT devices (wireless sensors and actuators placed in the surrounding environment) for the measurement and optimization of environment properties. In this book chapter, we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
    Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.
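    As a loose sketch of the idea (the environment model, sensor names and adaptation rule below are assumptions, not taken from the chapter), an AR application might maintain a simple environment model fed by ambient IoT sensor readings and adapt its rendering from it:

```python
# Illustrative sketch only: a hypothetical environment model that an AR app
# could update from ambient IoT sensor readings; sensor names, fields and the
# adaptation rule are assumptions, not from the chapter.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class EnvironmentModel:
    """Aggregated ambient context the renderer can query each frame."""
    readings: Dict[str, float] = field(default_factory=dict)

    def ingest(self, sensor_id: str, value: float) -> None:
        self.readings[sensor_id] = value

    def ambient_lux(self) -> float:
        # Average over all light sensors; 500 lux assumed as a fallback value.
        lux = [v for k, v in self.readings.items() if k.startswith("light/")]
        return sum(lux) / len(lux) if lux else 500.0

env = EnvironmentModel()
env.ingest("light/desk", 120.0)    # hypothetical ambient light sensor
env.ingest("light/window", 800.0)

# e.g. scale virtual content brightness with ambient light so it stays legible
content_brightness = min(1.0, env.ambient_lux() / 1000.0)
print(round(content_brightness, 2))
```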