1,418 research outputs found

    Scene creation and exploration in outdoor augmented reality

    This thesis investigates Outdoor Augmented Reality (AR), especially for scene creation and exploration. We decompose a scene into several components: a) Device, b) Target Object(s), c) Task, and discuss their interrelations. Based on those relations we outline use-cases and workflows. The main contribution of this thesis is providing AR-oriented workflows for selected professional fields, specifically for scene creation and exploration purposes, through case studies as well as an analysis of the relations between AR scene components. Our contributions include, but are not limited to: i) an analysis of scene components that factors in inherently available errors, to create a transitional hybrid tracking scheme for multiple targets; ii) a novel image-based approach that uses a building-block analogy for modelling and introduces volumetric and temporal labeling for annotations; iii) an evaluation of state-of-the-art X-Ray visualization methods as well as our proposed multi-view method. AR technology and capabilities tend to change rapidly; however, we believe the relations between scene components, and the practical advantages their analysis provides, remain valuable. Moreover, we have chosen case studies as diverse as possible in order to cover a wide range of professional field studies. We believe our research is extensible to a variety of field studies for disciplines including, but not limited to: archaeology, architecture, cultural heritage, tourism, stratigraphy, civil engineering, and urban maintenance.
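    As a rough illustration of the Device / Target Object(s) / Task decomposition described above, the following sketch models a scene's components and a toy rule for switching tracking modes. All class names, fields, and the error threshold are hypothetical and not taken from the thesis.

```python
# Minimal, illustrative data model for the Device / Target / Task scene
# decomposition. Names and threshold values are assumptions for this sketch.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Device:
    has_gps: bool
    has_compass: bool
    camera_resolution: Tuple[int, int]  # (width, height) in pixels


@dataclass
class TargetObject:
    name: str
    position_error_m: float  # inherent localization error to factor in


@dataclass
class Scene:
    device: Device
    targets: List[TargetObject] = field(default_factory=list)
    task: str = "exploration"  # e.g. "creation" or "exploration"

    def tracking_scheme(self) -> str:
        # Toy rule (not from the thesis): fall back to sensor-only tracking
        # when the targets' inherent position error is large; otherwise use
        # a hybrid vision + sensor scheme.
        worst = max((t.position_error_m for t in self.targets), default=0.0)
        return "sensor" if worst > 5.0 else "hybrid"


scene = Scene(Device(True, True, (1920, 1080)),
              [TargetObject("facade", position_error_m=2.0)])
print(scene.tracking_scheme())  # -> "hybrid"
```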

    Augmented Reality for Subsurface Utility Engineering, Revisited


    Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments

    People often become disoriented when navigating in complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation. We address it through a two-pronged approach combining basic and applied research questions. Of theoretical interest, we investigate how multi-level built environments are learned and structured in memory. The concept of multi-level cognitive maps and a framework of multi-level cognitive map development are provided. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users’ development of multi-level cognitive maps. The findings of these studies provide important design guidelines that can be used by architects and help to better understand why people get lost in buildings. On the applied side, we investigate how to design user-friendly visualization interfaces that augment users’ capability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the limited visual access in built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques to be used in future indoor navigation systems. In sum, this dissertation adopts an interdisciplinary approach, combining theories from the fields of spatial cognition, information visualization, and HCI, to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why people get lost inside multi-level buildings. The results provide both theoretical and applied knowledge, and contribute to the growing field of real-time indoor navigation systems.
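    A multi-level cognitive map can be pictured, loosely, as a graph in which floors are linked by vertical connectors such as stairs or elevators, and a between-floor route is a path through that graph. The sketch below shows this framing; the node names and layout are invented for illustration and do not come from the dissertation.

```python
# Illustrative only: a tiny multi-level building graph and a breadth-first
# search for a between-floor route. Nodes are (floor, place) pairs.
from collections import deque

graph = {
    ("F1", "lobby"):      [("F1", "stair_A")],
    ("F1", "stair_A"):    [("F1", "lobby"), ("F2", "stair_A")],   # vertical link
    ("F2", "stair_A"):    [("F1", "stair_A"), ("F2", "office_210")],
    ("F2", "office_210"): [("F2", "stair_A")],
}


def between_floor_route(start, goal):
    """Breadth-first search over the multi-level graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])


print(between_floor_route(("F1", "lobby"), ("F2", "office_210")))
```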

    Projection Grid Cues: An Efficient Way to Perceive the Depths of Underground Objects in Augmented Reality

    Augmented Reality is increasingly used for visualizing underground networks. However, standard visual cues for depth perception have never been thoroughly evaluated in user experiments in a context involving physical occlusion (e.g. the ground) of virtual objects (e.g. elements of a buried network). We therefore evaluate the benefits and drawbacks of combining and hybridizing two well-known depth cues: a grid representing the ground and top-down shadow anchors projected from underground objects. More specifically, we explore how each combination contributes to positioning and depth perception. We show that with shadow anchors alone or combined with the grid, users make 2.7 times fewer errors and report a 2.5 times lower perceived workload (NASA-TLX score) than with the grid only or no visual cue. Our study shows that these are two effective techniques for visualizing underground objects. We also recommend the use of one technique or the other depending on the situation in which they will be used.
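    To make the projection-based cue concrete, the sketch below computes a top-down "shadow anchor" by projecting a buried object's position onto the ground plane, together with the depth value a renderer could label it with. This is an assumption-laden illustration of the general idea, not the authors' implementation.

```python
# Illustrative only: project a buried object's 3D position straight up onto
# the ground plane to obtain a "shadow anchor" cue and its depth label.
import numpy as np


def shadow_anchor(obj_position, ground_height=0.0):
    """Return the ground-plane anchor point and the depth below ground."""
    x, y, z = obj_position          # y is the vertical axis in this sketch
    anchor = np.array([x, ground_height, z])
    depth_below_ground = ground_height - y
    return anchor, depth_below_ground


# Example: a pipe segment 1.8 m below a ground plane at y = 0.
anchor, depth = shadow_anchor((12.4, -1.8, 3.0))
print(anchor, f"{depth:.1f} m below ground")
```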

    Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems.

    In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: 1) it reinforces the connections between people and objects, and promotes engineers' appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; 3) it offsets the significant cost of 3D model engineering by including the real-world background. The research has successfully overcome several long-standing technical obstacles in AR and investigated technical approaches to address fundamental challenges that prevent the technology from being usefully deployed in CIS applications, such as aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers. The research findings have been evaluated in several challenging CIS applications where the potential for a significant economic and social impact is high. Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to "see" buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd
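    The excavator-utility collision avoidance scenario can be pictured as a proximity check between the tracked bucket position and a buried utility polyline. The sketch below shows one such check; the geometry, threshold, and function names are assumptions for illustration, not the dissertation's code.

```python
# Illustrative only: warn when an excavator bucket comes within a threshold
# distance of any segment of a buried utility line (a polyline of 3D points).
import numpy as np


def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b (all 3D points)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))


def utility_strike_warning(bucket_pos, utility_polyline, threshold_m=1.0):
    """Return (warning, distance) for the closest approach to the line."""
    d = min(point_to_segment_distance(bucket_pos, a, b)
            for a, b in zip(utility_polyline, utility_polyline[1:]))
    return d < threshold_m, d


hit, dist = utility_strike_warning(
    bucket_pos=(5.0, -0.8, 2.0),
    utility_polyline=[(0, -1.5, 2), (10, -1.5, 2)])
print(hit, round(dist, 2))   # -> True 0.7
```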

    Architectural cue model in evacuation simulation for underground space design


    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on the discussion of phenomena and the determination of design principles.