
    Augmented Reality for Construction Site Monitoring and Documentation (Proceedings of the IEEE, Special Issue on Applications of Augmented Reality Environments)

    Abstract—Augmented Reality allows for an on-site presentation of information that is registered to the physical environment. Applications from civil engineering, which require users to process complex information, are among those that can benefit particularly from such a presentation. In this paper, we describe how to use Augmented Reality (AR) to support monitoring and documentation of construction site progress. For these tasks, the responsible staff usually require fast and comprehensible access to progress information to enable comparison of the as-built status with as-planned data. Instead of tediously searching for and mapping related information to the actual construction site environment, our AR system allows information to be accessed right where it is needed. This is achieved by superimposing progress information as well as as-planned information onto the user's view of the physical environment. For this purpose, we present an approach that uses aerial 3D reconstruction to automatically capture progress information and a mobile AR client for on-site visualization. Within this paper, we describe in greater detail how to capture 3D data, how to register the AR system within the physical outdoor environment, how to visualize progress information in a comprehensible way in an AR overlay, and how to interact with this kind of information. By implementing such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.
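    The as-built vs. as-planned comparison described above can be illustrated with a minimal sketch: given reconstructed 3D points and an as-planned element's bounding box, estimate how much of the element has been built by counting occupied voxels. All function and parameter names here are hypothetical; this is not the paper's actual pipeline.

    ```python
    import numpy as np

    def element_progress(points, element_min, element_max, grid=4):
        """Fraction of an as-planned element's voxel grid covered by
        as-built reconstruction points (illustrative sketch only)."""
        points = np.asarray(points, dtype=float)
        lo = np.asarray(element_min, dtype=float)
        hi = np.asarray(element_max, dtype=float)
        # Keep only points inside the element's axis-aligned bounding box.
        inside = points[np.all((points >= lo) & (points <= hi), axis=1)]
        if inside.size == 0:
            return 0.0
        # Map each point to a voxel index and count distinct occupied voxels.
        idx = np.floor((inside - lo) / (hi - lo) * grid).astype(int)
        idx = np.clip(idx, 0, grid - 1)
        occupied = {tuple(i) for i in idx}
        return len(occupied) / grid ** 3

    # A densely scanned unit cube should report (near-)complete progress.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, size=(5000, 3))
    print(round(element_progress(pts, [0, 0, 0], [1, 1, 1]), 2))  # → 1.0
    ```

    A real system would compare per-element coverage over time against the construction schedule; the voxel fraction is just one simple progress metric.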

    Scene creation and exploration in outdoor augmented reality

    This thesis investigates Outdoor Augmented Reality (AR), especially scene creation and exploration aspects. We decompose a scene into several components: a) Device, b) Target Object(s), c) Task, and discuss their interrelations. Based on those relations we outline use-cases and workflows. The main contribution of this thesis is providing AR-oriented workflows for selected professional fields, specifically for scene creation and exploration purposes, through case studies as well as analysis of the relations between AR scene components. Our contributions include, but are not limited to: i) an analysis of scene components, factoring in inherently present errors, to create a transitional hybrid tracking scheme for multiple targets; ii) a novel image-based approach that uses a building-block analogy for modelling and introduces volumetric and temporal labeling for annotations; iii) an evaluation of state-of-the-art X-Ray visualization methods as well as our proposed multi-view method. AR technology and capabilities tend to change rapidly; however, we believe the relations between scene components and the practical advantages their analysis provides are valuable. Moreover, we have chosen case studies as diverse as possible in order to cover a wide range of professional fields. We believe our research is extendible to a variety of disciplines including, but not limited to: archaeology, architecture, cultural heritage, tourism, stratigraphy, civil engineering, and urban maintenance.

    Leveraging cloudlets for immersive collaborative applications

    To enable immersive applications on mobile devices, the authors propose a component-based cyber foraging framework that optimizes application-specific metrics by not only offloading but also configuring application components at runtime. It also enables collaborative scenarios by sharing components between multiple devices.
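    The core offloading decision in such a cyber foraging framework can be sketched as a latency comparison: run a component locally, or ship its input to a cloudlet and run it there. This simplified model is an assumption for illustration — the actual framework also weighs energy, component configuration, and sharing between devices.

    ```python
    def choose_placement(local_ms, remote_ms, uplink_kb, bandwidth_kbps, rtt_ms):
        """Pick where to run a component by comparing estimated end-to-end
        latency of local execution vs. offloading (hypothetical sketch)."""
        # Offloading pays for the data transfer plus one round trip.
        transfer_ms = uplink_kb / bandwidth_kbps * 1000 + rtt_ms
        offload_ms = remote_ms + transfer_ms
        if offload_ms < local_ms:
            return "cloudlet", offload_ms
        return "local", local_ms

    # Heavy component, fast link: offloading wins.
    print(choose_placement(local_ms=120, remote_ms=15,
                           uplink_kb=200, bandwidth_kbps=10000, rtt_ms=20))
    ```

    Because the inputs (bandwidth, RTT, component runtimes) change at runtime, such a decision would be re-evaluated periodically rather than made once at startup.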

    Enabling Real-Time Shared Environments on Mobile Head-Mounted Displays

    Head-Mounted Displays (HMDs) are becoming more prevalent consumer devices, allowing users to experience scenes and environments from a point of view naturally controlled by their movement. However, there is limited application of this experiential paradigm to telecommunications -- that is, where an HMD user can 'call' a mobile phone user and begin to look around in their environment. In this thesis we present a telepresence system for connecting mobile phone users with people wearing HMDs, allowing the HMD user to experience the environment of the mobile user in real time. We developed an Android application that supports generating and transmitting high-quality spherical-panorama-based environments in real time, and a companion application for HMDs to view those environments live. This thesis focusses on the technical challenges involved in creating panoramic environments of sufficient quality to be suitable for viewing inside an HMD, given the constraints that arise from using mobile phones. We present computer vision techniques optimised for these constrained conditions, justifying the trade-offs made between speed and quality. We conclude by comparing our solution to conceptually similar past research along the metrics of computation speed and output quality.
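    Spherical panoramas of the kind described above are typically stored in an equirectangular layout, where longitude and latitude of a view direction map linearly to pixel coordinates. The sketch below shows that mapping; the function name and layout conventions are illustrative, not the thesis's actual implementation.

    ```python
    import math

    def dir_to_equirect(x, y, z, width, height):
        """Map a unit view direction to equirectangular pixel coordinates
        (y up, +z forward; a common convention, assumed here)."""
        lon = math.atan2(x, z)                    # [-pi, pi], 0 = straight ahead
        lat = math.asin(max(-1.0, min(1.0, y)))   # [-pi/2, pi/2], clamped for safety
        u = (lon / math.pi + 1.0) / 2.0 * width   # longitude spans the full width
        v = (0.5 - lat / math.pi) * height        # latitude spans the height, top = up
        return u, v

    # Looking straight ahead lands in the centre of the panorama.
    print(dir_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # → (2048.0, 1024.0)
    ```

    Stitching camera frames into such a panorama in real time on a phone is then a question of warping each frame through this projection quickly enough, which is where the speed/quality trade-offs the thesis discusses come in.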

    Mobile Augmented Reality for Flood Visualisation

    Mobile Augmented Reality (MAR) for environmental planning and design has hardly been touched upon, yet mobile smart devices are now capable of complex, interactive, and immersive real-time visualisations. We present a real-time immersive prototype MAR app for on-site content authoring and flood visualisation, combining available technologies to reduce implementation complexity. Networked access to live sensor readings provides rich real-time annotations. Our main goal was to develop a novel MAR app to complement existing flood risk management (FRM) tools and to understand how it is judged by water experts. We present the app's development in the context of the literature and conduct a small user study. Going beyond the presented work, the flexibility of the app permits a broad range of applications in planning, design, and environmental management.
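    One way such an app could turn a live gauge reading into an on-site visualisation is to convert the reading to an absolute water-surface elevation and render a virtual water plane at the resulting depth above local ground. This is a hypothetical sketch of that conversion, not the app's actual code; all names and datums are invented for illustration.

    ```python
    def flood_plane_height(sensor_level_m, sensor_datum_m, site_ground_m):
        """Height of the virtual water plane above the ground at the
        viewing site, from a live gauge reading (illustrative only)."""
        # Absolute water-surface elevation = gauge datum + gauge reading.
        water_surface_m = sensor_datum_m + sensor_level_m
        # Depth at the site; zero if the site sits above the water surface.
        return max(0.0, water_surface_m - site_ground_m)

    # A 2.3 m reading on a gauge at 10 m datum, viewed from ground at 11.5 m.
    print(flood_plane_height(sensor_level_m=2.3, sensor_datum_m=10.0,
                             site_ground_m=11.5))
    ```

    A production FRM tool would use surveyed elevations and hydraulic modelling rather than a flat plane, but a flat plane at the right elevation is a common first approximation for AR flood overlays.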

    Localisation and tracking of stationary users for extended reality

    In this thesis, we investigate the topics of localisation and tracking in the context of Extended Reality. In many on-site or outdoor Augmented Reality (AR) applications, users are standing or sitting in one place and performing mostly rotational movements, i.e. they are stationary. This type of stationary motion also occurs in Virtual Reality (VR) applications such as panorama capture, where a camera is moved in a circle. Both applications require us to track the motion of a camera in potentially very large and open environments. State-of-the-art methods such as Structure-from-Motion (SfM) and Simultaneous Localisation and Mapping (SLAM) tend to rely on scene reconstruction from significant translational motion in order to compute camera positions. This can often lead to failure in application scenarios such as tracking for seated sport spectators, or stereo panorama capture, where the translational movement is small compared to the scale of the environment. To begin with, we investigate the topic of localisation, as it is key to providing global context for many stationary applications. To achieve this, we capture our own datasets in a variety of large open spaces, including two sports stadia. We then develop and investigate localisation techniques in the context of these stadia using a variety of state-of-the-art approaches. We cover geometry-based methods to handle the dynamic aspects of a stadium environment, as well as appearance-based methods, and compare them to a state-of-the-art SfM system to identify the most applicable methods for server-based and on-device localisation. Recent work in SfM has shown that the type of stationary motion we target can be reliably estimated by applying spherical constraints to the pose estimation. In this thesis, we extend these concepts into a real-time keyframe-based SLAM system for the purposes of AR, and develop a unique data structure for simplifying keyframe selection. We show that our constrained approach tracks more robustly in these challenging stationary scenarios than state-of-the-art SLAM, through both synthetic and real-data tests. In the application of capturing stereo panoramas for VR, this thesis demonstrates the unsuitability of standard SfM techniques for reconstructing these circular videos. We apply and extend recent research in spherically constrained SfM to creating stereo panoramas and compare this with state-of-the-art general SfM in a technical evaluation. With a user study, we show that the motion requirements of our SfM approach are similar to the natural motion of users, and that a constrained SfM approach is sufficient for providing stereoscopic effects when viewing the panoramas in VR.
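    The spherical constraint mentioned above can be sketched as a pose parameterisation: instead of estimating all six degrees of freedom, the camera centre is forced onto a sphere of fixed radius, so the pose is determined entirely by rotation angles. This is a minimal illustration of the constraint, not the thesis's actual estimator; the angle conventions are assumptions.

    ```python
    import numpy as np

    def spherical_pose(yaw, pitch, radius):
        """World-to-camera pose whose centre is constrained to a sphere of
        fixed radius: two angles (plus the radius) replace six DoF."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        R = Rx @ Ry  # camera-from-world rotation
        # Camera centre on the sphere, determined entirely by the rotation.
        centre = radius * R.T @ np.array([0.0, 0.0, -1.0])
        t = -R @ centre  # translation consistent with x_cam = R x_world + t
        return R, t, centre

    R, t, c = spherical_pose(yaw=np.pi / 2, pitch=0.0, radius=1.0)
    print(np.round(c, 3))  # centre stays on the unit sphere
    ```

    Because the optimiser only searches over rotations (and a shared radius), this parameterisation remains well conditioned in the small-baseline scenarios where general SfM/SLAM degenerates.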