
    Mobile Augmented Reality for Flood Visualisation in Urban Riverside Landscapes

    The frequency of flooding events worldwide has increased significantly over recent decades, and with it the need to raise citizens’ awareness of potential dangers within local flood zones. Smartphones provide a feasible means of educating the public in this way. We present a working smartphone app that engages the public with local flood zones by visualising potential flood levels. An interactive augmented reality (AR) tool provides in situ modelling of simple prototype 3D building models (cuboids) along a riverside, which are used to “occlude” an augmented flood plane within the scene. The flood plane height may be adjusted by the user. We discuss related AR work, tools for real-time in situ geometry modelling and app operation, and present an on-site demonstration.
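
    As a rough illustration of the occlusion step described above, the sketch below composites a synthetic flood plane against a depth buffer rendered from proxy cuboids. This is a minimal stand-in assuming per-pixel depths are available; it is not the app's actual renderer, and all buffers and values are made up.

```python
# Minimal sketch of depth-based occlusion, assuming per-pixel depths;
# the app's real rendering pipeline is not described in the abstract.
import numpy as np

H, W = 480, 640

# Hypothetical depth buffer rendered from the cuboid building proxies
# (metres); pixels not covered by a proxy keep the far-plane value.
proxy_depth = np.full((H, W), np.inf)
proxy_depth[200:480, 100:300] = 8.0          # a "building" 8 m away

# Per-pixel depth of the flood plane for a user-chosen water level.
# In a real app this comes from projecting the plane through the camera
# pose; a constant 12 m is used here purely for illustration.
flood_depth = np.full((H, W), 12.0)

# The flood plane shows only where no real (proxy) surface is in front.
flood_visible = flood_depth < proxy_depth

# Composite a water tint over the camera frame at the visible pixels.
frame = np.zeros((H, W, 3), dtype=np.float32)         # camera image stand-in
water = np.array([0.1, 0.3, 0.8], dtype=np.float32)   # water colour
frame[flood_visible] = 0.5 * frame[flood_visible] + 0.5 * water
print(flood_visible.mean())                           # fraction of flooded pixels
```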

    Towards exploring future landscapes using augmented reality

    With increasing pressure to better manage the environment, many government and private organisations are studying the relationships between social, economic and environmental factors to determine how these can best be optimised for increased sustainability. The analysis of such relationships is undertaken using computer-based Integrated Catchment Models (ICM). These models are capable of generating multiple scenarios depicting alternative land uses at a variety of temporal and spatial scales, which present (potentially) better Triple-Bottom-Line (TBL) outcomes than the prevailing situation. Dissemination of these data is, for the most part, reliant on traditional, static map products; however, the ability of such products to display the complexity and temporal aspects of the scenarios is limited, which ultimately undervalues both the knowledge incorporated in the models and the capacity of stakeholders to disseminate the complexities through other means. Geovisualisation provides tools and methods for disseminating large volumes of spatial (and associated non-spatial) data. Virtual Environments (VE) have been utilised for various aspects of landscape planning for more than a decade. While such systems are capable of visualising large volumes of data at ever-increasing levels of realism, they restrict the user’s ability to accurately perceive the (virtual) space. Augmented Reality (AR) is a visualisation technique that allows users the freedom to explore a physical space and have that space augmented with additional, spatially referenced information. A review of existing mobile AR systems forms the basis of this research. A theoretical mobile outdoor AR system using Commercial-Off-The-Shelf (COTS) hardware and open-source software is developed. The specific requirements for visualising land use scenarios in a mobile AR system were derived using a usability engineering approach known as Scenario-Based Design (SBD). This determined the elements required in the user interfaces, resulting in the development of a low-fidelity, computer-based prototype. The prototype user interfaces were evaluated by participants from two targeted stakeholder groups undertaking hypothetical use scenarios. Feedback from participants was collected using the cognitive walk-through technique and supplemented by evaluator observations of participants’ physical actions. Results from this research suggest that the prototype user interfaces did provide the necessary functionality for interacting with land use scenarios. While there were some concerns about the potential implementation of “yet another” system, participants were able to envisage the benefits of visualising land use scenario data in the physical environment.
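
    To make the core geo-referencing step of such a mobile outdoor AR system concrete, here is a minimal sketch of projecting a geo-referenced scenario point into the view of a GPS/compass-tracked camera. It assumes a small-area equirectangular approximation and a pinhole camera; none of the names, values or conventions come from the thesis.

```python
# Minimal sketch (assumed, not the thesis's implementation) of placing a
# geo-referenced scenario feature in a compass-tracked mobile camera view.
import numpy as np

R_EARTH = 6371000.0  # mean Earth radius in metres

def geo_to_enu(lat, lon, lat0, lon0):
    """Small-area east/north offset (metres) of (lat, lon) from (lat0, lon0)."""
    east = np.radians(lon - lon0) * R_EARTH * np.cos(np.radians(lat0))
    north = np.radians(lat - lat0) * R_EARTH
    return np.array([east, north])

def project(point_en, height, cam_heading_deg, fx=800.0, cx=320.0, cy=240.0):
    """Project an east/north point into a pinhole camera facing `heading`."""
    h = np.radians(cam_heading_deg)          # heading clockwise from north
    right = point_en[0] * np.cos(h) - point_en[1] * np.sin(h)
    forward = point_en[0] * np.sin(h) + point_en[1] * np.cos(h)
    if forward <= 0:
        return None                          # behind the camera
    u = cx + fx * right / forward
    v = cy - fx * height / forward
    return u, v

# A scenario feature ~100 m north-east of the user, 2 m above eye level.
feature = geo_to_enu(51.5010, -0.1410, 51.5001, -0.1419)
print(project(feature, 2.0, cam_heading_deg=45.0))
```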

    Collaborative mixed reality environments: an application for civil engineering

    This thesis designs, implements and evaluates a channel for interaction between office and field users through a collaborative mixed reality system. The channel is intended for civil engineering use and is thus oriented toward the design and construction phases. Its application should help reduce the challenges of communication, collaboration and mutual understanding faced by those involved in a civil engineering project. Such challenges can become real problems for multidisciplinary teams of architects, engineers and constructors working on the same project. In the context of this thesis, outdoor users are equipped with a real-time kinematic global positioning system (GPS) receiver, a notebook, a head-mounted display, a tilt sensor and a compass. A virtual environment representing components of a civil engineering project is displayed before their eyes. Outdoor users share this collaborative virtual environment with indoor users, who can talk to and see them through an avatar and can take part from any location with Internet access. The goal of this thesis is to show that a networked solution of at least two users (in this case, one indoor and one outdoor) allows outdoor users to perform complex tasks while experiencing an immersive augmented reality application. Indoor users interact with outdoor users when handling and navigating the virtual environment, guiding their counterpart through the scene and establishing common points of understanding. The thesis evaluates how users interact within a prototype system using a formative approach. Users are introduced to the system and encouraged to “think aloud”, verbalising what they are experiencing during the tests. All users are video-recorded while performing the exercises and interviewed immediately afterwards. The evaluation reveals that users end up experiencing a system that is too immersive, narrowing their “attentional spotlight” to the virtual environment rather than, as desired, to an augmented view of reality. The evaluation also makes clear that the design of the virtual environment ultimately matters more to users than the system itself, and depends strongly on the kind of application it is used for and who the users are.
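
    As a rough sketch of how the listed sensors could be fused into an outdoor user's pose, the snippet below composes a camera-to-world transform from an RTK GPS position, a compass heading and a tilt angle. The axis conventions and matrix layout are one common choice, assumed here; they are not the thesis's implementation.

```python
# Minimal sketch: pose from GPS position + compass heading + tilt sensor.
# World is y-up; heading rotates about the up axis, tilt pitches the view.
import numpy as np

def outdoor_pose(position, heading_deg, pitch_deg):
    """4x4 camera-to-world transform; `position` in metres, y-up world."""
    h, p = np.radians(heading_deg), np.radians(pitch_deg)
    yaw = np.array([[ np.cos(h), 0, np.sin(h)],
                    [ 0,         1, 0        ],
                    [-np.sin(h), 0, np.cos(h)]])   # compass heading
    pitch = np.array([[1, 0,          0         ],
                      [0, np.cos(p), -np.sin(p)],
                      [0, np.sin(p),  np.cos(p)]]) # tilt sensor
    T = np.eye(4)
    T[:3, :3] = yaw @ pitch
    T[:3, 3] = position
    return T

# User 3.2 m east of the site origin, eye height 1.5 m,
# facing 90 degrees (east) and looking 10 degrees down.
print(outdoor_pose(np.array([3.2, 1.5, 0.0]), 90.0, -10.0))
```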

    An interest point based illumination condition matching approach to photometric registration within augmented reality worlds

    With recent and continued increases in computing power, and advances in the field of computer graphics, realistic augmented reality environments can now offer inexpensive and powerful solutions in a whole range of training, simulation and leisure applications. One key challenge to maintaining convincing augmentation, and therefore user immersion, is ensuring consistent illumination conditions between the virtual and real environments, so that objects appear to be lit by the same light sources. This research demonstrates how real-world lighting conditions can be determined from the two-dimensional view of the user. Virtual objects can then be illuminated, and virtual shadows cast, using these conditions. The new technique uses pairs of interest points from real objects and the shadows that they cast, viewed from a binocular perspective, to determine the position of the illuminant. The research initially focuses on single point light sources in order to show the potential of the technique, and investigates the relationships between the many parameters of the vision system. Optimal conditions have been discovered by mapping the results of experimentally varying parameters such as field of view, camera angle and pose, image resolution, aspect ratio and illuminant distance. The technique provides increased robustness where higher-resolution imagery is used; under optimal conditions it is possible to derive the position of a real-world light source with low average error. An investigation of the available literature reveals that other techniques can be inflexible, slow, or disruptive of scene realism. This technique is able to locate and track a moving illuminant within an unconstrained, dynamic world without the use of artificial calibration objects that would disrupt scene realism. It operates in real time, as the new algorithms are of low computational complexity; this allows high frame rates to be maintained within augmented reality applications, with illuminant updates occurring several times a second on an average to high-end desktop computer. Future work will investigate the automatic identification and selection of pairs of interest points and the exploration of global illumination conditions. The latter will include an analysis of more complex scenes and the consideration of multiple and varied light sources.
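
    The geometric core of the technique can be sketched as follows: each interest point and the shadow it casts, once triangulated from the binocular view, define a ray pointing from the shadow through the point toward the light, and the rays from several pairs are intersected in a least-squares sense. The synthetic example below illustrates this idea; it is an assumed reconstruction of the geometry, not the thesis's algorithm, and the coordinates are invented.

```python
# Minimal sketch: recover a point light from (object point, shadow point)
# pairs by least-squares intersection of the shadow->point->light rays.
import numpy as np

def light_from_pairs(points, shadows):
    """Solve sum_i (I - d_i d_i^T)(x - s_i) = 0 for the light position x."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in zip(points, shadows):
        d = (p - s) / np.linalg.norm(p - s)   # ray direction toward the light
        M = np.eye(3) - np.outer(d, d)        # projects off the ray direction
        A += M
        b += M @ s
    return np.linalg.solve(A, b)

# Ground-truth light, two object points, and the shadows they would cast
# (each shadow lies on the line from the light through its object point).
light = np.array([2.0, 5.0, 3.0])
pts = [np.array([1.0, 1.0, 1.0]), np.array([3.0, 2.0, 1.5])]
shd = [p + 0.5 * (p - light) for p in pts]
print(light_from_pairs(pts, shd))             # recovers ~[2, 5, 3]
```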

    Keyframe Tagging: Unambiguous Content Delivery for Augmented Reality Environments

    Context: When considering the use of Augmented Reality to provide navigation cues in a completely unknown environment, the content must be delivered into the environment with a repeatable level of accuracy, such that the navigation cues can be understood and interpreted correctly by the user. Aims: This thesis investigates whether a still-image-based reconstruction of an Augmented Reality environment can be used to develop a content delivery system that provides a repeatable level of accuracy for content placement. It also investigates whether manipulating the properties of a Spatial Marker object is sufficient to reduce object selection ambiguity in an Augmented Reality environment. Methods: A series of experiments tested these aims separately. Participants used the developed Keyframe Tagging tool to introduce virtual navigation markers into an Augmented Reality environment, and also identified objects within an Augmented Reality environment signposted using different Virtual Spatial Markers. This tested the accuracy and repeatability of the approach’s content placement, as well as participants’ ability to reliably interpret virtual signposts within an Augmented Reality environment. Finally, the Keyframe Tagging tool was tested by an expert user against a pre-existing solution to evaluate the time savings offered by the approach against its overall accuracy of content placement. Results: The average accuracy score for content placement across 20 participants was 64%, categorised as “Good” when compared with an expert benchmark result; no tags were considered “incorrect” and only 8 of 200 tags were considered to have “Poor” accuracy, supporting the Keyframe Tagging approach. In terms of object identification from virtual cues, some of the predicted cognitive links between virtual marker properties and target objects did not surface, though participants reliably identified the correct objects across several trials. Conclusions: This thesis demonstrates that accurate content delivery can be achieved through a still-image-based reconstruction of an Augmented Reality environment. Using the Keyframe Tagging approach, content can be placed quickly and with a sufficient level of accuracy to demonstrate its utility in the scenarios outlined within this thesis. Some observable limitations of the approach are discussed alongside proposals for further work in this area.
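
    As an illustrative sketch of how a tag placed on a still keyframe could yield a repeatable 3D anchor, the snippet below back-projects a picked pixel through a stored camera pose and intrinsics and drops it at a known depth. The intrinsics, pose conventions and numbers are hypothetical, not taken from the thesis.

```python
# Minimal sketch: turn a 2D tag on a keyframe into a 3D world-space
# marker, assuming the keyframe's camera pose and intrinsics are stored.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # assumed keyframe intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def tag_to_world(u, v, depth, cam_to_world):
    """Back-project keyframe pixel (u, v) at `depth` metres into world space."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-space ray
    point_cam = ray_cam * depth                         # fix the point's depth
    return (cam_to_world @ np.append(point_cam, 1.0))[:3]

# Keyframe camera at the world origin looking down +Z (identity pose).
pose = np.eye(4)
print(tag_to_world(400.0, 260.0, 5.0, pose))   # marker ~5 m in front
```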

    Adaptive Vision Based Scene Registration for Outdoor Augmented Reality

    Augmented Reality (AR) involves adding virtual content to real scenes, viewed using a head-mounted display or another display type. In order to place content into the user’s view of a scene, the user’s position and orientation relative to the scene, commonly referred to as their pose, must be determined accurately. This allows objects to be placed in the correct positions and to remain there when the user moves or the scene changes. It is achieved by tracking the user in relation to their environment using a variety of technologies. One technology that has proven to provide accurate results is computer vision, in which a computer analyses images and achieves an understanding of them. This may involve locating objects such as faces in the images or, in the case of AR, determining the pose of the user. One of the ultimate goals of AR systems is to be capable of operating under any condition: a computer vision system must be robust across a range of scene types and under unpredictable environmental conditions due to variable illumination and weather. The majority of the existing literature tests algorithms under the assumption of ideal or 'normal' imaging conditions; to ensure robustness under as many circumstances as possible, it is also important to evaluate systems under adverse conditions. This thesis analyses the effects that variable illumination has on computer vision algorithms. To enable this analysis, test data are required that isolate weather and illumination effects, without other factors such as changes in viewpoint that would bias the results. A new dataset is presented which also allows controlled viewpoint differences in the presence of weather and illumination changes. This is achieved by capturing video from a camera undergoing a repeatable motion sequence; ground truth data are stored per frame, allowing images from the same position under differing environmental conditions to be easily extracted from the videos. An in-depth analysis of six detection algorithms and five matching techniques demonstrates the impact that non-uniform illumination changes can have on vision algorithms. Specifically, shadows can degrade performance, reduce confidence in the system, decrease reliability, or even completely prevent successful operation. An investigation into approaches to improve performance yields techniques that can help reduce the impact of shadows. A novel algorithm is presented that merges reference data captured at different times, resulting in reference data with minimal shadow effects. This can significantly improve performance and reliability when operating on images containing shadow effects. These advances improve the robustness of computer vision systems and extend the range of conditions in which they can operate, increasing the usefulness of the algorithms and the AR systems that employ them.
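
    A deliberately simplified stand-in for the merging idea is sketched below: since cast shadows darken pixels, a per-pixel maximum over aligned reference images of the same view captured at different times approximates a shadow-free reference. The thesis's actual algorithm is more involved than this; the images here are synthetic and the shadow model is crude.

```python
# Toy sketch: merge aligned reference captures taken at different times
# so that shadowed pixels in one capture are replaced by lit pixels from
# another. Per-pixel max works here because shadows only darken pixels.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.4, 0.9, size=(480, 640))   # unshadowed reference view

captures = []
for _ in range(3):
    img = scene.copy()
    x = int(rng.integers(0, 400))
    img[:, x:x + 200] *= 0.45                    # a shadow band that moves over time
    captures.append(img)

merged = np.max(np.stack(captures), axis=0)      # per-pixel max across captures
print(float(np.abs(merged - scene).max()))       # ~0 wherever any capture was lit
```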

    Tangible user interfaces for augmented reality

    Master's thesis (Master of Engineering).