
    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Augmented reality technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining an awareness of one's real-world surroundings. This article presents a systematic review of the research literature on AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: 1) the current state of the art of AR navigational assistance technologies, 2) user experiences with these technologies, 3) the effect of AR on human wayfinding performance, and 4) the impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development compared with traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed: some studies suggest that AR does little to improve outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential for enhancing wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness.

    Appearance-based indoor localization: a comparison of patch descriptor performance

    Vision is one of the most important of the senses, and humans use it extensively during navigation. We evaluated different types of image and video-frame descriptors that could be used to identify distinctive visual landmarks for localizing a person based on what is seen by a camera they carry. To do this, we created a database containing over 3 km of video sequences with ground truth in the form of distance travelled along different corridors. Using this database, the accuracy of localization can be evaluated both in terms of identifying which route a user is on and in terms of estimating position along that route. For each type of descriptor, we also tested different techniques to encode visual structure and to search between journeys to estimate a user's position. The techniques include single-frame descriptors, those using sequences of frames, and both colour and achromatic descriptors. We found that single-frame indexing worked better within this particular dataset. This might be because the motion of the person holding the camera makes the video too dependent on the individual steps and motions of one particular journey. Our results suggest that appearance-based information could be an additional source of navigational data indoors, augmenting that provided by, say, received signal strength indicators (RSSIs). Such visual information could be collected by crowdsourcing low-resolution video feeds, allowing journeys made by different users to be associated with each other and location to be inferred without requiring explicit mapping. This offers a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms. (Accepted for publication in Pattern Recognition Letters.)
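    The single-frame indexing idea above can be sketched in a few lines of Python. Note that the tiny-thumbnail descriptor and all names here are hypothetical simplifications chosen for illustration, not the descriptors actually evaluated in the paper: each database frame carries a (route, distance) label matching the paper's ground truth, and a query is localized to the label of its nearest-neighbour descriptor.

    ```python
    import numpy as np

    def thumbnail_descriptor(frame, size=(8, 8)):
        """Downsample a grayscale frame to a tiny thumbnail and normalise it.
        A deliberately simple stand-in for a patch descriptor."""
        h, w = frame.shape
        ys = np.linspace(0, h, size[0] + 1).astype(int)
        xs = np.linspace(0, w, size[1] + 1).astype(int)
        thumb = np.array([[frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                           for j in range(size[1])] for i in range(size[0])])
        thumb -= thumb.mean()          # discard global brightness
        norm = np.linalg.norm(thumb)   # discard global contrast
        return (thumb / norm).ravel() if norm > 0 else thumb.ravel()

    def localise(query_frame, database):
        """Single-frame indexing: return the (route, distance) label of the
        database frame whose descriptor is closest to the query's."""
        q = thumbnail_descriptor(query_frame)
        best = min(database, key=lambda entry: np.linalg.norm(q - entry[0]))
        return best[1]
    ```

    A real system would of course use a far larger database and an approximate nearest-neighbour index rather than a linear scan.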

    Improving the acquisition of spatial knowledge when navigating with an augmented reality navigation system

    Navigation is a process humans use whenever they move, ranging from complex tasks, like finding our way in a new city, to easier ones, like getting a cup of coffee. Daniel Montello (2005, p. 2) defines navigation as “the coordinated and goal-directed movement through the environment by organisms or intelligent machines”. When navigating in an unknown environment, humans often rely on assisted wayfinding with some sort of navigation aid. In recent years, the preferred navigation aid has shifted from printed maps to electronic and thus dynamic navigation systems on our smartphones. Recently, mixed reality and virtual reality approaches such as augmented reality (AR) have become an interesting alternative to classical smartphone navigation, even though the first attempts at AR were made as early as the middle of the last century. The major advantages of AR navigation systems are that localisation and, above all, tracking are handled by the system, and that navigation instructions are laid directly into the environment. The main drawback, on the other hand, is that the more tasks the system performs, the less spatial learning the human achieves. The goal of this thesis is to examine ways to improve spatial learning during assisted wayfinding. To test these ways, an experiment was set up in which participants were guided through a test environment by an AR system. After completing the route, the participants filled out a questionnaire about landmarks and intersections they had encountered on the route. The concrete goals of the thesis are to find out (1) whether giving more spatial information improves spatial learning, (2) whether the placement of navigation instructions has an influence (positive or negative) on spatial learning, (3) whether the type of landmark influences how well it is recalled, and (4) how well landmark and route knowledge are built after having completed the route once.
The results of the experiment suggest that giving background information about certain landmarks does not lead to a significantly different performance in spatial learning (p = .691). The results also show that it makes no difference whether a landmark is highlighted by a navigation instruction or not (p = .330). The analyses of landmark and route knowledge show that the participants built less landmark knowledge than route knowledge after the run, recalling approximately 50 % of the landmarks correctly but 67 % of the intersections. Notably, the difference between the types of landmarks is significant (p = .018): 3D objects are recalled much better than other landmarks. The influence of age on the acquisition of route knowledge is also significant (p = 6.14e-3) but unfortunately not very robust; as the age distribution is very unbalanced, these results must be interpreted with caution. Following the findings of this thesis, it is suggested to conduct a series of experiments with an eye tracker to learn more about how people visually attend to their surroundings when using AR as a wayfinding aid.

    Spatial knowledge acquisition for pedestrian navigation: A comparative study between smartphones and AR glasses

    Smartphone map-based pedestrian navigation is known to have a negative effect on the long-term acquisition of spatial knowledge and the memorisation of landmarks. Landmark-based navigation has been proposed as an approach that can overcome such limitations. In this work, we investigate how different interaction technologies, namely smartphones and augmented reality (AR) glasses, affect the acquisition of spatial knowledge when used to support landmark-based pedestrian navigation. We conducted a study involving 20 participants using smartphones or AR glasses for pedestrian navigation, and studied the effects of these systems on landmark memorisation and spatial knowledge acquisition over a period of time. Our results show statistically significant differences in spatial knowledge acquisition between the two technologies, with the AR glasses enabling better memorisation of landmarks and paths.

    Interaction in motion: designing truly mobile interaction

    The use of technology while being mobile now takes place in many areas of people’s lives in a wide range of scenarios; for example, users cycle, climb, run and even swim while interacting with devices. Conflict between locomotion and system use can reduce interaction performance and also the ability to move safely. We discuss the risks of such “interaction in motion”, which we argue make it desirable to design with locomotion in mind. To aid such design, we present a taxonomy and framework based on two key dimensions: the relation of the interaction task to the locomotion task, and the degree to which a locomotion activity inhibits the use of input and output interfaces. We accompany this with four strategies for interaction in motion. With this work, we ultimately aim to enhance our understanding of what being “mobile” actually means for interaction, and to help practitioners design truly mobile interactions.

    Robust localization with wearable sensors

    Measuring the physical movements of humans and understanding human behaviour is useful in a variety of areas and disciplines. Human inertial tracking is a method that can be leveraged for monitoring complex actions that emerge from interactions between human actors and their environment. An accurate estimation of motion trajectories can support new approaches to pedestrian navigation, emergency rescue, athlete management, and medicine. However, tracking with wearable inertial sensors has several problems that need to be overcome, such as the low accuracy of consumer-grade inertial measurement units (IMUs), the accumulation of error in long-term tracking, and the artefacts generated by less common movements. This thesis focusses on measuring human movements with wearable head-mounted sensors to accurately estimate the physical location of a person over time. The research consisted of (i) providing an overview of the current state of research on inertial tracking with wearable sensors, (ii) investigating the performance of new tracking algorithms that combine sensor fusion and data-driven machine learning, (iii) eliminating the effect of random head motion during tracking, (iv) creating robust long-term tracking systems with a Bayesian neural network and a sequential Monte Carlo method, and (v) verifying that the system can cope with changing modes of behaviour, defined as natural transitions from walking to running and vice versa. This research introduces a new system for inertial tracking with head-mounted sensors (which can be placed in, e.g., helmets, caps, or glasses). This technology can be used for long-term positional tracking to explore complex behaviours.
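    The sequential Monte Carlo component mentioned in point (iv) can be illustrated with a deliberately simple particle filter for position along a one-dimensional corridor. Everything here (the stride model, the landmark observations, the noise levels, the function names) is a hypothetical toy for illustration, not the thesis's actual head-mounted tracking system:

    ```python
    import math
    import random

    def smc_track(n_steps, observations, n_particles=500, seed=1):
        """Toy sequential Monte Carlo tracker for 1-D corridor position.

        Particles advance by a noisy stride each step; when a landmark
        observation arrives (mapping step index -> observed position in
        metres), particles are re-weighted by a Gaussian likelihood and
        resampled, which discards accumulated drift."""
        rng = random.Random(seed)
        particles = [0.0] * n_particles
        for step in range(n_steps):
            # Motion model: one stride of roughly 0.7 m with Gaussian noise.
            particles = [p + rng.gauss(0.7, 0.1) for p in particles]
            if step in observations:
                obs = observations[step]
                # Likelihood: favour particles near the observed landmark.
                weights = [math.exp(-((p - obs) ** 2) / (2 * 0.5 ** 2))
                           for p in particles]
                particles = rng.choices(particles, weights=weights,
                                        k=n_particles)
        # Position estimate: the particle mean.
        return sum(particles) / n_particles
    ```

    The same propagate/weight/resample loop generalises to 2-D or 3-D positions, richer motion models learned from data, and fused observation sources.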

    Exploring the limits of PDR-based indoor localisation systems under realistic conditions

    Pedestrian Dead Reckoning (PDR) plays an important role in many (hybrid) indoor positioning systems, since it enables frequent, granular position updates. However, the accumulation of errors creates a need for external error correction. In this work, we explore the limits of PDR under realistic conditions, using our graph-based system as an example. For this purpose, we collect sensor data while the user performs an actual navigation task using a navigation application on a smartphone. To assess localisation performance, we introduce a task-oriented metric based on the idea of landmark navigation: instead of specifying the error metrically, we measure the ability to determine the correct segment of an indoor route, which in turn enables the navigation system to give correct instructions. We conduct offline simulations with the collected data in order to identify situations where position tracking fails, and explore different options for mitigating these issues, e.g. through the detection of special features along the user’s path or through additional sensors. Our results show that the magnetic compass is often unreliable under realistic conditions and that resetting the position at strategically chosen decision points significantly improves positioning accuracy.
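    The core PDR update and the decision-point reset idea can be sketched in a few lines. This is a hypothetical minimal model (a fixed step length and headings given directly, where a real system derives both from accelerometer peaks and an often unreliable magnetic compass), not the paper's graph-based system:

    ```python
    import math

    def pdr_update(position, heading_rad, step_length=0.7):
        """Advance an (x, y) position by one detected step along the
        current heading."""
        x, y = position
        return (x + step_length * math.cos(heading_rad),
                y + step_length * math.sin(heading_rad))

    def track(step_headings, resets=None):
        """Dead-reckon a sequence of per-step headings (radians). `resets`
        maps a step index to a known (x, y) position, e.g. a recognised
        decision point, at which accumulated drift is discarded."""
        resets = resets or {}
        pos = (0.0, 0.0)
        trajectory = [pos]
        for i, heading in enumerate(step_headings):
            pos = resets.get(i, pdr_update(pos, heading))
            trajectory.append(pos)
        return trajectory
    ```

    Because each update compounds the previous one, a small heading bias grows into unbounded position error, which is exactly why the external resets matter.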

    A dialogue based mobile virtual assistant for tourists: The SpaceBook Project

    Ubiquitous mobile computing offers innovative approaches to the delivery of information that can facilitate free roaming of the city, informing and guiding the tourist as the city unfolds before them. However, making frequent visual reference to mobile devices can be distracting, the user having to interact via a small screen, thus disrupting the explorative experience. This research reports on an EU-funded project, SpaceBook, that explored the utility of a hands-free, eyes-free virtual tour guide that could answer questions through a spoken dialogue user interface and notify the user of interesting features in view while guiding the tourist to various destinations. Visibility modelling was carried out in real time based on a LiDAR-sourced digital surface model, fused with a variety of map and crowd-sourced datasets (e.g. Ordnance Survey, OpenStreetMap, Flickr, Foursquare) to establish the most interesting landmarks visible from the user's location at any given moment. A number of variations of the SpaceBook system were trialled in Edinburgh (Scotland). The research highlighted the pleasure derived from this novel form of interaction and revealed the complexity of prioritising route guidance instructions alongside the identification, description and embellishment of landmark information, there being a delicate balance between the level of information ‘pushed’ to the user and the user's requests for further information. Among a number of challenges were issues regarding the fidelity of spatial data and positioning information required for pedestrian-based systems, the pedestrian having much greater freedom of movement than vehicles.
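    The visibility modelling described above can be illustrated with a simple line-of-sight test over a gridded digital surface model. This sketch (uniform grid, sampled sight line, fixed eye height; all names hypothetical) is a simplification of the real-time LiDAR-based modelling used in SpaceBook:

    ```python
    def visible(dsm, observer, target, eye_height=1.6):
        """Line-of-sight test between two cells of a digital surface model
        (a 2-D list of surface heights). Samples points along the ray and
        checks whether any intermediate surface rises above the straight
        line from the observer's eye to the top of the target cell."""
        (r0, c0), (r1, c1) = observer, target
        h0 = dsm[r0][c0] + eye_height   # observer's eye height
        h1 = dsm[r1][c1]                # top of the target surface
        n = max(abs(r1 - r0), abs(c1 - c0))
        for i in range(1, n):
            t = i / n
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            sight_h = h0 + t * (h1 - h0)
            if dsm[r][c] > sight_h:     # surface blocks the sight line
                return False
        return True
    ```

    Repeating this test from the user's position against candidate landmark cells yields the set of currently visible landmarks, which can then be ranked for description to the user.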