2,729 research outputs found

    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Full text link
    Augmented reality technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining an awareness of one's real-world surroundings. This article presents a systematic review of research literature related to AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: 1) current state-of-the-art of AR navigational assistance technologies, 2) user experiences with these technologies, 3) the effect of AR on human wayfinding performance, and 4) impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development, in contrast to traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed. Some studies suggest little impact of AR on improving outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential in enhancing wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness. Comment: 52 pages

    Experimental Evaluation of Indoor Navigation Devices

    Get PDF
    The final, definitive version of this paper has been published in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59/1, December 2016, published by SAGE Publishing. All rights reserved. Augmented reality (AR) interfaces for indoor navigation on handheld mobile devices seem to greatly enhance directional assistance and user engagement, but it is sometimes challenging for users to hold the device at a specific position and orientation during navigation. Previous studies have not adequately explored wearable devices in this context. In the current study, we developed a prototype AR indoor navigation application in order to evaluate and compare handheld devices and wearable devices such as Google Glass, in terms of performance, workload, and perceived usability. The results showed that although the wearable device was perceived to have better accuracy, its overall navigation performance and workload were similar to those of a handheld device. We also found that digital navigation aids were better than paper maps in terms of shorter task completion time and lower workload, but digital navigation aids also resulted in worse route/map retention. NSERC Discovery Grant (RGPIN-2015-04134)

    Integration of Motion-Based Localization in the EduPARK Mobile Application

    Get PDF
    More and more, mobile applications require precise localization solutions in a variety of environments. Although GPS is widely used as a localization solution, it may present accuracy problems in special conditions, such as unfavorable weather or spaces with multiple obstructions such as public parks. For these scenarios, alternative solutions to GPS are of extreme relevance and have been widely studied in recent years. This dissertation studies the case of the EduPARK application, an augmented reality application deployed in the Infante D. Pedro park in Aveiro. Due to the poor accuracy of GPS in this park, the implementation of positioning and marker-less augmented reality functionalities presents difficulties. Existing relevant systems are analyzed, and an architecture based on pedestrian dead reckoning is proposed. The corresponding implementation is presented, which consists of a positioning solution using the sensors available in smartphones: a step detection algorithm, a distance traveled estimator, an orientation estimator, and a position estimator. For the validation of this solution, functionalities were implemented in the EduPARK application for testing purposes and usability tests were performed. The results obtained show that the proposed solution can be an alternative to provide accurate positioning within the Infante D. Pedro park, thus enabling the implementation of geocaching and marker-less augmented reality functionalities. EduPARK is a project funded by FEDER funds through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 and by national funds through FCT - Fundação para a Ciência e a Tecnologia under project POCI-01-0145-FEDER-016542. MSc in Informatics Engineering
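    The pedestrian dead-reckoning loop the abstract describes (step detection, stride length, heading, position update) can be sketched roughly as follows. This is a minimal illustration with assumed names, an assumed fixed stride length, and a naive threshold-based step detector — not the dissertation's actual implementation:

```python
import math

def detect_steps(accel_magnitudes, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold (m/s^2). A naive peak detector; a real system would
    band-pass filter the signal and enforce a minimum step interval."""
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a < threshold:
            above = False
    return steps

def pdr_update(position, heading_rad, step_length=0.7):
    """Advance a 2D position estimate by one detected step.
    step_length (0.7 m) is an assumed average stride; real systems
    estimate it per user, e.g. from accelerometer amplitude."""
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# Example: three detected steps heading due east (heading = 0 rad)
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_update(pos, heading_rad=0.0)
```

In practice the heading would come from fusing the smartphone's compass and gyroscope, which is where most of the drift in such systems originates.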

    Augmented Reality and Gamification in Heritage Museums

    Get PDF

    LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes

    Full text link
    Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence a key requirement for employing these methods in practice is to have a large set of labeled data for your specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a 3D dense reconstruction. We label the 3D reconstruction using a human assisted ICP-fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions related to how much training data is required, and of what quality the data must be, to achieve high performance from a DNN architecture.
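    The reprojection step — transferring labels from the fused 3D reconstruction back to each RGBD frame — can be sketched as a pinhole-camera projection. This is a minimal illustration with assumed names, ignoring occlusion and depth buffering, not LabelFusion's actual code:

```python
import numpy as np

def project_labels(points_3d, labels, K, T_world_to_cam, image_shape):
    """Reproject labeled 3D points into a per-pixel label image.

    points_3d: (N, 3) world coordinates; labels: (N,) integer object ids;
    K: 3x3 camera intrinsics; T_world_to_cam: 4x4 extrinsic transform.
    Returns an (H, W) integer label image (0 = background).
    """
    h, w = image_shape
    label_img = np.zeros((h, w), dtype=np.int32)
    # Transform world points into the camera frame (homogeneous coords)
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (T_world_to_cam @ pts_h.T)[:3]
    in_front = cam[2] > 0                      # keep points in front of camera
    # Perspective projection onto the image plane
    pix = K @ cam
    u = (pix[0] / pix[2]).round().astype(int)
    v = (pix[1] / pix[2]).round().astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label_img[v[valid], u[valid]] = labels[valid]
    return label_img
```

Running this per frame, with the camera pose recovered from the dense reconstruction, is what lets one human labeling pass over the 3D scene yield labels for every image in the video.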

    Analyzing the Impact of Spatio-Temporal Sensor Resolution on Player Experience in Augmented Reality Games

    Get PDF
    Along with automating everyday tasks of human life, smartphones have become one of the most popular devices to play video games on due to their interactivity. Smartphones are embedded with various sensors, such as motion sensors or location sensors, which make the device able to adopt new interaction techniques that enhance usability. However, despite their mobility and embedded sensor capacity, smartphones are limited in processing power and display area compared to desktop computers or consoles. When it comes to evaluating Player Experience (PX), players might not have as compelling an experience because the rich graphics environments that a desktop computer can provide are absent on a smartphone. A plausible alternative in this regard is substituting the virtual game world with a real-world game board, perceived through the device camera by rendering the digital artifacts over the camera view. This technology is widely known as Augmented Reality (AR). Smartphone sensors (e.g. GPS, accelerometer, gyroscope, compass) have enhanced the capability for deploying Augmented Reality technology, and AR has been applied to a large number of smartphone games, including shooters, casual games, and puzzles. Because AR play environments are viewed through the camera, rendering the digital artifacts consistently and accurately is crucial: if the digital characters are to move with respect to the sensed orientation, then the accelerometer and gyroscope need to provide sufficiently accurate and precise readings to make the game playable. In particular, determining the pose of the camera in space is vital, as the appropriate angle from which to view the rendered digital characters is determined by the pose of the camera. This defines how well the players will be able to interact with the digital game characters.
Depending on the Quality of Service (QoS) of these sensors, the Player Experience (PX) may vary, as the rendering of digital characters is affected by noisy sensors causing a loss of registration. Confronting such problems while developing AR games is difficult in general, as it requires creating a wide variety of game types, narratives, and input modalities, as well as user testing. Moreover, current AR game developers do not have any specific guidelines for developing AR games, and concrete guidelines outlining the tradeoffs between QoS and PX for different genres and interaction techniques are required. My dissertation provides a complete view (a taxonomy) of the spatio-temporal sensor resolution dependency of existing AR games. Four user experiments have been conducted and one experiment is proposed to validate the taxonomy and demonstrate the differential impact of sensor noise on gameplay across different genres of AR games and different aspects of PX. This analysis is performed in the context of a novel instrumentation technology, which allows the controlled manipulation of QoS on position and orientation sensors. The experimental outcomes demonstrate how the QoS of input sensor noise impacts PX differently while playing AR games of different genres; the key elements creating this differential impact are the input modality, narrative, and game mechanics. Finally, concrete guidelines are derived for regulating sensor QoS as a complete set of instructions for developing different genres of AR games.
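    The controlled QoS manipulation the abstract describes — degrading orientation readings to study their effect on PX — could be instrumented along the lines below. This is an illustrative sketch with assumed names and parameters, not the dissertation's actual tool: Gaussian noise stands in for reduced spatial resolution, and a frame-delay buffer stands in for reduced temporal resolution:

```python
import random

def degrade_orientation(yaw_pitch_roll, noise_std_deg, latency_buffer, delay_frames):
    """Simulate reduced sensor QoS on an orientation reading.

    noise_std_deg: std dev (degrees) of Gaussian noise added per axis,
    modelling degraded spatial resolution.
    latency_buffer / delay_frames: FIFO of past readings, modelling
    degraded temporal resolution by returning a stale sample.
    """
    noisy = tuple(a + random.gauss(0.0, noise_std_deg) for a in yaw_pitch_roll)
    latency_buffer.append(noisy)
    if len(latency_buffer) > delay_frames:
        return latency_buffer.pop(0)   # release the oldest (delayed) sample
    return latency_buffer[0]           # buffer still filling: repeat first
```

Interposing such a shim between the raw sensor stream and the game engine lets an experimenter vary noise and latency independently while holding the game itself constant.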

    NavMarkAR: A Landmark-based Augmented Reality (AR) Wayfinding System for Enhancing Spatial Learning of Older Adults

    Full text link
    Wayfinding in complex indoor environments is often challenging for older adults due to declines in navigational and spatial-cognition abilities. This paper introduces NavMarkAR, an augmented reality navigation system designed for smart-glasses to provide landmark-based guidance, aiming to enhance older adults' spatial navigation skills. This work addresses a significant gap in design research, with limited prior studies evaluating cognitive impacts of AR navigation systems. An initial usability test involved 6 participants, leading to prototype refinements, followed by a comprehensive study with 32 participants in a university setting. Results indicate improved wayfinding efficiency and cognitive map accuracy when using NavMarkAR. Future research will explore long-term cognitive skill retention with such navigational aids. Comment: 24 pages

    Using Virtual Reality to increase technical performance during rowing workouts

    Get PDF
    Technology is advancing rapidly in virtual reality (VR) and in sensors that gather feedback from our body and the environment we are interacting in. Combining the two technologies gives us the opportunity to create personalized and reactive immersive environments. These environments can be used, for example, for training in dangerous situations (e.g. fire, crashes, etc.), or to improve skills with less distraction than regular natural environments would have. The pilot study described in this thesis puts an athlete who is rowing on a stationary rowing machine into a virtual environment. The VR system takes movement data from several sensors of the ergometer and displays it in VR. In addition, metrics on technique are derived from the sensor data and physiological data. All this is used to investigate if, and to what extent, VR may improve the technical skills of the athlete during the complex sport of rowing. Furthermore, athletes give subjective feedback about their experience, comparing a standard rowing workout with a workout using VR. First results indicate better performance and an enhanced experience for the athlete.

    Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments

    Get PDF
    Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision, Bluetooth low energy (BLE), and other techniques, to estimate the position of a user in indoor areas. Computer-vision based systems use several techniques, including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons attached in the indoor areas as the source of the radio frequency signal to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision based systems and a BLE-based system. The first system, called CamNav, is a computer-vision based system that uses a trained deep learning model to recognize locations; the second system, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted while using the three navigation systems. Results: The results obtained from the navigation experiment and the feedback from blindfolded users show that the QRNav and CamNav systems are more efficient than the BLE-based system in terms of accuracy and usability. The error observed in the BLE-based application is more than 30% compared to the computer-vision based systems, CamNav and QRNav. Conclusions: The developed navigation systems are able to provide reliable assistance for the participants during real-time experiments. Some of the participants took minimal external assistance while moving through the junctions in the corridor areas. Computer vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. © 2019 The Author(s).
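    For context on why BLE positioning accumulates error: BLE systems typically convert received signal strength (RSSI) to a beacon distance with a log-distance path-loss model, roughly as below. The parameter values are illustrative assumptions; indoor multipath makes RSSI very noisy, which is one plausible source of the higher error the study reports for the BLE-based system:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate beacon distance (metres) from RSSI via the
    log-distance path-loss model.

    tx_power_dbm: calibrated RSSI measured at 1 m from the beacon
    (an assumed typical value). path_loss_exp: ~2 in free space,
    usually higher indoors due to walls and multipath.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With distances to three or more beacons, the user's position is then estimated by trilateration; errors in each distance estimate propagate directly into the position fix.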