
    Development and usability analysis of a mixed reality GPS navigator application for the microsoft hololens

    The present work aims to perform a comparative usability analysis between two Human-Computer Interaction (HCI) systems for global geolocation (GPS) navigators. The intent is to compare the conventional use of a navigation application on a mobile device, such as a smartphone attached to the dashboard of a vehicle, with an implementation in Mixed Reality (MR) powered by the Head-Mounted Display (HMD) Microsoft HoloLens. By connecting the MR device to a local network routed by an ordinary cellular phone, which is connected to a mobile data network, it is possible to ubiquitously acquire the phone’s geolocation data, its magnetometer readings, and a route graph from a navigation Application Programming Interface (API) covering the path from the current location to a destination entered by the user. A series of three-dimensional holograms is then created at runtime, geolocated and placed around the user, guiding them along a path indicated on the floor that follows the surrounding streets leading to the desired destination. In addition, arrows are projected along the way at each crucial point of the path where a maneuver must be performed, e.g., turning right or taking an exit at a roundabout. In a user experiment, performance and usability were assessed. Results show that users found the MR solution to offer higher visibility of both oncoming traffic and the suggested route compared with the conventional interface, while demanding less attention. EEG readings for most participants also revealed a significantly higher focus demand for the handheld device. Additionally, our system was rated easy to learn and use, almost on par with the widely known and heavily used application it was tested against.
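
    A minimal sketch of the placement step described above, assuming an equirectangular approximation and made-up function names (this is not the authors' code): it converts a route waypoint's latitude/longitude, together with the heading derived from the phone's magnetometer, into a local offset at which a guidance hologram could be spawned around the user.

        import math

        EARTH_RADIUS_M = 6371000.0

        def waypoint_to_local(user_lat, user_lon, heading_deg, wp_lat, wp_lon):
            # Return (right, forward) offsets in meters relative to the user,
            # with "forward" aligned to the compass heading from the phone.
            # Equirectangular approximation: adequate for nearby route segments.
            d_lat = math.radians(wp_lat - user_lat)
            d_lon = math.radians(wp_lon - user_lon)
            north = d_lat * EARTH_RADIUS_M
            east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(user_lat))
            # Rotate the east/north offsets into the user's heading frame.
            h = math.radians(heading_deg)
            forward = north * math.cos(h) + east * math.sin(h)
            right = east * math.cos(h) - north * math.sin(h)
            return right, forward

        # Example: a waypoint roughly 100 m ahead when facing north.
        print(waypoint_to_local(51.0, 7.0, 0.0, 51.0009, 7.0))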

    LandMarkAR: An application to study virtual route instructions and the design of 3D landmarks for indoor pedestrian navigation with a mixed reality head-mounted display

    Mixed Reality (MR) interfaces on head-mounted displays (HMDs) have the potential to replace screen-based interfaces as the primary interface to the digital world. They potentially offer a more immersive and less distracting experience compared to mobile phones, allowing users to stay focused on their environment and main goals while accessing digital information. Due to their ability to gracefully embed virtual information in the environment, MR HMDs could potentially alleviate some of the issues plaguing users of mobile pedestrian navigation systems, such as distraction, diminished route recall, and reduced spatial knowledge acquisition. However, the complexity of MR technology presents significant challenges, particularly for researchers with limited programming knowledge. This thesis presents “LandMarkAR” to address those challenges. “LandMarkAR” is a HoloLens application that allows researchers to create augmented territories to study human navigation with MR interfaces, even if they have little programming knowledge. “LandMarkAR” was designed using different methods from human-centered design (HCD), such as design thinking and think-aloud testing, and was developed with Unity and the Mixed Reality Toolkit (MRTK). With “LandMarkAR”, researchers can place and manipulate 3D objects as holograms in real-time, facilitating indoor navigation experiments using 3D objects that serve as turn-by-turn instructions, highlights of physical landmarks, or other information researchers may come up with. Researchers with varying technical expertise will be able to use “LandMarkAR” for MR navigation studies. They can opt to utilize the easy-to-use User Interface (UI) on the HoloLens or add custom functionality to the application directly in Unity. “LandMarkAR” empowers researchers to explore the full potential of MR interfaces in human navigation and create meaningful insights for their studies
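
    As an illustration only (field names are assumptions, not LandMarkAR's actual data model), a placement created through such a UI could be persisted as a small record per hologram, so that an experiment layout can be saved and reloaded on the HoloLens:

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class LandmarkPlacement:
            anchor_id: str        # spatial anchor the hologram is attached to
            model: str            # which 3D asset to instantiate, e.g. an arrow
            position_m: tuple     # local offset from the anchor, in meters
            rotation_deg: tuple   # Euler angles
            scale: float
            role: str             # "turn_instruction", "landmark_highlight", ...

        layout = [LandmarkPlacement("anchor-01", "arrow", (0.0, 0.0, 2.0),
                                    (0.0, 90.0, 0.0), 1.0, "turn_instruction")]
        print(json.dumps([asdict(p) for p in layout], indent=2))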

    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Augmented reality technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining an awareness of one's real-world surroundings. This article presents a systematic review of research literature related to AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: 1) the current state of the art of AR navigational assistance technologies, 2) user experiences with these technologies, 3) the effect of AR on human wayfinding performance, and 4) the impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development, in contrast to traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed. Some studies suggest little impact of AR on improving outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential for enhancing wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness.

    Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing using immersive devices

    In this paper, a mixed reality application is introduced that runs on the Microsoft HoloLens and has been designed to provide information on a city scale. The application was developed to provide information about historical buildings, thus supporting cultural outdoor tourism. The huge amount of multimedia data stored in the archives of the Italian public broadcaster RAI is used to enrich the user experience. A remote image and video analysis application receives an image stream from the user and identifies known objects framed in the images. The user can select the object (monument/building/artwork) for which augmented contents are to be displayed (video, text, audio) and can interact with these contents through a set of defined gestures. Moreover, if the object of interest is detected and tracked by the mixed reality application, 3D content can also be overlaid and aligned with the real world.
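
    A minimal sketch of the client/server loop described above (not the paper's implementation; the endpoint URLs and JSON fields are placeholders): frames from the headset are sent to a remote recognition service, and a recognized monument identifier is used to fetch the associated archive media.

        import time
        import requests

        RECOGNIZE_URL = "http://example.org/recognize"    # hypothetical service
        CONTENT_URL = "http://example.org/content/{oid}"  # hypothetical archive API

        def recognition_loop(capture_frame, show_content, interval_s=1.0):
            while True:
                jpeg_bytes = capture_frame()  # frame grabbed from the HMD camera
                resp = requests.post(RECOGNIZE_URL, data=jpeg_bytes,
                                     headers={"Content-Type": "image/jpeg"})
                result = resp.json()
                if result.get("object_id"):
                    # Fetch the video/text/audio linked to the recognized
                    # monument and hand it to the MR layer for display.
                    media = requests.get(
                        CONTENT_URL.format(oid=result["object_id"])).json()
                    show_content(media)
                time.sleep(interval_s)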

    The usability of an augmented reality map application on the Microsoft Hololens 2

    Abstract. Augmented reality (AR) has seen rapid progress in recent years, especially from a consumer standpoint. Hardware, as well as software, is becoming better, cheaper, and more widely available. As the technology becomes more mainstream, we will see adaptations of many applications currently used on personal computers and smartphones. This thesis aims to explore one such adaptation further by developing and studying the usability and effectiveness of a map application running on one of the most modern AR headsets available to consumers, the Microsoft HoloLens 2. To develop the application, we chose the cross-platform game engine Unity. It allowed us to develop the application reliably and quickly, as its third-party packages provide plenty of ready-to-use assets and code; in addition, both group members had some previous experience with Unity. While planning the application, we studied many research papers to understand what makes a good AR application. With the application ready for testing, we recruited test subjects from among family members, who gave us feedback on the efficiency and usability of the system as a whole. The test subjects performed tasks inside the application but also had the opportunity to explore it as much as they liked. After each test, they filled out a questionnaire and participated in an interview, which were then analyzed further. From analyzing the questionnaire and interview answers, we were able to conclude several things. Firstly, the system in its current state provides no additional value in comparison to traditional browser- or mobile-based map applications. It is also inconvenient, hard to use, and unintuitive. Despite these shortcomings, the test subjects saw future potential in the system and found it useful and fun to use. The findings suggest that even if the application were developed further, the experience as a whole would still be lacking, as AR technology is not ready for mainstream adoption quite yet.

    Heads Up! Supporting Maritime Navigation using Augmented Reality

    Augmented Reality (AR) is a technology that shows potential for the improvement of maritime safety. Today, the ship bridge suffers from a lack of standardization and integration. Head-Mounted Displays (HMDs) may alleviate these challenges by showing information when relevant and enhancing operator mobility. The Microsoft HoloLens 2 (HL2) is such an HMD. Prior research shows the potential of HMDs in the maritime AR domain (Rowen et al., 2019). However, limited research has been conducted on the design of AR User Interfaces (UIs) for maritime applications leveraging HMDs. As a result, no framework exists to test new UI designs in the real world, which is necessary because many variables cannot be accurately modelled in a lab setting. This led to two research questions (RQs): what makes an effective head-mounted AR UI for maritime navigation? (RQ1); and how can the HL2 be used as a ship bridge system? (RQ2). A Research through Design (RtD) process is detailed in which a UI design and functional prototype were developed in collaboration with end-users. The prototype, named Sjør, implements the aforementioned interface, provides a framework for in-context UI testing, and can be viewed as the next step towards standardizing AR UIs for the maritime industry. The design and development process led to three contributions to the maritime AR domain. Firstly, a framework for the visualization of location-based data about points of interest on predefined canvases co-located in the real world was developed (Technology Readiness Level (TRL) 6), running on the HL2. This first contribution is defined in Section 4 and provides an answer to RQ2. Secondly, using this framework, an interface design (including interactions) was developed in collaboration with end-users and is proposed as an answer to RQ1. This process is described in Section 5. The third contribution is a research agenda which provides insights into how contemporary and future research can leverage the developed framework; Section 7 discloses this research agenda.
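
    As an illustration of the kind of location-based data such a framework has to derive (this sketch is not Sjør's code; names and constants are assumptions), the range and relative bearing of a point of interest can be computed from the ship's own position and heading before it is drawn on a canvas:

        import math

        def range_and_relative_bearing(ship_lat, ship_lon, heading_deg,
                                       poi_lat, poi_lon):
            # Return (distance_m, relative_bearing_deg) of the POI from the ship.
            r = 6371000.0  # mean Earth radius in meters
            lat1, lat2 = math.radians(ship_lat), math.radians(poi_lat)
            d_lat = lat2 - lat1
            d_lon = math.radians(poi_lon - ship_lon)
            # Haversine great-circle distance.
            a = (math.sin(d_lat / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin(d_lon / 2) ** 2)
            dist = 2 * r * math.asin(math.sqrt(a))
            # True bearing to the POI, then made relative to the ship's heading.
            y = math.sin(d_lon) * math.cos(lat2)
            x = (math.cos(lat1) * math.sin(lat2)
                 - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
            true_bearing = math.degrees(math.atan2(y, x)) % 360.0
            return dist, (true_bearing - heading_deg) % 360.0

        print(range_and_relative_bearing(60.39, 5.32, 45.0, 60.40, 5.34))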

    An augmented reality sign-reading assistant for users with reduced vision

    People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high-contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
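
    A rough desktop approximation of the pipeline described above (not the authors' HoloLens implementation; pytesseract and pyttsx3 merely stand in for whatever OCR and text-to-speech the device uses): detect text in a camera frame, keep its bounding box for the high-contrast overlay, and optionally speak it aloud.

        import pytesseract
        from PIL import Image

        def read_signs(frame_path, speak=False, min_conf=60.0):
            img = Image.open(frame_path)
            data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
            detections = []
            for i, word in enumerate(data["text"]):
                if word.strip() and float(data["conf"][i]) >= min_conf:
                    # The bounding box marks where high-contrast AR lettering
                    # would be rendered over the real sign.
                    box = (data["left"][i], data["top"][i],
                           data["width"][i], data["height"][i])
                    detections.append((word, box))
            if speak and detections:
                import pyttsx3
                engine = pyttsx3.init()
                engine.say(" ".join(w for w, _ in detections))
                engine.runAndWait()
            return detections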

    Exploring the use of smart glasses, gesture control, and environmental data in augmented reality games

    Abstract. In the last decade, augmented reality has become a popular trend. Big corporations like Microsoft, Facebook, and Google have started to invest in augmented reality because they see its potential, especially with the rise of consumer head-mounted displays such as Microsoft’s HoloLens and ODG’s R7. However, there is a gap in knowledge about interaction with such devices, since they are fairly new and the average consumer cannot yet afford them due to their relatively high prices. In this thesis, the Ghost Hunters game is described. The game is a mobile augmented reality pervasive game that uses ambient light data from the environment to charge the in-game “goggles”. The game has two different versions, one for smartphones and one for smart glasses. Ghost Hunters was implemented to explore the use of two different interaction methods, buttons and natural hand gestures, on both smartphones and smart glasses. In addition, the thesis sought to explore the use of ambient light in augmented reality games. First, the thesis defines the essential concepts related to games and augmented reality based on the literature and then describes the current state of the art of pervasive games and smart glasses. Second, both the design and the implementation of the Ghost Hunters game are described in detail. Afterwards, the three rounds of field trials that were conducted to investigate the suitability of the two interaction methods are described and discussed. The findings suggest that smart glasses are more immersive than smartphones in the context of pervasive AR games. Moreover, prior AR experience had a significant positive impact on the immersion of smart glasses users. Similarly, males were more immersed in the game than females. Hand gestures proved more usable than buttons on both devices. However, the interaction method did not affect game engagement at all, though it surprisingly did affect the way users perceived the UI with smart glasses: users who used the physical buttons were more likely to notice the UI elements than users who used hand gestures.
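
    A sketch of the light-charging mechanic described above, with made-up thresholds and rates (the thesis' actual tuning is not given here): the phone's ambient light sensor reading, in lux, drives the charge level of the in-game “goggles”.

        def update_goggle_charge(charge, lux, dt_s,
                                 full_charge_lux=400.0, max_rate_per_s=0.05):
            # Return the new charge in [0, 1] after dt_s seconds under `lux` light.
            rate = max_rate_per_s * min(lux / full_charge_lux, 1.0)
            return min(1.0, charge + rate * dt_s)

        # Example: ten seconds in bright daylight vs. ten seconds in a dim room.
        print(update_goggle_charge(0.2, 1000.0, 10.0))  # charges quickly
        print(update_goggle_charge(0.2, 50.0, 10.0))    # charges slowly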