21 research outputs found

    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Augmented reality technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining an awareness of one's real-world surroundings. This article presents a systematic review of research literature related to AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: 1) current state-of-the-art of AR navigational assistance technologies, 2) user experiences with these technologies, 3) the effect of AR on human wayfinding performance, and 4) impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development, in contrast to traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed. Some studies suggest little impact of AR on improving outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential in enhancing wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness.

    NavMarkAR: A Landmark-based Augmented Reality (AR) Wayfinding System for Enhancing Spatial Learning of Older Adults

    Wayfinding in complex indoor environments is often challenging for older adults due to declines in navigational and spatial-cognition abilities. This paper introduces NavMarkAR, an augmented reality navigation system designed for smart-glasses to provide landmark-based guidance, aiming to enhance older adults' spatial navigation skills. This work addresses a significant gap in design research, with limited prior studies evaluating cognitive impacts of AR navigation systems. An initial usability test involved 6 participants, leading to prototype refinements, followed by a comprehensive study with 32 participants in a university setting. Results indicate improved wayfinding efficiency and cognitive map accuracy when using NavMarkAR. Future research will explore long-term cognitive skill retention with such navigational aids.

    A HoloLens Application to Aid People who are Visually Impaired in Navigation Tasks

    Day-to-day activities such as navigation and reading can be particularly challenging for people with visual impairments. Reading text on signs may be especially difficult for people who are visually impaired because signs vary in color, contrast, and size. Indoors, signage may include office, classroom, restroom, and fire evacuation signs. Outdoors, it may include street signs, bus numbers, and store signs. Depending on the level of visual impairment, just identifying where signs exist can be a challenge. Using Microsoft's HoloLens, an augmented reality device, I designed and implemented the TextSpotting application, which helps those with low vision identify and read indoor signs so that they can navigate text-heavy environments. The application can provide both visual and auditory information. In addition to developing the application, I conducted a user study to test its effectiveness. Participants were asked to find a room in an unfamiliar hallway. Those who used the TextSpotting application completed the task more slowly, yet reported higher levels of ease, comfort, and confidence, indicating both the application's limitations and its potential as an effective means of navigating unknown environments via signage.
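
    The core loop of such a sign-reading aid, a camera frame in, recognized text out, rendered both visually and audibly, can be sketched as follows. This is a minimal illustration only, not the TextSpotting implementation: it assumes the pytesseract OCR wrapper and the pyttsx3 text-to-speech library, with an ordinary webcam standing in for the HoloLens camera.

```python
# Minimal sketch of a sign-reading loop (illustrative; not TextSpotting itself).
# Assumes: opencv-python, pytesseract (plus a Tesseract install), pyttsx3.
import cv2
import pytesseract
import pyttsx3

tts = pyttsx3.init()
cap = cv2.VideoCapture(0)  # webcam stands in for the HoloLens camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # OCR the whole frame; a real system would first detect likely sign regions
    text = pytesseract.image_to_string(gray).strip()
    if text:
        # Visual channel: overlay the recognized text, enlarged
        cv2.putText(frame, text[:40], (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        # Auditory channel: read the text aloud
        tts.say(text)
        tts.runAndWait()
    cv2.imshow("sign reader", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```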

    Smart Assistive Technology for People with Visual Field Loss

    Visual field loss results in an inability to clearly see objects in the surrounding environment, which makes it difficult to identify potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information to increase the user's awareness of possible hazards through early notifications. This thesis introduces a smart hazard attention system that helps people with visual field impairments navigate using smart glasses and a real-time hazard classification system. It takes the form of a novel, customised, machine learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design in this work consists of four modules: (1) a deep learning-based object detector to recognise static and moving objects in real time, (2) a Kalman filter-based multi-object tracker to track the detected objects over time and determine their motion model, (3) a neural network-based classifier to determine the level of danger for each hazard using motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module to translate the hazard level into a smart notification that increases the user's cognitive perception using the healthy vision within the visual field. For qualitative system testing, normal and personalised defected-vision models were implemented. The personalised defected-vision model was created to synthesise the visual function of people with visual field defects. Actual central and full-field test results were used to create a personalised model, which is used in the feedback generation stage of this system, where the visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life for people suffering from visual field loss. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and prevent falls and collisions early with minimal information.
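
    The four-module pipeline described above (detect, track, classify, notify) can be sketched as below. This is a rough illustration under stated assumptions, not the thesis implementation: the detector and the hazard rule are hypothetical stand-ins, and the tracker is a bare constant-velocity Kalman filter for a single object rather than a full multi-object tracker.

```python
# Sketch of the four-module hazard pipeline (hypothetical stand-ins).
import numpy as np

# Module 1: object detector (stand-in for the deep learning detector).
def detect_objects(frame):
    """Return (x, y) centroids of detected objects in this frame."""
    return [(120.0 + 3.0 * frame, 80.0)]  # placeholder: one object drifting right

# Module 2: constant-velocity Kalman filter for one tracked object.
class KalmanTrack:
    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])   # [x, y, vx, vy]
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt          # constant-velocity motion model
        self.H = np.eye(2, 4)                     # we observe (x, y) only
        self.Q = 0.01 * np.eye(4)                 # process noise
        self.R = np.eye(2)                        # measurement noise

    def step(self, z):
        # Predict
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new detection z = (x, y)
        y = np.asarray(z) - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Module 3: hazard classifier on motion features (stand-in for the neural network).
def classify_hazard(track):
    speed = np.linalg.norm(track.state[2:])
    return "high" if speed > 2.0 else "low"       # arbitrary threshold

# Module 4: feedback generation (a real system would render the notification
# inside the user's healthy visual field on the smart glasses).
def notify(level, position):
    if level == "high":
        print(f"ALERT: fast-moving hazard near {position}")

track = None
for frame in range(10):                           # placeholder frame source
    for z in detect_objects(frame):
        if track is None:
            track = KalmanTrack(*z)
        track.step(z)
        notify(classify_hazard(track), tuple(np.round(track.state[:2], 1)))
```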

    Evaluating Context-Aware Applications Accessed Through Wearable Devices as Assistive Technology for Students with Disabilities

    The purpose of these two single-subject design studies was to evaluate the use of wearable and context-aware technologies by college students with intellectual disability and autism as tools to increase independence and vocational skills. There is a compelling need for the development of tools and strategies that will facilitate independence and self-sufficiency and address poor adult outcomes for students with disabilities. Technology is considered to be a great equalizer for people with disabilities. The proliferation of new technologies allows access to real-time, contextually based information as a means to compensate for limitations in cognitive functioning, and it decreases the complexity of the prerequisite skills needed to use earlier technologies successfully. Six students participated in two single-subject design studies; three students participated in Study I and three different students participated in Study II. The results of these studies are discussed in the context of applying new technology to help individuals with intellectual disability and autism self-manage technological supports to learn new skills, set reminders, and enhance independence. During Study I, students were successfully taught to use a wearable smartglasses device, which delivered digital auditory and visual information, to complete three novel vocational tasks. The results indicated that all students learned all vocational tasks using the wearable device. Students also continued to use the device beyond the initial training phase to self-direct their learning and self-manage prompts for task completion as needed. During Study II, students were successfully taught to use a wearable smartwatch device to enter novel appointments for the coming week and to complete the tasks associated with each appointment. The results indicated that all students were able to self-operate the wearable device to enter appointments, attend all appointments on time, and complete all associated tasks.

    The Visual Guidance of Dance Images in Humanities Documentaries

    Do communicators' emotional expressions in dance-themed visuals in humanities documentaries depend on the visual guidance of dance camera language? The literature contains limited information on humanities documentaries with dance as a theme. The purpose of this study was to examine the effect of the creator's visual guidance on an audience's perception of dance images. Based on a humanities documentary's overall tone, the communicator selected appropriate dance sequences and clips and recorded them from a dancer's point of view, capturing the image's expressive meaning and bringing the documentary to a climax. The study explored whether the visual information an audience perceives from a dance whose body language is guided by the camera differs from what they perceive when watching the dance directly. This analysis opens up new avenues for video display and dance representation in dance video expression, satisfying an audience's sense of observation and communicating emotional expression.

    A Systematic Review of Extended Reality (XR) for Understanding and Augmenting Vision Loss

    Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment the residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on augmentation of a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the last decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids. By broadening end-user participation to early stages of the design process and shifting the focus from behavioral performance to qualitative assessments of usability, future research can develop XR technologies that not only allow for studying vision loss but also enable novel visual accessibility aids with the potential to impact the lives of millions of people living with vision loss.

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people worldwide who experience transportation-limiting visual impairments. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided the study of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV BVI accessibility to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    GazePrompt: Enhancing Low Vision People's Reading Experience with Gaze-Aware Augmentations

    Reading is a challenging task for low vision people. While conventional low vision aids (e.g., magnification) offer certain support, they cannot fully address the difficulties faced by low vision users, such as locating the next line and distinguishing similar words. To fill this gap, we present GazePrompt, a gaze-aware reading aid that provides timely and targeted visual and audio augmentations based on users' gaze behaviors. GazePrompt includes two key features: (1) Line-Switching support, which highlights the line a reader intends to read; and (2) Difficult-Word support, which magnifies or reads aloud a word that the reader hesitates over. Through a study with 13 low vision participants who performed well-controlled reading-aloud tasks with and without GazePrompt, we found that GazePrompt significantly reduced participants' line-switching time, reduced word recognition errors, and improved their subjective reading experiences. A follow-up silent-reading study showed that GazePrompt can enhance users' concentration and perceived comprehension of the reading content. We further derive design considerations for future gaze-based low vision aids.
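
    The gaze-contingent behavior described above reduces to dwell-time detection over a stream of gaze samples: a line change triggers the Line-Switching support, and dwelling on one word past a threshold triggers the Difficult-Word support. The sketch below is a minimal illustration of that idea, not the GazePrompt implementation; the sample format and the 400 ms hesitation threshold are assumed placeholders.

```python
# Sketch of gaze-driven reading support (illustrative; not GazePrompt itself).
from dataclasses import dataclass

HESITATION_MS = 400  # placeholder dwell threshold

@dataclass
class GazeSample:
    t_ms: int   # timestamp in milliseconds
    line: int   # index of the text line under the gaze point
    word: int   # index of the word under the gaze point

def process(samples):
    current_line, dwell_word, dwell_start = None, None, 0
    for s in samples:
        if s.line != current_line:
            current_line = s.line
            print(f"highlight line {s.line}")          # Line-Switching support
        if s.word != dwell_word:
            dwell_word, dwell_start = s.word, s.t_ms   # gaze moved to a new word
        elif s.t_ms - dwell_start >= HESITATION_MS:
            # Difficult-Word support: magnify or read the word aloud
            print(f"magnify/read aloud word {s.word} on line {s.line}")
            dwell_start = s.t_ms                       # avoid immediate re-trigger

# Example: the reader switches to line 2, then hesitates on word 5.
process([GazeSample(0, 2, 4), GazeSample(100, 2, 5),
         GazeSample(300, 2, 5), GazeSample(600, 2, 5)])
```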

    Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction

    In the present work, users' interaction with advanced systems was investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and from the results we extracted a set of recommendations for developers. The first application domain examined was the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of furniture in the kitchen. A sample of end users was observed while interacting with a virtual simulation of the interface, and video analysis of users' spontaneous behaviors revealed a set of significant interaction trends. The second application domain involved the exploration of an urban environment while mobile. In a comparative study, a haptic-audio interface and an audio-visual interface were employed to guide users toward landmarks and provide them with information. We showed that the two systems were equally efficient in supporting users and that both were well received. In a navigational task, we compared two tactile displays, each embedded in a different wearable device: a glove and a vest. Despite differences in shape and size, both systems successfully directed users to the target; users pointed out and commented on the strengths and flaws of the two devices. In a similar context, two devices supporting Augmented Reality technology, a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users to unveil intuitive interaction modalities for gestural interfaces. We also highlight the importance of giving the user the chance to choose the interaction mode best fitting the contextual characteristics and to adjust the features of each interaction mode. Finally, we outline the potential of wearable devices to support interaction on the move and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.