Hacking Blind Navigation
Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community aimed at developing assistive technologies to support the orientation and mobility of blind people, drawing on related disciplines such as accessible computing, cognitive science, computer vision, and ubiquitous computing. This workshop intends to bring these communities together to increase awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaboration. Interactions are fostered through a panel on Open Challenges and Avenues for Interdisciplinary Collaboration, Minute-Madness presentations, and a Hands-On Session where workshop participants can hack (design or prototype) new solutions to tackle open research challenges. An expected outcome is the emergence of new collaborations and research directions that can result in novel assistive technologies to support independent blind navigation.
Modeling Expertise in Assistive Navigation Interfaces for Blind People
Evaluating the impact of expertise and route knowledge on task performance can guide the design of intelligent and adaptive navigation interfaces. Expertise has been relatively unexplored in the context of assistive indoor navigation interfaces for blind people. To quantify the complex relationship between the user's walking patterns, route learning, and adaptation to the interface, we conducted a study with 8 blind participants. The participants repeated a set of navigation tasks while using a smartphone-based turn-by-turn navigation guidance app. The results demonstrate the gradual evolution of user skill and knowledge throughout the route repetitions, significantly impacting the task completion time. In addition to the exploratory analysis, we take a step towards tailoring the navigation interface to the user's needs by proposing a personalized recurrent neural network-based behavior model for expertise level classification.
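As a rough illustration of the kind of model this abstract describes, the sketch below runs a minimal many-to-one recurrent network over a sequence of walking-pattern feature vectors and outputs novice/expert class probabilities. The feature set, dimensions, and (untrained) random weights are assumptions for illustration, not the paper's model.

```python
import numpy as np

def rnn_classify(sequence, Wx, Wh, b, Wo, bo):
    """Minimal many-to-one RNN: fold a sequence of walking-pattern
    feature vectors into a hidden state, then read out class probabilities."""
    h = np.zeros(Wh.shape[0])
    for x in sequence:                      # one feature vector per timestep
        h = np.tanh(Wx @ x + Wh @ h + b)    # recurrent state update
    logits = Wo @ h + bo                    # read out from the final state
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
n_feat, n_hid, n_cls = 4, 8, 2              # e.g. speed, stops, veers, rotations
Wx = rng.normal(scale=0.3, size=(n_hid, n_feat))
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))
b  = np.zeros(n_hid)
Wo = rng.normal(scale=0.3, size=(n_cls, n_hid))
bo = np.zeros(n_cls)

walk = rng.normal(size=(20, n_feat))        # 20 timesteps of synthetic features
probs = rnn_classify(walk, Wx, Wh, b, Wo, bo)
```

In practice the weights would be trained per user (the "personalized" aspect) on labeled route repetitions rather than drawn at random.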
SLAM for Visually Impaired People: A Survey
In recent decades, several assistive technologies for visually impaired and
blind (VIB) people have been developed to improve their ability to navigate
independently and safely. At the same time, simultaneous localization and
mapping (SLAM) techniques have become sufficiently robust and efficient to be
adopted in the development of assistive technologies. In this paper, we first
report the results of an anonymous survey conducted with VIB people to
understand their experience and needs; we focus on digital assistive
technologies that help them with indoor and outdoor navigation. Then, we
present a literature review of assistive technologies based on SLAM. We discuss
proposed approaches and indicate their pros and cons. We conclude by presenting
future opportunities and challenges in this domain.
Airport Accessibility and Navigation Assistance for People with Visual Impairments
People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. To learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant to their travel experience. We found that, despite the challenging environment, participants were able to complete their itinerary independently, with few or no navigation errors and reasonable completion times. This study presents the first systematic evaluation establishing BLE technology as a strong approach for increasing the independence of visually impaired people in airports.
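The localization principle behind BLE beacon systems of this kind can be sketched with a log-distance path-loss model plus a weighted centroid over beacon positions. The beacon layout, calibration constants, and RSSI readings below are illustrative assumptions, not the deployed airport system.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate metres from an RSSI reading.
    tx_power is the (assumed) calibrated RSSI at 1 m from the beacon."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def weighted_centroid(beacons, rssis):
    """Estimate position as the centroid of beacon positions, weighted by
    inverse estimated distance, so closer (stronger) beacons count more."""
    pts = np.array([beacons[b] for b in rssis])
    w = np.array([1.0 / rssi_to_distance(r) for r in rssis.values()])
    return (w[:, None] * pts).sum(axis=0) / w.sum()

# Hypothetical beacon map (metres) and one scan's readings (dBm).
beacons = {"gate_a": (0.0, 0.0), "gate_b": (10.0, 0.0), "kiosk": (5.0, 8.0)}
readings = {"gate_a": -62.0, "gate_b": -75.0, "kiosk": -70.0}
pos = weighted_centroid(beacons, readings)
```

Real deployments typically smooth RSSI over time and calibrate `tx_power` and the path-loss exponent per environment, since airport halls scatter signals heavily.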
Multi-Sensor Localization and Tracking in Disaster Management and Indoor Wayfinding for Visually Impaired Users
This dissertation proposes a series of multi-sensor localization and tracking algorithms developed for two important application domains: disaster management and indoor wayfinding for blind and visually impaired (BVI) users. For disaster management, we developed two different localization algorithms, one each for Radio Frequency Identification (RFID) and Bluetooth Low Energy (BLE) technology, which enable the disaster management system to track patients in real time. Both algorithms work with smartphones and wearable devices in the absence of any pre-deployed infrastructure. Regarding indoor wayfinding for BVI users, we have explored several types of indoor positioning techniques, including BLE-based, inertial, visual, and hybrid approaches, to offer accurate and reliable location and orientation in complex navigation spaces. This dissertation makes significant contributions to the design and implementation of various localization and tracking algorithms under the differing requirements of these applications.
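A minimal sketch of the multi-sensor fusion idea behind such hybrid approaches: a 1-D Kalman filter that predicts with inertial dead-reckoning displacements and corrects with noisy BLE position fixes. The motion model and noise variances are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

def kalman_step(x, P, u, z, q=0.05, r=4.0):
    """One predict/update cycle of a 1-D Kalman filter.
    x, P : position estimate and its variance
    u    : displacement from inertial dead reckoning (predict step)
    z    : noisy BLE position fix (update step)
    q, r : assumed process and measurement noise variances."""
    x_pred = x + u                      # predict with the inertial step
    P_pred = P + q
    K = P_pred / (P_pred + r)           # gain: how much to trust the BLE fix
    x_new = x_pred + K * (z - x_pred)   # correct toward the measurement
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Simulated walk: noisy step lengths (inertial) plus noisy BLE fixes.
rng = np.random.default_rng(1)
true_pos, x, P = 0.0, 0.0, 1.0
for _ in range(50):
    true_pos += 0.7                           # user walks 0.7 m per step
    u = 0.7 + rng.normal(scale=0.1)           # inertial estimate of that step
    z = true_pos + rng.normal(scale=2.0)      # coarse BLE position fix
    x, P = kalman_step(x, P, u, z)
```

The fused estimate tracks the walker far more tightly than either sensor alone: inertial drift is bounded by the BLE corrections, and BLE noise is smoothed by the motion model.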
An augmented reality sign-reading assistant for users with reduced vision
People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
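The pipeline this abstract describes (detect text, highlight its location, render high-contrast AR lettering, optionally speak it) can be sketched as plain data transformations. The matching heuristic, data shapes, and all names below are hypothetical, not the prototype's implementation.

```python
def match_sign(detections, target):
    """Pick the detected sign whose OCR text best matches the target room
    label. Exact match first, then substring (e.g. "Room 212" contains
    "212"); this heuristic is illustrative only."""
    if target in detections:
        return target, detections[target]
    for text, loc in detections.items():
        if target in text:
            return text, loc
    return None

def ar_label(text, loc):
    """Build a high-contrast AR overlay spec plus a text-to-speech string."""
    return {"text": text.upper(), "at": loc,
            "fg": "white", "bg": "black",
            "speak": f"Sign reads {text}"}

# Hypothetical OCR detections: sign text -> (x, y) screen location.
signs = {"Room 210": (120, 80), "Room 212": (320, 90), "EXIT": (500, 40)}
hit = match_sign(signs, "212")
overlay = ar_label(*hit)
```

On a real headset the detections would come from the device's text-recognition API and the overlay would be rendered anchored to the sign's 3-D pose rather than a 2-D screen point.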
Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network
Training a robot that engages with people is challenging, because it is
expensive to involve people in a robot training process requiring numerous data
samples. This paper proposes a human path prediction network (HPPN) and an
evolution strategy-based robot training method using virtual human movements
generated by the HPPN, which compensates for this sample inefficiency problem.
We applied the proposed method to the training of a robotic guide for visually
impaired people, which was designed to collect multimodal human response data
and reflect such data when selecting the robot's actions. We collected 1,507
real-world episodes for training the HPPN and then generated over 100,000
virtual episodes for training the robot policy. User test results indicate that
our trained robot accurately guides blindfolded participants along a goal path.
In addition, because the reward was designed to pursue both guidance accuracy and
human comfort during robot policy training, our robot improves the smoothness of
human motion while maintaining guidance accuracy. This sample-efficient training
method is expected to be widely applicable to robots and computing machinery that
physically interact with humans.
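The policy-training side can be sketched as a simple evolution strategy: perturb the policy parameters, score each perturbation, and move along the reward-weighted average of the perturbations. The toy quadratic reward below stands in for rollouts scored against HPPN-generated virtual human paths and is an assumption for illustration, not the paper's training setup.

```python
import numpy as np

def evolution_strategy(reward_fn, theta, iters=200, pop=40, sigma=0.1, lr=0.05):
    """Black-box policy search: estimate a reward gradient from Gaussian
    perturbations of the parameters, with no backprop through the
    environment. Baseline subtraction reduces the estimator's variance."""
    rng = np.random.default_rng(42)
    for _ in range(iters):
        eps = rng.normal(size=(pop, theta.size))        # perturbations
        rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
        advantages = rewards - rewards.mean()           # subtract baseline
        theta = theta + lr / (pop * sigma) * eps.T @ advantages
    return theta

# Toy reward: guidance accuracy and comfort collapse into closeness to a
# hypothetical "ideal" parameter vector.
target = np.array([0.5, -0.3, 1.2])
reward = lambda th: -np.sum((th - target) ** 2)
theta = evolution_strategy(reward, np.zeros(3))
```

Because each sample is just a rollout score, the expensive real-world episodes can be replaced by cheap virtual ones, which is exactly the leverage the HPPN provides.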
Principles and Guidelines for Evaluating Social Robot Navigation Algorithms
A major challenge to deploying robots widely is navigation in human-populated
environments, commonly referred to as social robot navigation. While the field
of social navigation has advanced tremendously in recent years, the fair
evaluation of algorithms that tackle social navigation remains hard because it
involves not just robotic agents moving in static environments but also dynamic
human agents and their perceptions of the appropriateness of robot behavior. In
contrast, clear, repeatable, and accessible benchmarks have accelerated
progress in fields like computer vision, natural language processing and
traditional robot navigation by enabling researchers to fairly compare
algorithms, revealing limitations of existing solutions and illuminating
promising new directions. We believe the same approach can benefit social
navigation. In this paper, we pave the road towards common, widely accessible,
and repeatable benchmarking criteria to evaluate social robot navigation. Our
contributions include (a) a definition of a socially navigating robot as one
that respects the principles of safety, comfort, legibility, politeness, social
competency, agent understanding, proactivity, and responsiveness to context,
(b) guidelines for the use of metrics, development of scenarios, benchmarks,
datasets, and simulators to evaluate social navigation, and (c) a design of a
social navigation metrics framework to make it easier to compare results from
different simulators, robots and datasets.
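As a small example of the kind of quantities such a metrics framework might standardize, the sketch below computes three illustrative measures from synchronized robot and human trajectories. The metric definitions and the 1.2 m personal-space radius are assumptions for illustration, not the paper's framework.

```python
import numpy as np

def social_nav_metrics(robot_path, human_path, personal_space=1.2):
    """Compute example metrics from synchronized 2-D trajectories:
    - min_distance: closest human-robot approach (safety)
    - ps_violation_rate: fraction of timesteps inside the personal-space
      radius (comfort)
    - path_efficiency: straight-line distance over distance travelled."""
    robot = np.asarray(robot_path, float)
    human = np.asarray(human_path, float)
    d = np.linalg.norm(robot - human, axis=1)              # per-step distances
    step_len = np.linalg.norm(np.diff(robot, axis=0), axis=1).sum()
    direct = np.linalg.norm(robot[-1] - robot[0])
    return {"min_distance": float(d.min()),
            "ps_violation_rate": float((d < personal_space).mean()),
            "path_efficiency": float(direct / step_len) if step_len else 1.0}

# Toy episode: the robot drives straight while a person passes nearby.
robot = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
human = [(2, 3), (2, 2), (2, 1), (2, 2), (2, 3)]
m = social_nav_metrics(robot, human)
```

Standardizing even simple definitions like these (which trajectories are compared, at what rate, with what thresholds) is what makes results comparable across simulators, robots, and datasets.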