
    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for the millions of people worldwide who experience transportation-limiting visual impairments. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and a lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here comprises ten studies, with three published papers and three papers submitted to high-impact outlets, that together address accessibility across the complete trip of transportation. The first paper presents a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Its results guided the investigation of pre-journey ridesharing needs among BVI users, which were addressed in the second paper via a survey with transit service drivers (n=90), interviews with BVI users (n=12), and prototype design evaluations with users (n=6), all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with users (n=12), presented in the third paper, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with BVI users (n=187). Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness, tested with participants (n=14) in the fifth paper, and led to a novel gestural-audio interface evaluated with users (n=23) in the sixth paper. Strong user support across these studies suggests positive outcomes in the pursuit of actionable situational awareness and control. The cumulative results of this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV accessibility for BVI users to date. By considering both pre-journey and in-vehicle accessibility, these results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    Designing Accessible Nonvisual Maps

    Access to nonvisual maps has long required special equipment and training; Google Maps, ESRI products, and other commonly used digital maps are entirely visual and thus inaccessible to people with visual impairments. This project presents the design and evaluation of an easy-to-use digital auditory map and an interactive 3D model map. A co-design study was also undertaken to discover the tools needed for an ideal nonvisual navigational experience. Baseline results of both studies are presented so that future work can improve on the designs. The user evaluation revealed that both prototypes were moderately easy to use. An ideal nonvisual navigational experience, according to these participants, consists of both an accurate turn-by-turn navigation system and an interactive map. Future work should focus on developing the tools needed to enable this ideal experience.
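    To make the auditory-map interaction concrete, below is a minimal Python sketch of how a touch position on a digital map could trigger a spoken announcement of the feature underneath. The grid layout, feature names, and the speak placeholder are illustrative assumptions, not details from the project.

```python
# A minimal sketch of an auditory-map interaction. Assumptions (not
# from the paper): the map is a simple grid of named features, and
# "speak" stands in for any text-to-speech backend.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    kind: str  # e.g. "street", "building", "park"

# Hypothetical 3x3 map grid; a real map would come from GIS data.
GRID = {
    (0, 0): Feature("Main St", "street"),
    (1, 0): Feature("City Library", "building"),
    (2, 1): Feature("Riverside Park", "park"),
}

def speak(text: str) -> None:
    """Placeholder for a TTS engine (e.g. a screen reader API)."""
    print(f"[TTS] {text}")

def on_touch(x: int, y: int) -> None:
    """Announce the feature under the user's finger, if any."""
    feature = GRID.get((x, y))
    if feature is None:
        speak("Open area")
    else:
        speak(f"{feature.kind}: {feature.name}")

# Example exploration: the user drags a finger across three cells.
for cell in [(0, 0), (1, 1), (2, 1)]:
    on_touch(*cell)
```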

    CDI-Type II: Collaborative Research: Cyber Enhancement of Spatial Cognition for the Visually Impaired

    Wayfinding is an essential capability for any person who wishes to have an independent lifestyle. It requires successful execution of several tasks, including navigation and object and place recognition, all of which necessitate accurate assessment of the surrounding environment. For a visually impaired person these tasks may be exceedingly difficult to accomplish, and there are risks associated with failure in any of them. Guide dogs and white canes are widely used for the purposes of navigation and environment sensing, respectively. The former, however, has costly and often prohibitive training requirements, while the latter can only provide cues about obstacles in one's surroundings. Human performance on tasks that depend on visual information can be improved by sensing that provides information and environmental cues, such as position, orientation, local geometry, and object descriptions, via appropriate sensors and sensor-fusion algorithms. Most work on wayfinding aids has focused on outdoor environments and has led to the development of speech-enabled GPS-based navigation systems that provide information describing streets, addresses, and points of interest. In contrast, the limited technology available for indoor navigation requires significant modification to the building infrastructure, whose high cost has prevented wide adoption. This proposal adopts a multi-faceted approach to solving the indoor navigation problem for people with limited vision. It leverages expertise from robotics, computer vision, and blind spatial cognition, with behavioral studies on interface design guiding the discovery of information requirements and optimal delivery methods for an indoor navigation system. Designing perception and navigation algorithms, implemented on miniature commercially available hardware, while explicitly considering the spatial cognition capabilities of the visually impaired, will lead to indoor navigation systems that assist blind people in their wayfinding tasks while facilitating cognitive-map development.
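    As one concrete illustration of the kind of sensor-fusion building block such an indoor navigation system might rely on, the sketch below fuses a drift-prone gyroscope with a noisy compass to estimate heading using a complementary filter. The sensor names, blend weight, and simulated values are assumptions for illustration and are not taken from the proposal.

```python
# A minimal complementary-filter sketch for heading estimation:
# integrate the gyroscope (fast but drifts), then nudge the estimate
# toward the compass (noisy but drift-free). All values illustrative.

import math

def fuse_heading(prev_heading: float, gyro_rate: float, dt: float,
                 compass_heading: float, alpha: float = 0.98) -> float:
    """One filter step; alpha controls how much the gyro is trusted."""
    predicted = prev_heading + gyro_rate * dt
    # Blend toward the compass, handling the 2*pi wrap-around safely.
    error = math.atan2(math.sin(compass_heading - predicted),
                       math.cos(compass_heading - predicted))
    return predicted + (1.0 - alpha) * error

# Simulated walk: constant turn rate with a slowly changing compass.
heading = 0.0
for step in range(5):
    heading = fuse_heading(heading, gyro_rate=0.1, dt=0.1,
                           compass_heading=0.01 * step + 0.02)
    print(f"step {step}: heading = {heading:.4f} rad")
```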

    Head-mounted Sensory Augmentation Device: Comparing Haptic and Audio Modality

    This paper investigates and compares the effectiveness of the haptic and audio modalities for navigation in low-visibility environments using a sensory augmentation device. A second-generation head-mounted vibrotactile interface was developed as a sensory augmentation prototype to help users navigate in such environments. In our experiment, subjects navigate along a wall relying on haptic or audio feedback as navigation commands. The feedback is generated from wall-distance measurements taken by a set of 12 ultrasound sensors placed around a helmet and classified by a multilayer perceptron neural network. Results showed that the haptic modality leads to significantly lower route deviation than auditory feedback. Furthermore, the NASA TLX questionnaire showed that subjects reported lower cognitive workload with the haptic modality, although both modalities enabled users to navigate along the wall.
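    The sketch below illustrates the classification stage described above: a multilayer perceptron mapping 12 ultrasound distance readings to a navigation command. Since the paper's training data and command set are not given here, the synthetic readings, labels, and network size are assumptions.

```python
# A minimal sketch of an MLP that maps 12 ultrasound distances to a
# navigation command. Synthetic data and labels are assumptions; the
# paper's actual sensor layout and commands may differ.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
COMMANDS = ["veer_left", "straight", "veer_right"]

def synthetic_sample():
    """12 distances (m); label depends on which side the wall is closer."""
    readings = rng.uniform(0.2, 3.0, size=12)
    left, right = readings[:6].mean(), readings[6:].mean()
    if left < right - 0.3:
        label = 2   # wall close on the left -> veer right
    elif right < left - 0.3:
        label = 0   # wall close on the right -> veer left
    else:
        label = 1   # roughly centered -> keep straight
    return readings, label

X, y = zip(*(synthetic_sample() for _ in range(500)))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(np.array(X), np.array(y))

sample, _ = synthetic_sample()
print("command:", COMMANDS[clf.predict(sample.reshape(1, -1))[0]])
```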

    Virtual reality systems for rodents

    Over the last decade, virtual reality (VR) setups for rodents have been developed and utilized to investigate the neural foundations of behavior. Such VR systems have become very popular since they allow the use of state-of-the-art techniques to measure neural activity in behaving rodents that cannot easily be used with classical behavioral setups. Here, we provide an overview of rodent VR technologies and review recent results from related research. We discuss commonalities and differences as well as merits and issues of different approaches. A special focus is given to the experimental (behavioral) paradigms in use. Finally, we comment on possible use cases that may further exploit the potential of VR in rodent research and hence inspire future studies.

    COVID-19 and Visual Disability: Can’t Look and Now Don’t Touch

    This article provides a scientific explanation for the pandemic-related challenges that blind and visually impaired (BVI) people experience, including challenges involving spatial cognition, nonvisual information access, and environmental perception. It also offers promising technical solutions to these challenges.

    Auditory navigation with a tubular acoustic model for interactive distance cues and personalized head-related transfer functions: an auditory target-reaching task

    This paper presents a novel spatial auditory display that combines a virtual environment, based on a Digital Waveguide Mesh (DWM) model of a small tubular shape, with a binaural rendering system using personalized head-related transfer functions (HRTFs), allowing interactive selection of absolute 3D spatial cues of direction as well as egocentric distance. The tube metaphor in particular minimizes loudness changes with distance, providing mainly direct-to-reverberant and spectral cues. The proposed display was assessed through a target-reaching task in which participants explore a 2D virtual map with a pen tablet and hit a sound source (the target) using auditory information only; per-subject time to hit and traveled distance were analyzed in three experiments. The first experiment assessed the proposed HRTF personalization method and the dimensionality of the reaching task, with particular attention to elevation perception; we showed that most subjects performed better when they had to reach a vertically unbounded (2D) rather than an elevated (3D) target. The second experiment analyzed the interaction between the tube metaphor and HRTFs, showing a dominant effect of the DWM model over binaural rendering. In the last experiment, participants using absolute distance cues from the tube model performed comparably to when they could rely on more robust, although relative, intensity cues. These results suggest that participants made proficient use of both binaural and reverberation cues during the task, displayed as part of a coherent 3D sound model, despite the known complexity of using both such cues. HRTF personalization was beneficial for participants who were able to perceive the vertical dimension of a virtual sound. Further work is needed to add full physical consistency to the proposed auditory display.
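    To illustrate the DWM component, the sketch below steps a toy rectilinear digital waveguide mesh, whose junction pressures obey the standard update p[t+1] = (sum of the four neighbours at t)/2 - p[t-1]. The mesh size, impulse excitation, and wrap-around boundaries are simplifying assumptions; the paper's tubular geometry and boundary conditions would differ.

```python
# A minimal 2D rectilinear DWM simulation. Boundaries here wrap around
# (np.roll) purely for brevity; a real model would use reflective or
# absorbing junction terminations shaped like the tube.

import numpy as np

W, H, STEPS = 16, 16, 50
p_prev = np.zeros((H, W))
p_curr = np.zeros((H, W))
p_curr[H // 2, W // 2] = 1.0   # impulse excitation at the centre

for _ in range(STEPS):
    neighbours = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
                  np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1))
    p_next = neighbours / 2.0 - p_prev   # lossless junction update
    p_prev, p_curr = p_curr, p_next

# The pressure at a "listener" junction over time would be the signal
# fed to the binaural (HRTF) rendering stage.
print("pressure at listener junction:", p_curr[H // 2, W // 4])
```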

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge by way of an inefficient adaptation (e.g., following the walls of a room to reach a door rather than walking to it in a straight line). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human visual system. To overcome these spatial perception challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects that address varying spatial perception problems for blind users. First, we consider the development of nonvisual natural user interfaces for interacting with large displays. This work explores the haptic interaction space to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented that build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next, we evaluate the use of wearable technology to aid blind individuals in navigating large open spaces that lack the tactile landmarks used during traditional white-cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface that minimizes veering as the user crosses a large open space. Together, these projects represent an exploration into the use of modern technology to augment the spatial perception capabilities of blind users.
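    As a concrete illustration of the anti-veering idea, the sketch below estimates a walker's lateral drift from the intended straight line and maps it to a stereo pan for a corrective audio cue. The geometry helpers and panning rule are illustrative assumptions, not the actual computer vision pipeline from the project.

```python
# A minimal sketch of aural veering correction: signed drift from the
# start->goal line is mapped to a pan in [-1, 1]. All parameters are
# illustrative assumptions.

import math

def lateral_offset(start, goal, position):
    """Signed distance (m) of `position` from the line start->goal.
    Positive means the walker has drifted to the right of the line."""
    (sx, sy), (gx, gy), (px, py) = start, goal, position
    dx, dy = gx - sx, gy - sy
    length = math.hypot(dx, dy)
    # 2D cross product gives the signed perpendicular distance.
    return ((px - sx) * dy - (py - sy) * dx) / length

def audio_pan(offset, max_offset=1.0):
    """Map drift to a stereo pan: drift right -> tone in the left ear,
    steering the walker back toward the line."""
    return max(-1.0, min(1.0, -offset / max_offset))

# Walker crossing a 20 m open space, currently drifted 0.5 m right.
pan = audio_pan(lateral_offset((0, 0), (0, 20), (0.5, 10)))
print(f"pan = {pan:+.2f}  (negative = cue in left ear)")
```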