    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with firefighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the feeling of security these provide: they simply ignored instructions that contradicted those routines.

    Inventory of ATT system requirements for elderly and disabled drivers and travellers

    This Inventory of ATT System Requirements for Elderly and Disabled Drivers and Travellers is the product of the TELSCAN project’s Workpackage 3: Identification and Updating of User Requirements of Elderly and Disabled Travellers. It describes the methods and tools used to identify the needs of elderly and disabled (E&D) travellers. The result of this investigation is a summary of the requirements of elderly and disabled travellers using different modes of transport, including private cars, buses/trams, metros/trains, ships and airplanes. It provides a generic user requirements specification that can guide the design of all transport telematics systems. However, it is important to stress that projects should also capture a more detailed definition of user requirements for their specific application area or system.

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people experiencing transportation-limiting visual impairments worldwide. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided an investigation of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV accessibility for BVI users to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.
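    The gestural-audio interface summarised above pairs simple gestures with spoken trip updates. As a minimal sketch of that idea, the Python snippet below maps hypothetical gestures to situational-awareness announcements; the gesture names, trip-state fields and wording are illustrative assumptions, not the dissertation's actual design.

```python
# Hypothetical mapping from in-vehicle gestures to spoken trip updates,
# in the spirit of the gestural-audio interface summarised above.
ANNOUNCEMENTS = {
    "swipe_up":   lambda s: f"Estimated arrival in {s['eta_min']} minutes.",
    "swipe_down": lambda s: f"Current speed {s['speed_kmh']} kilometres per hour.",
    "swipe_left": lambda s: f"Now travelling on {s['street']}.",
    "double_tap": lambda s: "Requesting a stop at the next safe location.",
}

def announce(gesture: str, trip_state: dict) -> str:
    """Resolve a recognised gesture to a spoken situational-awareness cue."""
    handler = ANNOUNCEMENTS.get(gesture)
    return handler(trip_state) if handler else "Gesture not recognised."

state = {"eta_min": 12, "speed_kmh": 48, "street": "Main Street"}
print(announce("swipe_up", state))
print(announce("swipe_left", state))
```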

    Following a Robot using a Haptic Interface without Visual Feedback

    Search and rescue operations are often undertaken in dark and noisy environments in which rescue teams must rely on haptic feedback for navigation and safe exit. In this paper, we discuss designing and evaluating a haptic interface that enables a human to follow a robot through an environment with no visibility. We first briefly analyse the task at hand and discuss the considerations that have led to our current interface design. The second part of the paper describes our testing procedure and the results of our first informal tests. Based on these results, we discuss future improvements to our design.
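    To illustrate the kind of mapping such an interface requires, the sketch below converts the robot's pose relative to the user into left/right vibration intensities plus a pacing cue. The two-actuator layout, 90-degree saturation and lag threshold are assumptions for illustration, not the design evaluated in the paper.

```python
def haptic_cue(robot_bearing_deg: float, robot_distance_m: float,
               max_lag_m: float = 2.0) -> dict:
    """Map the robot's pose relative to the user to vibrotactile cues.

    robot_bearing_deg: robot's bearing relative to the user's heading
                       (negative means the robot is to the user's left).
    robot_distance_m:  current user-robot separation.
    Returns actuator intensities in [0, 1].
    """
    # Turn cue: vibrate on the side the user should turn toward,
    # growing with bearing error and saturating at 90 degrees.
    turn = max(-1.0, min(1.0, robot_bearing_deg / 90.0))
    left = max(0.0, -turn)
    right = max(0.0, turn)
    # Pace cue: stronger the further the user lags behind the robot.
    pace = min(1.0, robot_distance_m / max_lag_m)
    return {"left": left, "right": right, "pace": pace}

# Robot 30 degrees to the left, 1.5 m ahead: left actuator fires at 0.33.
print(haptic_cue(robot_bearing_deg=-30.0, robot_distance_m=1.5))
```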

    The Role of Haptics in Games

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impacts a user's choice. Theories in human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual differences in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance. In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting a design for an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.
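    A minimal sketch of the interaction-grammar idea follows: both modalities feed one shared command set, leaving the modality choice to the user (who, per the findings above, tends to pick touch for navigation and speech for non-navigation operations). The class and command names are invented for illustration and are not the browser described in the paper.

```python
from dataclasses import dataclass
from typing import Literal

Modality = Literal["speech", "touch"]

@dataclass
class InputEvent:
    modality: Modality
    command: str  # e.g. "next item", "read section", "go back"

class EyesFreeBrowser:
    """Toy front end where both modalities share one command grammar."""

    def __init__(self) -> None:
        self.usage = {"speech": 0, "touch": 0}

    def handle(self, event: InputEvent) -> str:
        # Either modality can issue any command; the user is free to pick
        # whichever suits the task, and we log the choice for analysis.
        self.usage[event.modality] += 1
        return f"executing '{event.command}' via {event.modality}"

browser = EyesFreeBrowser()
print(browser.handle(InputEvent("touch", "next item")))      # navigation
print(browser.handle(InputEvent("speech", "read section")))  # non-navigation
print(browser.usage)
```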

    State of the art review on walking support system for visually impaired people

    The technology for terrain detection and walking support for blind people has improved rapidly over the last couple of decades, although efforts to assist visually impaired people began long ago. Currently, a variety of portable and wearable navigation systems are available on the market to help blind people navigate in local or remote areas. The systems considered in this work fall into three subgroups: electronic travel aids (ETAs), electronic orientation aids (EOAs) and position locator devices (PLDs); we focus mainly on electronic travel aids (ETAs). This paper presents a comparative survey of various portable and wearable walking support systems (a subcategory of ETAs, or early-stage ETAs), describing each system's working principle, advantages and disadvantages, so that researchers can readily grasp the current state of assistive technology for blind people, along with the requirements for optimising the design of walking support systems for their users.
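    For readers new to the ETA category, the sketch below shows the basic sense-and-alert pattern most such aids share: range the path ahead and scale a tactile cue inversely with obstacle distance. The sensor and actuator calls are simulated stand-ins, and the threshold and update rate are assumptions.

```python
import random
import time

def read_ultrasonic_cm() -> float:
    # Stand-in for a real range read (e.g. timing an ultrasonic echo);
    # here we simply simulate an obstacle at a varying distance.
    return random.uniform(20.0, 300.0)

def set_vibration(intensity: float) -> None:
    # Stand-in for driving a vibrotactile actuator.
    print(f"vibration intensity: {intensity:.2f}")

def eta_alert_step(threshold_cm: float = 120.0) -> None:
    """One cycle of the basic ETA pattern: range the path ahead and
    scale the tactile cue inversely with obstacle distance."""
    distance = read_ultrasonic_cm()
    intensity = max(0.0, 1.0 - distance / threshold_cm)
    set_vibration(intensity)

for _ in range(5):       # short demo run
    eta_alert_step()
    time.sleep(0.05)     # ~20 Hz keeps the feedback responsive
```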

    A method to provide accessibility for visual components to vision impaired

    Non-textual graphical information (line graphs, bar charts, pie charts, etc.) is increasingly pervasive in digital scientific literature and business reports, enabling readers to quickly grasp the nature of the underlying information. These graphical components are commonly used to present data in an easy-to-interpret way, and graphs are frequently used in economics, mathematics and other scientific subjects. In general, however, such data visualization techniques are useless for blind people, and being unable to access graphical information easily is a major obstacle for blind people pursuing scientific study and careers. This paper suggests a method to extract the implicit information of bar chart, pie chart, line chart and mathematical graph components of an electronic document and present it to vision-impaired users in audio format. The goal is to provide simple-to-use, efficient and available presentation schemes for non-textual components that can help vision-impaired users comprehend them without needing any further devices or equipment. A software application has been developed based on this research. The output of the application is a textual summary of the graphic, including the core content of the hypothesized intended message of the graphic designer. The textual summary of the graphic is then conveyed to the user by text-to-speech software. The benefit of this approach is automatically providing the user with the message and knowledge that one would gain from viewing the graphic.
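    As a toy sketch of the pipeline's final stages, the snippet below turns extracted bar-chart data into a textual summary ready for a text-to-speech engine. The function name, summary wording and sample data are invented for illustration; the paper's actual extraction and message-hypothesis steps are not reproduced here.

```python
def summarize_bar_chart(title: str, categories: list[str],
                        values: list[float], unit: str = "") -> str:
    """Build a spoken-style summary of a bar chart, approximating the
    message a sighted reader would take away at a glance."""
    pairs = sorted(zip(categories, values), key=lambda p: p[1])
    lo_cat, lo_val = pairs[0]
    hi_cat, hi_val = pairs[-1]
    return (f"Bar chart titled '{title}' with {len(pairs)} bars. "
            f"Highest: {hi_cat} at {hi_val}{unit}. "
            f"Lowest: {lo_cat} at {lo_val}{unit}.")

summary = summarize_bar_chart("Quarterly revenue",
                              ["Q1", "Q2", "Q3", "Q4"],
                              [4.1, 5.3, 3.8, 6.2], unit=" million")
print(summary)
# A text-to-speech engine (e.g. pyttsx3: init(), say(), runAndWait())
# would voice `summary` as the final stage of the pipeline.
```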