
    Improving the accessibility of digital documents for visually impaired users: Contributions of the Textual Architecture Model

    This paper presents a framework for describing text formatting, based on a model from the field of logic and linguistics, the Textual Architecture Model [23]. The goal is to improve document accessibility for blind users. The project will later focus on evaluating the efficiency of different navigation and content presentation strategies based on this framework.

    DOKY: A Multi-Modal User Interface for Non-Visual Presentation, Navigation and Manipulation of Structured Documents on Mobile and Wearable Devices

    There are a large number of highly structured documents available on the Internet. The logical document structure is very important for readers to handle document content efficiently. In graphical user interfaces, each logical structure element is presented by a specific visualisation, a graphical icon. This representation allows visual readers to recognise the structure at a glance; another advantage is that it enables direct navigation and manipulation. Blind and visually impaired persons are unable to use graphical user interfaces, and for the emerging category of mobile and wearable devices, where only small visual displays are available or no visual display at all, a non-visual alternative is also required. A multi-modal user interface for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices like smartphones, smartwatches or tablets has been developed as a result of inductive research among 205 blind and visually impaired participants. It enables the user to get a fast overview of the document structure and to efficiently skim and scan the document content by identifying the type, level, position, length, relationship and content text of each element, as well as to focus, select, activate, move, remove and insert structure elements or text. These interactions are presented in a non-visual way using Earcons, Tactons and synthetic speech utterances, serving the auditory and tactile human senses. Navigation and manipulation are provided through the multi-touch, motion (linear acceleration and rotation) or speech recognition input modalities. It is a complete solution for reading, creating and editing structured documents in a non-visual way, and no special hardware is required. The name DOKY is derived from a short form of the terms document and accessibility. A flexible, platform-independent, event-driven software architecture implementing the DOKY user interface is presented, together with the automated structured observation research method employed to investigate the effectiveness of the proposed user interface. Because it is platform- and language-neutral, it can be used in a wide variety of platforms, environments and applications for mobile and wearable devices. Each component is defined by interfaces and abstract classes only, so that it can be easily changed or extended, and is grouped in a semantically self-contained package. An investigation into the effectiveness of the proposed DOKY user interface has been carried out to see whether the proposed user interface design concepts and user interaction design concepts are effective means for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices, by automated structured observations of 876 blind and visually impaired research subjects performing 19 exercises on a highly structured example document using the DOKY Structured Observation App on their own mobile or wearable device remotely over the Internet. The results showed that the proposed user interface design concepts for presentation and navigation and the user interaction design concepts for manipulation are effective, and that their effectiveness depends on the input modality and hardware device employed as well as on the use of screen readers.
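
    The abstract describes an event-driven architecture in which every component is defined only by interfaces and abstract classes so it can be swapped or extended. The sketch below illustrates what such a contract could look like; all type and method names (StructureElement, NonVisualRenderer, InputModality, etc.) are assumptions made for illustration and are not taken from the actual DOKY code base.

```java
// Hypothetical sketch of an interface-based, event-driven component contract,
// loosely following the architecture described in the abstract.
// None of these names come from the real DOKY implementation.

/** A logical structure element of a document (heading, list item, paragraph, ...). */
interface StructureElement {
    String type();                                    // e.g. "heading", "list-item"
    int level();                                      // nesting depth within the document
    String contentText();                             // the element's text content
    java.util.List<StructureElement> children();      // nested elements
}

/** Presents an element non-visually, e.g. as an Earcon, Tacton or speech utterance. */
interface NonVisualRenderer {
    void present(StructureElement element);
}

/** Receives navigation events produced by an input modality (touch, motion, speech). */
interface NavigationListener {
    void onFocus(StructureElement element);
    void onActivate(StructureElement element);
}

/** Abstract input modality that translates raw input into navigation events. */
abstract class InputModality {
    private final java.util.List<NavigationListener> listeners = new java.util.ArrayList<>();

    void addListener(NavigationListener listener) { listeners.add(listener); }

    protected void fireFocus(StructureElement element) {
        for (NavigationListener l : listeners) l.onFocus(element);
    }

    protected void fireActivate(StructureElement element) {
        for (NavigationListener l : listeners) l.onActivate(element);
    }
}
```

    Because the renderer and the input modality only see each other through these interfaces, a multi-touch, motion or speech implementation could be substituted without changing the presentation layer, which is the kind of flexibility the abstract attributes to the architecture.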

    Interactive maps for visually impaired people: design, usability and spatial cognition

    Knowing the geography of an urban environment is crucial for visually impaired people. Raised-line tactile maps are generally used, but they have significant limitations (limited amount of information, reliance on a braille legend, etc.). Recent technological progress allows the development of innovative solutions that overcome these limitations. In this thesis, we present the design of an accessible interactive map developed through a participatory design process. The map consists of a multi-touch screen with a raised-line map overlay and speech output, and provides auditory information when the user double-taps on map elements. We demonstrated experimentally that this prototype was more effective and more satisfying for visually impaired users than a simple raised-line map. We also explored and tested different types of advanced non-visual interaction for exploring the map. This thesis demonstrates the importance of interactive tactile maps for visually impaired people and their spatial cognition.
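
    The interaction described in the abstract pairs a raised-line overlay on a multi-touch surface with speech output triggered by a double tap on a map element. Below is a minimal sketch of that loop, assuming a simple bounding-box hit test and a generic text-to-speech facade; the class and method names are illustrative only and do not describe the actual prototype's code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a double-tap position is hit-tested against known map
// elements and, on a match, the element's description is spoken aloud.
class AudioTactileMap {

    /** A named region of the tactile overlay, stored as a bounding box. */
    record MapElement(String description, int x, int y, int width, int height) {
        boolean contains(int px, int py) {
            return px >= x && px < x + width && py >= y && py < y + height;
        }
    }

    /** Placeholder for a platform text-to-speech engine. */
    interface SpeechOutput {
        void speak(String text);
    }

    private final Map<String, MapElement> elements = new HashMap<>();
    private final SpeechOutput speech;

    AudioTactileMap(SpeechOutput speech) { this.speech = speech; }

    void addElement(String id, MapElement element) { elements.put(id, element); }

    /** Called by the touch layer when a double tap is detected at (x, y). */
    void onDoubleTap(int x, int y) {
        for (MapElement element : elements.values()) {
            if (element.contains(x, y)) {
                speech.speak(element.description);  // auditory information for the tapped element
                return;
            }
        }
    }
}
```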

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.