24,013 research outputs found

    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common way of making data more accessible, but the associated methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could provide additional feedback to support a point-and-click style of interaction for visually impaired users. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, from which guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation of a prototype interface are also presented. Providing an absolute-position input device together with tactile feedback allowed users to explore the graph using tactile and proprioceptive cues, in a manner analogous to point-and-click techniques.
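    As a rough illustration of the interaction loop this abstract describes, the sketch below maps an absolute cursor position to a tactile pulse that grows stronger as the cursor nears a data point. The `DataPoint` type, the `tactile_intensity` heuristic, and the 0..1 intensity scale are assumptions for illustration, not the paper's actual hardware API.

    ```python
    # Minimal sketch: an absolute-position input device drives a cursor over an
    # audio graph, and a tactile pulse fires when the cursor nears a data point.
    # All names and the feedback heuristic here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DataPoint:
        x: float
        y: float

    def tactile_intensity(cursor_x: float, cursor_y: float,
                          points: list[DataPoint], radius: float = 0.05) -> float:
        """Return a 0..1 pulse strength: strongest when the cursor sits on a
        point, zero once it is farther than `radius` away, so the user can
        'feel' the data through proprioceptive exploration."""
        nearest = min(((cursor_x - p.x) ** 2 + (cursor_y - p.y) ** 2) ** 0.5
                      for p in points)
        return max(0.0, 1.0 - nearest / radius)

    # Example: a cursor close to (0.5, 0.5) gets a strong pulse.
    points = [DataPoint(0.1, 0.2), DataPoint(0.5, 0.5), DataPoint(0.9, 0.7)]
    print(tactile_intensity(0.49, 0.51, points))  # ~0.72 -> drive tactile display
    ```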

    SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer

    Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but navigating an environment with limited visual attention to spare can also be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, which combines TTS with automatic content recognition and document presentation control, allowing users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.
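    A minimal sketch of the reading loop the abstract outlines: speak text blocks aloud, and interrupt with a notification when important visual content is reached. The `Block` model and the `speak`/`notify` hooks are hypothetical stand-ins, not SeeReader's actual API.

    ```python
    # Hedged sketch of a SeeReader-style pipeline: read text aloud, alert the
    # user when a block of important visual content (figure, photo) comes up.
    from dataclasses import dataclass

    @dataclass
    class Block:
        kind: str      # "text" or "visual" (assumed block types)
        content: str   # text to speak, or a caption for visual blocks

    def speak(text: str) -> None:
        print(f"[TTS] {text}")          # stand-in for a real text-to-speech call

    def notify(description: str) -> None:
        print(f"[ALERT] visual content: {description}")  # e.g., earcon/vibration

    def read_document(blocks: list[Block]) -> None:
        for block in blocks:
            if block.kind == "text":
                speak(block.content)
            else:
                notify(block.content)   # user glances down only when needed

    read_document([
        Block("text", "Figure 2 shows the system architecture."),
        Block("visual", "Figure 2: architecture diagram"),
    ])
    ```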

    Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors

    Visual 'point-and-click' interaction artifacts such as the mouse and touchpad are tangible input modalities that are essential for sighted users to interact conveniently with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are instead limited to interacting with computers through a sequentially narrating screen-reader assistive technology coupled to the keyboard. As a consequence, blind users generally require significantly more time and effort than their sighted peers to complete even simple application tasks (e.g., applying a style to text in a word processor) using the keyboard alone, whereas sighted users can accomplish the same tasks effortlessly with a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users, too, can draw the benefits of simple and efficient access from these modalities. Specifically, with word processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical 'Feature Menu' that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently used application commands by leveraging a data-driven prediction model that determines which commands the user is most likely to access next, given the current 'local' screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse showed that the latter significantly reduced both the task-completion times and user effort (i.e., number of user actions) for different word-processing activities.
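    The sketch below illustrates the two ideas in this abstract under stated assumptions: a hierarchical 'Feature Menu' navigated by scroll and click, and a next-command predictor. The menu contents and the simple bigram model are placeholders; the paper describes a data-driven model over the local screen-reader context, which need not be a bigram model.

    ```python
    # Sketch of an NVMouse-style Feature Menu plus a toy next-command predictor.
    from collections import Counter, defaultdict

    # Scroll cycles through the items at the current level; click descends.
    FEATURE_MENU = {
        "Home": ["Bold", "Italic", "Styles"],
        "Insert": ["Table", "Picture", "Link"],
    }

    class CommandPredictor:
        """Predict the likely next command from bigram counts of past usage
        (an illustrative stand-in for the paper's data-driven model)."""
        def __init__(self) -> None:
            self.bigrams: defaultdict[str, Counter] = defaultdict(Counter)
            self.last: str | None = None

        def record(self, command: str) -> None:
            if self.last is not None:
                self.bigrams[self.last][command] += 1
            self.last = command

        def predict(self) -> str | None:
            if self.last is None or not self.bigrams[self.last]:
                return None
            return self.bigrams[self.last].most_common(1)[0][0]

    predictor = CommandPredictor()
    for cmd in ["Bold", "Italic", "Bold", "Italic"]:
        predictor.record(cmd)
    print(list(FEATURE_MENU))   # top level the user scrolls through first
    print(predictor.predict())  # "Bold" -> promoted to the front of the menu
    ```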

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research showing where sound is best added to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. Many of the problems with current graphical widgets are difficult to correct by adding more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added to them. It describes in detail the usability problems of these widgets and then the non-speech sounds designed to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent, consistent sounds.
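    As one way to picture a sonically-enhanced widget, the sketch below attaches earcons (short structured tones) to a button's interaction events, so a slip-off that would otherwise go unnoticed produces a distinct sound. The event names and the `play_earcon` stand-in are assumptions for illustration, not the paper's design.

    ```python
    # Illustrative sonically-enhanced button: consistent earcons are attached
    # to interaction events so designers need not be sound experts.
    EARCONS = {
        "press":   (440, 0.08),   # (frequency in Hz, duration in s) -- assumed
        "release": (660, 0.08),
        "error":   (220, 0.25),
    }

    def play_earcon(name: str) -> None:
        freq, dur = EARCONS[name]
        print(f"[earcon] {name}: {freq} Hz for {dur}s")  # stand-in for audio out

    class SonicButton:
        """A button whose usability problems (e.g., slip-offs going unnoticed)
        are addressed by attaching earcons to its interaction events."""
        def __init__(self, label: str) -> None:
            self.label = label

        def press(self) -> None:
            play_earcon("press")

        def release(self, on_target: bool) -> None:
            # A distinct sound when the pointer slipped off tells the user the
            # click did not register -- a classic graphical-widget problem.
            play_earcon("release" if on_target else "error")

    b = SonicButton("OK")
    b.press()
    b.release(on_target=False)   # user hears the error earcon, not silence
    ```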

    Multimodal Accessibility of Documents


    Rotate-and-Press: A Non-Visual Alternative to Point-and-Click

    Most computer applications present visually rich and dense graphical user interfaces (GUIs) that are primarily tailored for easy and efficient sighted interaction using a combination of two default input modalities, namely the keyboard and the mouse/touchpad. However, blind screen-reader users rely predominantly on the keyboard alone, and therefore struggle to interact with these applications, since it is both arduous and tedious to perform visual 'point-and-click' tasks, such as accessing the various application commands/features, using only the keyboard shortcuts supported by screen readers. In this paper, we investigate the suitability of a 'rotate-and-press' input modality as an effective non-visual substitute for the visual mouse for interacting with computer applications, with word processing applications serving as the representative case study. To this end, we designed and developed bTunes, an add-on for Microsoft Word that customizes an off-the-shelf Dial input device so that it serves as a surrogate mouse for blind screen-reader users, giving quick access to various application commands and features through a set of simple rotate and press gestures supported by the Dial. With bTunes, blind users too can enjoy the benefits of two input modalities, as their sighted counterparts do. A user study with 15 blind participants revealed that bTunes significantly reduced both the time and the number of user actions for representative tasks in a word processing application, by as much as 65.1% and 36.09% respectively. Participants also reported no issues switching between the keyboard and the Dial, and gave bTunes a high usability rating (84.66 average SUS score).
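    A minimal sketch of the 'rotate-and-press' pattern this abstract describes: rotation steps through a list of commands (each announced by the screen reader) and a press activates the focused one. The `RotateAndPressMenu` class and command names are placeholders; the real Dial event plumbing and Word's command API are omitted.

    ```python
    # Toy rotate-and-press command picker in the spirit of bTunes.
    class RotateAndPressMenu:
        def __init__(self, commands: list[str]) -> None:
            self.commands = commands
            self.index = 0

        def rotate(self, steps: int) -> str:
            """Positive steps = clockwise; wrap around the command list and
            return the newly focused command (to be announced aloud)."""
            self.index = (self.index + steps) % len(self.commands)
            return self.commands[self.index]

        def press(self) -> str:
            return f"activate {self.commands[self.index]}"  # invoke the command

    menu = RotateAndPressMenu(["Bold", "Italic", "Underline", "Styles"])
    print(menu.rotate(+2))   # "Underline" announced by the screen reader
    print(menu.press())      # "activate Underline"
    ```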