
    Sonically-enhanced widgets: comments on Brewster and Clarke, ICAD 1997

    This paper presents a review of the research surrounding the paper “The Design and Evaluation of a Sonically Enhanced Tool Palette” by Brewster and Clarke from ICAD 1997. A historical perspective is given, followed by a discussion of how this work has fed into current developments in the area.

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research showing where sound is best added to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets, and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added: it details the usability problems of each widget and then the non-speech sounds used to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.
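
    The abstract above describes a design approach rather than an implementation, but the core idea, packaging earcons with widget interaction events, can be sketched. The following Python fragment is a minimal, hypothetical illustration; the Earcon structure, the event names and the play stub are assumptions made for this sketch, not the paper's actual design.

        from dataclasses import dataclass

        @dataclass
        class Earcon:
            """A structured non-speech sound: a sequence of (pitch_hz, duration_s) notes."""
            notes: list

        def play(earcon):
            # Stand-in for a real synthesiser; here we only log the notes.
            for pitch, dur in earcon.notes:
                print(f"tone {pitch} Hz for {dur} s")

        class SonicButton:
            """A hypothetical button widget with earcons attached to its events."""
            # One earcon family, varied in rhythm and pitch, keeps the widget set coherent.
            PRESS   = Earcon([(660, 0.05)])               # short tone: press registered
            RELEASE = Earcon([(660, 0.05), (880, 0.05)])  # rising pair: selection completed
            SLIP    = Earcon([(220, 0.20)])               # long low tone: pointer slipped off

            def __init__(self, label):
                self.label = label
                self.armed = False

            def pointer_down(self):
                self.armed = True
                play(self.PRESS)

            def pointer_up(self, still_over_widget):
                if self.armed and still_over_widget:
                    play(self.RELEASE)  # audio confirms the action happened
                elif self.armed:
                    play(self.SLIP)     # salient error feedback the graphics alone lack
                self.armed = False

        button = SonicButton("Save")
        button.pointer_down()
        button.pointer_up(still_over_widget=False)  # simulates a slip-off error

    Because the sounds live inside the widget, an interface assembled from such widgets would inherit coherent, consistent audio without its designer being a sound expert, which is the point the abstract makes.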

    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it; one cause of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus, along with significantly reduced error-recovery times. A significantly larger number of errors were also corrected with sound.
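
    As a rough sketch of the slip-off handling the experiment targets (not the authors' implementation), the hypothetical function below rejects a selection when press and release land on different menu items and plays a salient error cue; all names are illustrative.

        def menu_select(items, pressed_index, released_index, play_slip):
            """Return the selected item, or None on a slip-off, with audible feedback."""
            if pressed_index != released_index:
                play_slip()   # a long, hard-to-miss tone instead of near-silence
                return None   # nothing selected; the user can recover immediately
            return items[released_index]

        items = ["Open", "Save", "Close"]
        print(menu_select(items, 1, 1, play_slip=lambda: print("slip!")))  # -> Save
        print(menu_select(items, 1, 2, play_slip=lambda: print("slip!")))  # -> slip!, None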

    Sonification System of Maps for Blind


    The design and evaluation of non-visual information systems for blind users

    This research was motivated by the sudden increase of hypermedia information (such as that found on CD-ROMs and on the World Wide Web), which was not initially accessible to blind people although it offered significant advantages over traditional braille and audiotape information. Existing non-visual information systems for blind people had very different designs and functionality, but none of them provided what user requirements studies showed was needed: an easy-to-use non-visual interface to hypermedia material with a range of input devices for blind students. Furthermore, there was no single suitable design and evaluation methodology for the development of non-visual information systems. The aims of this research were therefore: (1) to develop a generic, iterative design and evaluation methodology consisting of a number of techniques suitable for formative evaluation of non-visual interfaces; (2) to explore non-visual interaction possibilities for a multimodal hypermedia browser for blind students based on user requirements; and (3) to apply the evaluation methodology to non-visual information systems at different stages of their development. The methodology developed and recommended consists of a range of complementary design and evaluation techniques, and it successfully supported the systematic development of prototype non-visual interfaces for blind users by identifying usability problems and developing solutions. Three prototype interfaces are described: the design and evaluation of two versions of a hypermedia browser, and an evaluation of a digital talking book. Recommendations from the evaluations for an effective non-visual interface include the provision of a consistent multimodal interface, non-speech sounds for information and feedback, a range of simple and consistent commands for reading, navigation, orientation and output control, and support features. This research will inform developers of similar systems for blind users; in addition, the methodology and design ideas are considered sufficiently generic, yet sufficiently detailed, that the findings could be applied successfully to the development of non-visual interfaces of any type.

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working-memory load, or the prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain the users' coordination of speech and touch input. Among the abundant findings from this research, the following are the most important in guiding system design: (1) multimodal input is likely to be used when it is available; (2) users select input modalities based on the type of task undertaken, preferring touch input for navigation operations but speech input for non-navigation operations; (3) when errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction; (4) despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) modality switching becomes more prevalent when lower working-memory and attentional resources are required for the performance of other concurrent tasks; (2) higher error rates increase modality switching, but only under duress; (3) training order affects modality usage, with teaching a modality first rather than second increasing its use in users' task performance. In addition to uncovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by (1) presenting a design for an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is as one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.
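
    One way to picture the interaction grammar the study mentions, with speech and touch mapped onto a single shared command set, is the hypothetical Python sketch below; the command names and gesture vocabulary are assumptions for illustration, not the study's actual grammar.

        # Both modalities normalise to one command set, so downstream browser
        # logic never needs to know which modality produced a command.
        SPEECH_COMMANDS = {"next": "NAV_NEXT", "previous": "NAV_PREV", "read": "READ_ITEM"}
        TOUCH_GESTURES  = {"swipe_left": "NAV_NEXT", "swipe_right": "NAV_PREV", "double_tap": "READ_ITEM"}

        def interpret(event):
            """Map a speech or touch event to a modality-independent command."""
            if event["modality"] == "speech":
                return SPEECH_COMMANDS.get(event["utterance"])
            if event["modality"] == "touch":
                return TOUCH_GESTURES.get(event["gesture"])
            return None

        # Mirrors finding (2): touch for navigation, speech for non-navigation.
        print(interpret({"modality": "touch", "gesture": "swipe_left"}))  # NAV_NEXT
        print(interpret({"modality": "speech", "utterance": "read"}))     # READ_ITEM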

    Designing user experiences: a game engine for the blind

    Video games have attracted ever-increasing interest from society since their inception in the 1970s. This form of computer entertainment may let the player have a great time with family and friends, or it may provide immersion into a story full of detail and emotional content. Before the end user plays a video game, a huge effort is invested across many disciplines: screenwriting, scenery design, graphical design, programming, optimization and marketing are but a few examples. This work is done by game studios, where teams of professionals from different backgrounds join forces in the creation of the video game. From the perspective of Human-Computer Interaction (HCI), which studies how people interact with computers to complete tasks, a game developer can be regarded as a user whose task is to create the logic of a video game using a computer. One of the main foundations of HCI is that an in-depth understanding of the user's needs and preferences is vital for creating a usable piece of technology. This point is important because a single piece of technology (in this case, the set of tools used by a game developer) may, and should be designed to, be used on the same team by users with different knowledge, abilities and capabilities. Embracing this diversity of users' functional capabilities is the core foundation of accessibility, which is closely related to and studied within the discipline of HCI. The driving force behind this research is a set of questions that arose when considering game developers: could someone develop a video game while fully or partially blind? Would it be possible for these users to be part of a game development team? What should be taken into account to cover their particular needs and preferences so that they could perform this task comfortably and productively? The goal of this work is to propose a solution that can ensure the inclusion of fully or partially blind users in the context of computer game development. To do this, a User-Centered Design methodology has been followed. This approach is well suited here, as it starts with the people being designed for and ends with new solutions tailor-made to suit their needs. First, previously proposed solutions to this problem and related work were analyzed. Second, an exploratory study was performed to learn how the target users should be able to interact with a computer when developing games, and design insights were drawn from both the state-of-the-art analysis and the study results. Next, a solution was proposed based on the design insights, and a prototype was implemented. The solution was evaluated against accessibility guidelines. It was concluded that the proposed solution is accessible to visually impaired users.

    Technical Document Accessibility

    Electrical and Electronic Engineering

    Voice and Touch Diagrams (VATagrams): Diagrams for the Visually Impaired

    If a picture is worth a thousand words, would you rather read two pages of text or simply view the image? Most would choose to view the image; however, for the visually impaired this isn't always an option. Diagrams assist people in visualizing relationships between objects, and most often act as a source for quickly referencing information about those relationships. Diagrams are highly visual, and as such there are few tools to support diagram creation by visually impaired individuals. To allow the visually impaired to share the same advantages in school and work as their sighted colleagues, an accessible diagram tool is needed. A suitable tool for the visually impaired to create diagrams should allow these individuals to: 1. easily define the type of relationship-based diagram to be created, 2. easily create the components of a relationship-based diagram, 3. easily modify the components of a relationship-based diagram, 4. quickly understand the structure of a relationship-based diagram, 5. create a visual representation which can be used by the sighted, and 6. easily access reference points for tracking diagram components. To do this, a series of prototypes was developed that allows visually impaired users to read, create, modify and share relationship-based diagrams using sound and gestural touches. This was accomplished by creating a series of applications that run on an iPad using an overlay that restricts the areas in which a user can perform gestures. These prototypes were tested for usability using measures of efficiency, effectiveness and satisfaction, with visually impaired, blindfolded and sighted participants. The results of the evaluation indicate that the prototypes contain the main building blocks needed to complete a fully functioning iPad application.
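
    The overlay-plus-gesture mechanism might look roughly like the hypothetical sketch below, in which a fixed grid of cells supplies the stable reference points listed in requirement 6; the grid geometry and method names are assumptions for illustration, not the prototypes' actual design.

        CELL = 100  # overlay cell size in points; an assumed value

        class VATDiagram:
            """Hypothetical relationship diagram addressed through an overlay grid."""
            def __init__(self):
                self.nodes = {}   # (row, col) -> node label
                self.edges = []   # ((row, col), (row, col)) relationships

            def cell_at(self, x, y):
                # The physical overlay snaps every gesture to a grid cell,
                # giving non-visual users repeatable reference points.
                return (int(y // CELL), int(x // CELL))

            def add_node(self, x, y, label):
                self.nodes[self.cell_at(x, y)] = label

            def tap(self, x, y, speak):
                """A single tap reads back the node under the finger, if any."""
                node = self.nodes.get(self.cell_at(x, y))
                speak(node if node else "empty cell")

        d = VATDiagram()
        d.add_node(50, 50, "Customer")
        d.add_node(250, 50, "Order")
        d.tap(250, 50, speak=print)  # -> Order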

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntax highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, creating barriers for programmers who are blind but who nevertheless use IDEs. This dissertation focuses on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capability to convey visually based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially where information is visually structured. However, little is known about the usability of such techniques in software development. We therefore investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The contributions of this dissertation are split into two parts. The first part presents our prior work investigating the major challenges in software development faced by non-visual programmers, specifically code-navigation difficulties; it also discusses areas of improvement where additional features could make the programming environment more accessible to non-visual programmers. The second part focuses on studies evaluating the usability and efficacy of audio-based techniques for conveying the structure of a programming codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could adequately support non-visual programmers navigating lengthy codebases. In Part II we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine them using an audio-based prototype designed to control audio timing, locations, and methods of interaction. Based on this evaluation, a set of design guidelines is provided suggesting the inclusion of auditory feedback in the programming environment to improve code-structure readability and understandability for non-visual programmers.
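
    As one concrete, purely illustrative reading of conveying code structure through audio, the sketch below maps each line's nesting depth to a pitch, so a non-visual programmer cursoring through a file would hear blocks deepen and unwind; the pitch mapping and tone stub are assumptions, not the dissertation's evaluated design.

        BASE_HZ, STEP_HZ = 220, 110  # deeper nesting -> higher pitch (assumed mapping)

        def play_tone(hz):
            print(f"tone at {hz} Hz")  # stand-in for a real audio call

        def sonify_structure(source, indent_width=4):
            """Emit one tone per non-blank line, pitched by indentation depth."""
            for line in source.splitlines():
                if not line.strip():
                    continue  # blank lines stay silent
                depth = (len(line) - len(line.lstrip())) // indent_width
                play_tone(BASE_HZ + STEP_HZ * depth)

        sonify_structure(
            "def f():\n"
            "    for i in range(3):\n"
            "        print(i)\n"
        )  # -> tones at 220, 330, 440 Hz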