
    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
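    The interaction described in this abstract can be pictured with a minimal sketch. This is not the paper's implementation; it only illustrates, under assumed pitch ranges and a linear mapping, how the data value under an absolute-position pointer might be turned into an audio pitch plus a tactile tick for point-and-click style exploration.

```python
# Minimal sketch (not the paper's code): map the y-value under the pointer's
# x-position to an audible pitch, plus a haptic tick at each data point.
# The frequency range and the linear mapping are illustrative assumptions.

def y_to_frequency(y, y_min, y_max, f_min=220.0, f_max=880.0):
    """Map a data value linearly onto a pitch range (Hz)."""
    if y_max == y_min:
        return (f_min + f_max) / 2.0
    t = (y - y_min) / (y_max - y_min)
    return f_min + t * (f_max - f_min)

def probe(data, x, y_min, y_max):
    """Return the audio/tactile cue for the data point under the pointer."""
    y = data[x]
    return {
        "frequency_hz": y_to_frequency(y, y_min, y_max),
        "tactile_pulse": True,  # fire a short haptic tick at each data point
    }

if __name__ == "__main__":
    series = [3, 7, 2, 9, 5]
    print(probe(series, x=3, y_min=min(series), y_max=max(series)))
```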

    Using Sonic Enhancement to Augment Non-Visual Tabular Navigation

    More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), or the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances. Therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the results show that the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057. The results show that the logarithmically transformed TTTs were not significantly affected by the interaction of both methods, F(1,15) = 1.381, p = 0.258. These results suggest that some confusion may arise when both methods are employed simultaneously. The significant effect of tonal variation indicates that it actually increased the average TTT; in other words, the presence of preceding tones increased task completion time on average. The marginally significant effect of stereo spatialization decreased the average log(TTT) from 2.405 to 2.264.
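    The stereo spatialization technique can be sketched in a few lines. This is not the study's implementation; it only shows, assuming constant-power panning, how each of the six table columns could be given a fixed stereo position so the synthesized voice for a cell is heard toward its column.

```python
# Illustrative sketch (not the study's code): spread six table columns across
# the stereo field. Constant-power panning is an assumption here.
import math

def column_pan(col, n_cols=6):
    """Return (left_gain, right_gain) for a 0-indexed column."""
    pos = col / (n_cols - 1)          # 0.0 = hard left, 1.0 = hard right
    angle = pos * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

if __name__ == "__main__":
    for c in range(6):
        left, right = column_pan(c)
        print(f"column {c}: L={left:.2f} R={right:.2f}")
```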

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntax highlighting and other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind but nevertheless use IDEs. This dissertation focuses on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capability to convey visually-based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially where information is visually structured. However, little is known about the usability of such techniques in software development. Therefore, we investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The contributions of this dissertation are split into two major parts. The first part explains our prior work investigating the major challenges in software development faced by non-visual programmers, specifically code navigation difficulties. It also discusses areas of improvement where additional features could be developed to make the programming environment more accessible to non-visual programmers. The second part focuses on studies aimed at evaluating the usability and efficacy of audio-based techniques for conveying the structure of a programming codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could provide adequate support to assist non-visual programmers when navigating through lengthy codebases. In Part II, we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine these techniques using an audio-based prototype designed to control audio timing, locations, and methods of interaction. Based on this evaluation, a set of design guidelines is provided that suggests including auditory feedback in the programming environment to improve code structure readability and understandability for non-visual programmers.
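    One way to picture "conveying code structure in audio" is sketched below. This is not the dissertation's prototype; it assumes an indentation-based heuristic for structural depth and an arbitrary mapping from depth to rising MIDI notes, purely to illustrate how a per-line audio cue could be derived while a user arrows through source code.

```python
# Hedged sketch, not the dissertation's system: derive an audio cue for the
# structural depth of each source line. Depth heuristic and pitch steps are
# assumptions for illustration.

def nesting_depth(line, indent_width=4):
    """Estimate structural depth from leading whitespace."""
    expanded = line.expandtabs(indent_width)
    stripped = expanded.lstrip(" ")
    return (len(expanded) - len(stripped)) // indent_width

def depth_to_midi_note(depth, base_note=60, step=4):
    """Map depth 0, 1, 2, ... to rising MIDI note numbers."""
    return base_note + depth * step

if __name__ == "__main__":
    source = [
        "def f(x):",
        "    if x > 0:",
        "        return x",
        "    return -x",
    ]
    for line in source:
        d = nesting_depth(line)
        print(f"depth {d} -> MIDI note {depth_to_midi_note(d)}: {line}")
```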

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality rather than switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance. In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by (1) presenting a design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. Overall, this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.
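    The coordination patterns reported here can be instrumented with a very small dispatcher. The sketch below is not the study's system; the event shape and command names are assumptions, and it only shows how speech and touch events for a shared command set might be accepted through one interface while modality switches are counted.

```python
# Minimal sketch, not the study's browser: accept either speech or touch
# input for the same commands and record modality switches, the kind of
# measure a modality-coordination analysis relies on.

class MultimodalDispatcher:
    def __init__(self):
        self.last_modality = None
        self.switch_count = 0

    def handle(self, modality, command):
        """Record the event and count a switch when the modality changes."""
        if self.last_modality is not None and modality != self.last_modality:
            self.switch_count += 1
        self.last_modality = modality
        print(f"[{modality}] {command}")

if __name__ == "__main__":
    d = MultimodalDispatcher()
    d.handle("touch", "next item")        # navigation via touch
    d.handle("touch", "open item")
    d.handle("speech", "search weather")  # non-navigation via speech
    print("modality switches:", d.switch_count)
```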

    Synchronizing Audio and Haptic to Read Webpage

    Constantly emerging technologies present new interactive ways to convey information on the Web. New and enhanced website designs have gradually improved sighted users' understanding of the Web content but, on the other hand, create more obstacles for the visually impaired. The significant technological gap between assistive technology and the Web presents ongoing challenges to maintaining web accessibility, especially for disabled users. The limitations of current assistive technology in conveying non-textual information, including text attributes such as bold, underline, and italic, further restrict the visually impaired from acquiring a comprehensive understanding of Web content. This project addresses these issues by investigating the problems faced by the visually impaired when using current assistive technology. The significance of text attributes in supporting accessibility and improving understanding of Web content is also studied. For this purpose, several qualitative and quantitative data collection methods are adopted to test the hypotheses. The project also examines the relationship between multimodal technology using audio and haptic modalities and the mental model generated by the visually impaired while accessing webpages. The findings are then used as a framework to develop a system that synchronizes audio and haptic feedback to read webpages and represent text attributes to visually impaired users. From the prototype built, pilot testing and user testing are conducted to evaluate the system. The results and recommendations are shared at the end of the project for future enhancement.
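    The idea of representing text attributes non-visually can be sketched as a small annotator. This is not the project's prototype; the tag-to-cue mapping is an assumption, and the standard-library HTML parser is used only to keep the example self-contained. It shows how bold, italic, and underlined runs of webpage text might be tagged with an audio or haptic cue descriptor before rendering.

```python
# Illustrative sketch, not the project's system: attach an audio/haptic cue
# descriptor to bold, italic, and underlined text runs in a webpage.
from html.parser import HTMLParser

CUES = {
    "b": "low haptic buzz", "strong": "low haptic buzz",
    "i": "rising pitch inflection", "em": "rising pitch inflection",
    "u": "short vibration pulse",
}

class AttributeAnnotator(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []       # currently open attribute tags
        self.annotated = []   # (text, cue or None)

    def handle_starttag(self, tag, attrs):
        if tag in CUES:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        cue = CUES.get(self.stack[-1]) if self.stack else None
        if data.strip():
            self.annotated.append((data.strip(), cue))

if __name__ == "__main__":
    p = AttributeAnnotator()
    p.feed("<p>Submit the form <b>before Friday</b>, or it is <i>ignored</i>.</p>")
    for text, cue in p.annotated:
        print(f"{text!r} -> {cue or 'plain speech'}")
```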

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but with significant usability problems in a multitasking environment. Screen reader users must infer the state of on-going tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe problems encountered and techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations on Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6): 1. Faster, accurate access to speech utterances through concurrent speech streams. 2. Better awareness of peripheral information via concurrent speech and sound streams. 3. Increased information bandwidth through concurrent streams. 4. More efficient information seeking enabled by ubiquitous tools for browsing and searching. 5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface. 6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
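    The script-based mapping from GUI components to task definitions can be pictured with a small data sketch. This is not Clique's actual script format; the class names, stream labels, and the example mail-client mapping are assumptions, intended only to show the kind of information such a script would hold so the display can narrate tasks rather than widgets.

```python
# Hedged sketch, not Clique's real scripts: a task-oriented mapping from GUI
# components to named tasks and to the speech stream that narrates them.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str                                     # task the user hears about
    speech_stream: str                            # concurrent stream that narrates it
    widgets: list = field(default_factory=list)   # GUI components backing the task

@dataclass
class AppScript:
    application: str
    tasks: list

email_script = AppScript(
    application="mail client",
    tasks=[
        Task("read message list", "primary", ["message table"]),
        Task("write message", "primary", ["to field", "subject field", "body editor"]),
        Task("check new mail", "peripheral", ["status bar"]),
    ],
)

if __name__ == "__main__":
    for task in email_script.tasks:
        print(f"{task.speech_stream}: {task.name} <- {', '.join(task.widgets)}")
```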

    Design and evaluation of auditory spatial cues for decision making within a game environment for persons with visual impairments

    An audio platform game was created and evaluated in order to answer the question of whether an audio game could be designed that effectively conveys the spatial information necessary for persons with visual impairments to successfully navigate the game levels and respond to audio cues in time to avoid obstacles. The game used several types of audio cues (sounds and speech) to convey the spatial setup (map) of the game world. Most audio-only players seemed to be able to create a workable mental map from the game's sound cues alone, pointing to potential for the further development of similar audio games for persons with visual impairments. The research also investigated the navigational strategies used by persons with visual impairments and the accuracy of the participants' mental maps as a consequence of their navigational strategy. A comparison of the maps created by visually impaired participants with those created by sighted participants playing the game with and without graphics showed no statistically significant difference in map accuracy between groups. However, there was a marked difference in the number of invented objects between the sighted audio-only group and the other groups, which could serve as an area for future research.
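    A spatial audio cue of the kind described can be sketched from first principles. This is not the game's engine code; the linear attenuation model and the hearing radius are assumptions, and the sketch only shows how an obstacle's position relative to the player could be turned into a stereo pan and gain.

```python
# Minimal sketch, not the game's implementation: derive a stereo pan and gain
# for an obstacle from its position relative to the player.
import math

def spatial_cue(player_x, player_y, obj_x, obj_y, hearing_radius=10.0):
    dx, dy = obj_x - player_x, obj_y - player_y
    distance = math.hypot(dx, dy)
    if distance > hearing_radius:
        return None                                   # too far away to be heard
    pan = max(-1.0, min(1.0, dx / hearing_radius))    # -1 = left, +1 = right
    gain = 1.0 - distance / hearing_radius            # louder when closer
    return {"pan": pan, "gain": gain}

if __name__ == "__main__":
    print(spatial_cue(0, 0, 3, 4))    # obstacle up and to the right
    print(spatial_cue(0, 0, -12, 0))  # out of hearing range -> None
```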

    Designing user experiences: a game engine for the blind

    Video games have attracted ever-increasing interest from society since their inception in the 1970s. This form of computer entertainment may let the player have a great time with family and friends, or it may provide immersion into a story full of details and emotional content. Before the end user plays a video game, a huge effort is made across many disciplines: screenwriting, scenery design, graphical design, programming, optimization, and marketing are but a few examples. This work is done by game studios, where teams of professionals from different backgrounds join forces in the creation of the video game. From the perspective of Human-Computer Interaction (HCI), which studies how people interact with computers to complete tasks, a game developer can be regarded as a user whose task is to create the logic of a video game using a computer. One of the main foundations of HCI is that an in-depth understanding of the user's needs and preferences is vital for creating a usable piece of technology. This point is important because a single piece of technology (in this case, the set of tools used by a game developer) may be used, and should be designed to be used, on the same team by users with different knowledge, abilities, and capabilities. Embracing this diversity of users' functional capabilities is the core foundation of accessibility, which is closely related to and studied within the discipline of HCI. The driving force behind this research is a question that arose when considering game developers: could someone develop a video game while fully or partially blind? Would it be possible for these users to be part of a game development team? What should be taken into account to cover their particular needs and preferences so that they could perform this task comfortably and productively? The goal of this work is to propose a possible solution that ensures the inclusion of fully or partially blind users in the context of computer game development. To do this, a User Centered Design methodology has been followed. This approach is ideal in this case, as it starts by involving the people being designed for and ends with new solutions tailor-made to suit their needs. First, previously designed solutions for this problem and related works have been analyzed. Second, an exploratory study has been performed to learn how the target user should be able to interact with a computer when developing games, and design insights have been drawn from both the state-of-the-art analysis and the study results. Next, a solution has been proposed based on the design insights, and a prototype has been implemented. The solution has been evaluated against accessibility guidelines. It is finally concluded that the proposed solution is accessible to visually impaired users.

    Case study of information searching experiences of high school students with visual impairments in Taiwan

