An exploratory study of screen-reader users navigating the web
Researchers have learned much about how sighted individuals seek information on Web sites - for example, users follow "information scent" as they move from page to page, and individual differences may impact successful information seeking on the Web. While it is possible that individuals with disabilities, especially those with severe visual impairments, perform information-seeking activities in a similar manner, little is known about how individuals who use screen readers actually seek information on the Web. In this study, we used both qualitative and quantitative measures to investigate the Web navigation techniques of four screen-reader users and how a user's experience affects these navigation techniques and his or her ability to successfully complete an information-finding task. We compared metrics for between-page and within-page navigation to studies of sighted users. We also considered how a Web site's compliance with Section 508 guidelines affects the overall information-finding experience of a visually impaired individual. We discovered that among the four individuals in this study, user experience was not necessarily indicative of a successful information-finding experience. As individuals, the participants' navigation techniques varied widely; as a group, they generally searched more frequently and used the back button less frequently than has been reported for sighted individuals. Screen-reader users in this study followed a flimsier, more linear navigation style and generally used scrolling actions rather than searching actions. When using a Web site with a Section 508 compliant home page, the screen-reader users in this study completed information-finding tasks significantly more quickly, used significantly fewer actions, and reported a more satisfying information-finding experience. They were also more successful at finding the information goal and encountered fewer impasses.
Using both quantitative and qualitative measures was critical in this study. The quantitative metrics allowed us to compare values, and the qualitative data provided additional insight into individual differences as well as a deeper understanding of the quantitative data. The information from this study contributes to the growing body of research knowledge about screen-reader users. It also contributes a new understanding of screen-reader users that can be used by the worldwide community of Web developers, designers, and users.
Multimedia broadcast and internet satellite system design and user trial results
The EU-funded project System for Advanced Multimedia Broadcast and IT Services (SAMBITS) has created an enhanced, synchronised multimedia terminal for merging satellite broadcast and internet telecommunication services in a way that efficiently combines the large bandwidth of the broadcast channel with the interactivity of the internet. This paper proposes a novel broadcast and internet service concept, illustrates this concept with two service scenarios, and develops a system architecture to demonstrate the range of key benefits provided by these new technologies. It then describes the interactive multimedia terminal that was used for consuming this new service concept. Finally, the results of the user trials on the terminal are presented and discussed.
Using Sonic Enhancement to Augment Non-Visual Tabular Navigation
More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested.
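The abstract does not specify how columns are mapped to stereo positions. As a minimal sketch, assuming a linear left-to-right pan across the six columns and constant-power stereo gains (both assumptions, not details from the study), the mapping might look like:

```python
import math

def column_pan(col: int, n_cols: int = 6) -> float:
    """Map a 0-based table column to a stereo pan position in
    [-1.0, 1.0], so each column gets a distinct spatial location."""
    if n_cols < 2:
        return 0.0
    return -1.0 + 2.0 * col / (n_cols - 1)

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power left/right channel gains for a pan value,
    keeping perceived loudness steady as the voice moves."""
    theta = (pan + 1.0) * math.pi / 4.0  # sweep 0..pi/2
    return math.cos(theta), math.sin(theta)

# Column 0 is hard left, column 5 hard right:
print(column_pan(0), column_pan(5))  # -1.0 1.0
```

Constant-power panning (cosine/sine gains) is a standard way to keep the synthesized voice at a steady loudness while it moves across the stereo field.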
Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances; therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the results show that the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057, and were not significantly affected by the interaction of the two methods, F(1,15) = 1.381, p = 0.258. These results suggest that employing both methods simultaneously may cause some confusion for the subject. The significant effect of tonal variation indicates that the preceding tones actually increased the average TTT, i.e. task completion time. The marginally significant effect of stereo spatialization decreased the average log(TTT) from 2.405 to 2.264.
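The transformation and test above can be sketched as follows. The data here are synthetic (the study's raw TTTs are not published in the abstract), and the sketch exploits the fact that for a within-subject factor with only two levels, the main-effect F equals the squared t from a paired t-test on the per-subject marginal means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 16  # subjects

# Hypothetical TTT values (seconds) per subject in the four conditions
# (tonal variation on/off x stereo panning on/off); illustrative only.
ttt = {
    ("tone", "pan"):     rng.lognormal(2.30, 0.4, n),
    ("tone", "mono"):    rng.lognormal(2.45, 0.4, n),
    ("no_tone", "pan"):  rng.lognormal(2.20, 0.4, n),
    ("no_tone", "mono"): rng.lognormal(2.35, 0.4, n),
}

# Natural-log transform to better meet the ANOVA assumptions of
# normality and equality of variances.
log_ttt = {k: np.log(v) for k, v in ttt.items()}

# Main effect of tonal variation: paired t-test on each subject's
# marginal means across the panning conditions; F(1, n-1) = t^2.
tone_mean = (log_ttt[("tone", "pan")] + log_ttt[("tone", "mono")]) / 2
no_tone_mean = (log_ttt[("no_tone", "pan")] + log_ttt[("no_tone", "mono")]) / 2
t, p = stats.ttest_rel(tone_mean, no_tone_mean)
print(f"Tonal variation: F(1,{n - 1}) = {t**2:.3f}, p = {p:.3f}")
```

A full repeated-measures two-factor ANOVA (including the interaction term) would normally be run with a dedicated routine such as statsmodels' `AnovaRM`; the paired-t equivalence shown here holds only for two-level factors.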
Accessibility and adaptability of learning objects: responding to metadata, learning patterns and profiles of needs and preferences
The case for learning patterns as a design method for accessible and adaptable learning objects is explored. Patterns and templates for the design of learning objects can be derived from successful existing learning resources, and these patterns can then be reused in the design of new learning objects. We argue that by attending to criteria for reuse in the definition of these patterns and in the subsequent design of new learning objects, those new resources can themselves be reusable and also adaptable to different learning contexts. Finally, if the patterns identified can be implemented as templates for standard authoring tools, the design of effective, reusable and adaptable resources can be made available to those with limited skills in multimedia authoring and result in learning resources that are more widely accessible.
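As a rough illustration of turning a pattern into a template that an authoring tool could instantiate, the sketch below is hypothetical: the field names are my own, loosely inspired by IEEE LOM metadata, and are not taken from this work.

```python
import copy

# Hypothetical learning-object template; field names are illustrative,
# loosely inspired by IEEE LOM metadata, not taken from the paper.
LEARNING_OBJECT_TEMPLATE = {
    "title": "",
    "educational": {
        "learning_resource_type": "",   # e.g. "exercise", "simulation"
        "typical_learning_time": "",    # ISO 8601 duration, e.g. "PT15M"
    },
    "accessibility": {
        "has_text_alternative": False,  # alt text for all visual media
        "has_audio_description": False,
        "adaptable_presentation": [],   # e.g. ["font-size", "contrast"]
    },
    "reuse": {
        "context_free": True,           # avoids course-specific references
        "license": "",
    },
}

def instantiate(template: dict, **fields) -> dict:
    """Create a new learning object from a template, overriding
    top-level fields; deep-copies so the template stays pristine."""
    obj = copy.deepcopy(template)
    obj.update(fields)
    return obj
```

The point of the deep copy is exactly the reuse criterion the authors argue for: each new resource starts from the pattern without mutating it.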
EXPLORING THE STAGES OF INFORMATION SEEKING IN A CROSS-MODAL CONTEXT
Previous studies of visually impaired users' access to the web have focused on human-web interaction. This study explores the under-investigated area of cross-modal collaborative information seeking (CCIS): the challenges and opportunities that exist in supporting visually impaired (VI) users to take an effective part in collaborative web search tasks with sighted peers. We conducted an observational study to investigate the process with fourteen pairs of VI and sighted users in co-located and distributed settings. The study examined the effects of cross-modal collaborative interaction on the stages of the individual Information Seeking (IS) process. The findings showed that the different stages of the process were performed individually most of the time; however, some collaboration was observed in the results exploration and management stages. The accessibility challenges faced by VI users affected their individual and collaborative interaction and also enforced certain points of collaboration. The paper concludes with some recommendations towards improving the accessibility of cross-modal collaborative search.
Feeling what you hear: tactile feedback for navigation of audio graphs
Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute-position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
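One way such a point-and-click interaction could work, as a minimal sketch (the function and tolerance below are assumptions, not the authors' implementation): compare the absolute pointer position against the interpolated graph line, and fire the tactile actuator when the pointer falls inside a tolerance band around the line.

```python
import numpy as np

def tactile_feedback(pointer_x: float, pointer_y: float,
                     xs: np.ndarray, ys: np.ndarray,
                     tol: float = 0.05) -> bool:
    """Return True when the pointer is within `tol` of the data line,
    i.e. when the tactile actuator should fire.  xs must be sorted."""
    line_y = np.interp(pointer_x, xs, ys)  # graph value under the pointer
    return bool(abs(pointer_y - line_y) <= tol)

# Example with the line y = x:
xs, ys = np.array([0.0, 1.0]), np.array([0.0, 1.0])
print(tactile_feedback(0.5, 0.52, xs, ys))  # True: pointer on the line
print(tactile_feedback(0.5, 0.80, xs, ys))  # False: pointer off the line
```

With an absolute-position device, the pointer coordinates correspond directly to locations on the graph, so this check can run continuously as the user sweeps across the surface.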
The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users
Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntax highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind but who nevertheless use IDEs.
This dissertation is focused on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capabilities to convey visually based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially in places where information is visually structured. However, little is known about the usability of such techniques in software development. Therefore, we investigated whether audio-based techniques are capable of providing useful information about the code structure to assist non-visual programmers. The contributions of this dissertation are split into two major parts:
The first part of this dissertation explains our prior work that investigates the major challenges in software development faced by non-visual programmers, specifically code navigation difficulties. It also discusses areas of improvement where additional features could be developed in order to make the programming environment more accessible to non-visual programmers.
The second part of this dissertation focuses on studies aimed at evaluating the usability and efficacy of audio-based techniques for conveying the structure of the programming codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and different interaction techniques to determine whether these techniques could provide adequate support to assist non-visual programmers when navigating through lengthy codebases. In Part II, we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine these techniques using an audio-based prototype that was designed to control audio timing, locations, and methods of interaction. A set of design guidelines is provided, based on the evaluation described previously, suggesting the inclusion of auditory feedback in the programming environment to improve code structure readability and understandability for non-visual programmers.
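As a concrete illustration of one such structural audio cue (a hypothetical mapping, not the design evaluated in the dissertation), nesting depth could be mapped to pitch so that deeper code sounds higher as the cursor moves through it:

```python
def indent_depth(line: str, indent_width: int = 4) -> int:
    """Nesting depth inferred from leading spaces."""
    return (len(line) - len(line.lstrip(" "))) // indent_width

def depth_to_hz(depth: int, base_midi: int = 60, step: int = 4) -> float:
    """Map nesting depth to a tone: middle C (~261.6 Hz) at top level,
    a major third higher per nesting level (an illustrative choice)."""
    midi = base_midi + step * depth
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)

source = [
    "def f(x):",
    "    if x > 0:",
    "        return x",
]
for line in source:
    # e.g. depth 0 -> ~261.6 Hz, depth 1 -> ~329.6 Hz, depth 2 -> ~415.3 Hz
    print(indent_depth(line), round(depth_to_hz(indent_depth(line)), 1))
```

In a real editor integration the depth would come from the parser rather than from whitespace, but the sketch shows the core idea: a monotonic depth-to-pitch mapping gives the listener a continuous sense of where in the block structure the cursor currently is.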