178 research outputs found

    Personalizable edge services for Web accessibility

    The Web Content Accessibility Guidelines by the W3C (W3C Recommendation, May 1999. http://www.w3.org/TR/WCAG10/) provide several suggestions for Web designers on how to author Web pages so as to make them accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and to break down the digital barriers that prevent users with disabilities from actively participating in every aspect of society. Edge services chiefly exploit the advantages of personalized navigation, in which contents are tailored to different factors, such as the capabilities of the client's device, the communication systems and network conditions and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to provide efficient content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The hitherto dominant paradigm of communication on the WWW, with its simple request/response model, cannot efficiently address such requirements. It must therefore be augmented with new components that enhance the scalability, the performance and the ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond the traditional caching and content replication services.
    These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of page navigation (with insertion/emphasis of content related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (Pan), a set of edge services designed to improve the accessibility of Web pages, developed and deployed on top of a programmable intermediary framework. The characteristics and the location of the services, i.e., provided by intermediaries, as well as the personalization and the possibility of selecting multiple profiles, make Pan a platform that is especially suitable for accessing the Web seamlessly, also from mobile terminals.
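As an illustration of the kind of on-the-fly adaptation an edge service can perform on the HTTP data flow, here is a minimal sketch of a per-profile HTML rewriter. It is not Pan's actual implementation; the profile names and transforms are invented for the example.

```python
import re

def adapt_html(html: str, profile: str) -> str:
    """Toy edge-service transform: tailor a page to a user profile.
    Profile names are invented for this sketch."""
    if profile == "low_vision":
        # Inject a style that enlarges all text.
        return html.replace(
            "<head>", "<head><style>body{font-size:200%}</style>", 1)
    if profile == "low_bandwidth":
        # Replace each image with its alt text, saving bandwidth while
        # keeping the information the alt text carries.
        return re.sub(r'<img[^>]*alt="([^"]*)"[^>]*>', r'[image: \1]', html)
    return html

page = '<html><head></head><body><img src="a.png" alt="logo"></body></html>'
print(adapt_html(page, "low_bandwidth"))
```

In a real intermediary, a transform like this would run on the response body between origin server and client, selected by the active user profile.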

    Making Scientific and Technical Materials Pervasively Accessible

    This paper explores the question of what policies, standards and practices are desirable to ensure that hardware, software and publications in the sciences and associated disciplines are created from the outset to be accessible to people with disabilities. Insight into this question can be obtained by considering the unique accessibility challenges that these materials pose, including complexities of notation, language, and graphical representation. Having analyzed what sets this problem apart from broader issues of accessibility, the advantages and limitations of current international standards are reviewed, and contemporary developments in standards and policies are considered from a strategic perspective. These developments include the establishment of accessibility requirements for e-books and e-readers under the European Accessibility Act, the potential role of process-oriented accessibility standards such as ISO/IEC 30071-1:2019, and opportunities for enhancing the standards applicable to scientific materials via future revisions of the World Wide Web Consortium's Web Content Accessibility Guidelines (WCAG). The accessibility of scientific and technical content is ultimately supported by several interrelated human rights recognized in international disability rights law, which constitute a foundation for further evaluation and development of policies. It is argued that attaining pervasive accessibility in scientific and technical fields requires an unprecedented level of commitment and collaboration among educators, scientists, content and software producers, regulators, and people with disabilities.

    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
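The mapping from an absolute cursor position to audio and tactile cues can be sketched as follows; the pitch range and the slope-based pulse strength are illustrative assumptions, not the study's actual parameters.

```python
def sonify_point(values, x, f_min=220.0, f_max=880.0):
    """Map the data value under an absolute-position cursor to a pitch
    (higher value -> higher frequency) plus a tactile pulse strength
    proportional to the local slope, as a cue while exploring a graph."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                  # avoid division by zero
    freq = f_min + (values[x] - lo) / span * (f_max - f_min)
    nxt = values[min(x + 1, len(values) - 1)]
    pulse = min(1.0, abs(nxt - values[x]) / span)  # 0..1 vibration strength
    return freq, pulse

data = [0, 2, 4, 8, 4, 2, 0]
print(sonify_point(data, 3))   # peak of the graph -> maximum pitch
```

Pairing the pitch (value) with a vibration pulse (slope) gives the user two independent, non-visual channels while scrubbing across the graph.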

    DOKY: A Multi-Modal User Interface for Non-Visual Presentation, Navigation and Manipulation of Structured Documents on Mobile and Wearable Devices

    There are a large number of highly structured documents available on the Internet. The logical document structure is very important for the reader in order to handle the document content efficiently. In graphical user interfaces, each logical structure element is represented by a specific visualisation, a graphical icon. This representation allows visual readers to recognise the structure at a glance. Another advantage is that it enables direct navigation and manipulation. Blind and visually impaired persons are unable to use graphical user interfaces, and a non-visual alternative is also required for the emerging category of mobile and wearable devices, which have only small visual displays or no visual display at all. A multi-modal user interface for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices like smart phones, smart watches or smart tablets has been developed as a result of inductive research among 205 blind and visually impaired participants. It enables the user to get a fast overview of the document structure and to efficiently skim and scan the document content by identifying the type, level, position, length, relationship and content text of each element, as well as to focus, select, activate, move, remove and insert structure elements or text. These interactions are presented in a non-visual way using Earcons, Tactons and synthetic speech utterances, serving the auditory and tactile human senses. Navigation and manipulation are provided through the multitouch, motion (linear acceleration and rotation) or speech recognition input modality. It is a complete solution for reading, creating and editing structured documents in a non-visual way. No special hardware is required. The name DOKY is derived from a short form of the terms document and accessibility.
    A flexible, platform-independent and event-driven software architecture implementing the DOKY user interface is presented, together with the automated structured observation research method employed for the investigation into the effectiveness of the proposed user interface. Because it is platform- and language-neutral, it can be used in a wide variety of platforms, environments and applications for mobile and wearable devices. Each component is defined by interfaces and abstract classes only, so that it can be easily changed or extended, and is grouped in a semantically self-contained package. An investigation into the effectiveness of the proposed DOKY user interface has been carried out to see whether the proposed user interface design concepts and user interaction design concepts are effective means for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices, by automated structured observation of 876 blind and visually impaired research subjects performing 19 exercises on a highly structured example document using the DOKY Structured Observation App on their own mobile or wearable device remotely over the Internet. The results showed that the proposed user interface design concepts for presentation and navigation and the user interaction design concepts for manipulation are effective, and that their effectiveness depends on the input modality and hardware device employed as well as on the use of screen readers.
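A toy sketch of the non-visual presentation idea (an earcon per element type plus a speech string announcing type, level, position and length) might look like this; the element model, earcon identifiers and speech strings are simplified assumptions, not DOKY's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                         # e.g. 'heading', 'paragraph'
    text: str = ""
    children: list = field(default_factory=list)

def describe(el, level=1, pos=1, total=1):
    """Flatten a structured document into (earcon id, speech) pairs that
    announce type, level, position and length of each element."""
    out = [(f"earcon:{el.kind}",
            f"{el.kind} level {level}, item {pos} of {total}, "
            f"{len(el.text)} characters")]
    n = len(el.children)
    for i, child in enumerate(el.children, 1):
        out += describe(child, level + 1, i, n)
    return out

doc = Element("section", children=[Element("heading", "Intro"),
                                   Element("paragraph", "Hello world")])
for earcon, speech in describe(doc):
    print(earcon, "->", speech)
```

Rendering each pair as an Earcon or Tacton followed by synthetic speech would give the fast structural overview the abstract describes.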

    A Support System for Graphics for Visually Impaired People

    As the Internet plays an important role in today's society, graphics are widely used to present, convey and communicate information in many different areas. Complex information is often easier to understand and analyze through graphics. Even though graphics play such an important role, accessibility support for web graphics is very limited. Web graphics accessibility is not only for people with disabilities, but also for people who want to get and use information in ways different from the ones originally intended. One of the problems regarding graphics for blind people is that we have few data on how a blind person draws or how he/she receives graphical information. Based on Katz's pupils' research, one can conclude that blind people can draw in outline and that they have a good sense of three-dimensional shape and space. In this thesis, I propose and develop a system which can serve as a tool for researchers investigating these and related issues. Our support system is built to collect drawings made by visually impaired people through finger movement on Braille devices or touch devices, such as tablets. When the drawing data are collected, the system automatically generates graphical XML data, which are easily accessed by applications and web services. The graphical XML data are stored locally or remotely. Compared to other support systems, ours is the first automatic system to provide web services to collect and access such data. The system also has the capability to integrate with cloud computing, so that people can use it anywhere to collect and access the data.
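The stroke-collection step described above (finger movement captured as points, then serialized as graphical XML) can be sketched as follows; the element names in this XML schema are illustrative, not the system's actual format.

```python
import xml.etree.ElementTree as ET

def strokes_to_xml(strokes):
    """Serialize finger-drawn strokes (lists of (x, y) points) into a
    simple XML document that applications and web services can read."""
    root = ET.Element("drawing")
    for stroke in strokes:
        s = ET.SubElement(root, "stroke")
        for x, y in stroke:
            ET.SubElement(s, "point", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

# Two strokes: a diagonal line and a single touch point.
xml = strokes_to_xml([[(0, 0), (10, 10)], [(5, 0)]])
print(xml)
```

Keeping the drawing as structured XML rather than a bitmap is what makes it easy for other applications and web services to consume.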

    Rotate-and-Press: A Non-Visual Alternative to Point-and-Click

    Most computer applications manifest visually rich and dense graphical user interfaces (GUIs) that are primarily tailored for easy and efficient sighted interaction using a combination of two default input modalities, namely the keyboard and the mouse/touchpad. However, blind screen-reader users rely predominantly on the keyboard alone, and therefore struggle to interact with these applications, since it is both arduous and tedious to perform visual 'point-and-click' tasks such as accessing the various application commands/features using just the keyboard shortcuts supported by screen readers. In this paper, we investigate the suitability of a 'rotate-and-press' input modality as an effective non-visual substitute for the visual mouse for easily interacting with computer applications, with word processing applications serving as the representative case study. In this regard, we designed and developed bTunes, an add-on for Microsoft Word that customizes an off-the-shelf Dial input device so that it serves as a surrogate mouse for blind screen-reader users to quickly access various application commands and features using a set of simple rotate and press gestures supported by the Dial. With bTunes, blind users too can enjoy the benefits of two input modalities, like their sighted counterparts. A user study with 15 blind participants revealed that bTunes significantly reduced both the time and the number of user actions for doing representative tasks in a word processing application, by as much as 65.1% and 36.09% respectively. The participants also stated that they did not face any issues switching between keyboard and Dial, and furthermore gave bTunes a high usability rating (84.66 avg. SUS score).
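The rotate-and-press interaction can be sketched as a ring of commands driven by Dial events; the command names and the API below are hypothetical, not bTunes' actual implementation.

```python
class DialMenu:
    """Toy surrogate mouse: rotation steps move through a ring of
    application commands, a press activates the current one."""
    def __init__(self, commands):
        self.commands = commands
        self.index = 0
        self.activated = []

    def rotate(self, steps):
        # Positive steps = clockwise; the ring wraps around.
        self.index = (self.index + steps) % len(self.commands)
        return self.commands[self.index]  # name a screen reader would speak

    def press(self):
        self.activated.append(self.commands[self.index])
        return self.commands[self.index]

menu = DialMenu(["Bold", "Italic", "Underline", "Font size"])
menu.rotate(2)
print(menu.press())   # activates and prints "Underline"
```

Because the ring wraps and every rotation step is announced, the user never needs to locate a pointer on screen, which is exactly what point-and-click demands.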

    Using Sonic Enhancement to Augment Non-Visual Tabular Navigation

    More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created, and a prototype auditory display was tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), or the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances. Therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically-transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the results show that the logarithmically-transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057. The results show that the logarithmically-transformed TTTs were not significantly affected by the interaction of both methods, F(1,15) = 1.381, p = 0.258.
    These results suggest that some confusion may arise for the subject when both of these methods are employed simultaneously. The significant effect of tonal variation indicates that it actually increases the average TTT; in other words, the presence of preceding tones increases task completion time on average. The marginally significant effect of stereo spatialization decreases the average log(TTT) from 2.405 to 2.264.
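The stereo panning technique evaluated here amounts to mapping each table column to a distinct left-right position. A minimal sketch, assuming a simple linear panning law (the study's actual audio pipeline is not specified):

```python
def column_pan(col, n_cols):
    """Map a table column index to a stereo pan in [-1, 1]
    (-1 = full left, +1 = full right)."""
    if n_cols < 2:
        return 0.0
    return -1.0 + 2.0 * col / (n_cols - 1)

def channel_gains(pan):
    """Simple linear panning law: (left, right) gains for a pan value."""
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

# Six columns, as in the study's five-row, six-column HTML table.
for col in range(6):
    pan = column_pan(col, 6)
    print(col, round(pan, 2), channel_gains(pan))
```

Applying the per-column gains to the synthesized voice gives each cell a stable spatial position, which is the cue the subjects used to orient themselves.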

    Sonification of information security events in auditory display: Text vocalization, navigation, and event flow representation

    This research is dedicated to developing an information security tool with a sound interface. If information security can be managed by ear, the analysis of computer attacks could be carried out effectively by people with vision problems. For this purpose, a human-computer interface is required in which the signs of malicious code and computer attacks are encoded using sounds. This research highlights the features returned by console tools for static analysis of executable files, and proposes an audio coding method for the auditory expression of textual and non-textual features. In order to provide visually impaired people with the opportunity to work in the cybersecurity field, we propose a method of analysing malware by ear. Peer reviewed.
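An audio coding of static-analysis results might, for example, assign each feature its own pitch so that the active features form an audible chord; the feature names and the whole-tone spacing below are invented for illustration and are not the paper's actual encoding.

```python
def encode_features(features, base=220.0):
    """Assign each feature its own pitch (whole-tone steps above a base
    frequency); active features contribute a tone to the chord the
    analyst hears."""
    tones = []
    for i, (name, active) in enumerate(sorted(features.items())):
        if active:
            tones.append((name, round(base * 2 ** (i * 2 / 12), 1)))
    return tones

# Hypothetical static-analysis indicators for an executable file.
sample = {"packed": True, "imports_crypto": False, "writes_registry": True}
print(encode_features(sample))
```

A trained listener could then recognise combinations of suspicious traits by the resulting chord without reading the console output.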

    Non-speech auditory output

    No abstract available