52 research outputs found

    Non-Visual Representation of Complex Documents for Use in Digital Talking Books

    Essential written information such as textbooks, bills and catalogues needs to be accessible to everyone. However, such access is not always available to vision-impaired people, as they require electronic documents to be provided in specific formats. To address the accessibility issues of electronic documents, this research aims to design an affordable, portable, standalone and simple-to-use complete reading system that converts and describes complex components of electronic documents for print-disabled users.

    Non-visual representation of complex documents for use in digital talking books

    According to a World Intellectual Property Organization (WIPO) estimate, only 5% of the world's one million print titles published every year are accessible to the approximately 340 million blind, visually impaired or print-disabled people. Equal access to information is a basic right of all people. Essential information such as flyers, brochures, event calendars, programs, catalogues and booking information needs to be accessible to everyone. Information helps people to make decisions, be involved in society and live independent lives. Article 21, Section 4.2. of the United Nations Convention on the Rights of Persons with Disabilities advocates the right of blind and partially sighted people to take control of their own lives. However, this entitlement is not always available to them without access to information. Today, electronic documents have become pervasive. For vision-impaired people, electronic documents need to be available in specific formats to be accessible. If these formats are not made available, vision-impaired people are greatly disadvantaged compared to the general population. Addressing electronic document accessibility for them is therefore an extremely important concern. To address these accessibility issues, this research aims to design an affordable, portable, stand-alone and simple-to-use "Complete Reading System" that provides accessible electronic documents to vision-impaired people.

    DOKY: A Multi-Modal User Interface for Non-Visual Presentation, Navigation and Manipulation of Structured Documents on Mobile and Wearable Devices

    There are a large number of highly structured documents available on the Internet. The logical document structure is very important for the reader in order to handle the document content efficiently. In graphical user interfaces, each logical structure element is represented by a specific visualisation, a graphical icon. This representation allows sighted readers to recognise the structure at a glance. Another advantage is that it enables direct navigation and manipulation. Blind and visually impaired persons are unable to use graphical user interfaces, and for the emerging category of mobile and wearable devices, where only small visual displays are available or no visual display at all, a non-visual alternative is also required. A multi-modal user interface for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices such as smartphones, smartwatches and tablets has been developed as a result of inductive research among 205 blind and visually impaired participants. It enables the user to get a fast overview of the document structure and to efficiently skim and scan the document content by identifying the type, level, position, length, relationship and content text of each element, as well as to focus, select, activate, move, remove and insert structure elements or text. These interactions are presented in a non-visual way using Earcons, Tactons and synthetic speech utterances, serving the auditory and tactile human senses. Navigation and manipulation are provided using the multitouch, motion (linear acceleration and rotation) or speech recognition input modalities. It is a complete solution for reading, creating and editing structured documents in a non-visual way. No special hardware is required. The name DOKY is derived from a short form of the terms "document" and "accessibility".
A flexible, platform-independent and event-driven software architecture implementing the DOKY user interface has been presented, together with the automated structured observation research method employed to investigate the effectiveness of the proposed user interface. Because the architecture is platform- and language-neutral, it can be used in a wide variety of platforms, environments and applications for mobile and wearable devices. Each component is defined by interfaces and abstract classes only, so that it can be easily changed or extended, and is grouped in a semantically self-contained package. An investigation into the effectiveness of the proposed DOKY user interface was carried out to see whether the proposed user interface and user interaction design concepts are effective means for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices. It consisted of automated structured observations of 876 blind and visually impaired research subjects performing 19 exercises on a highly structured example document using the DOKY Structured Observation App on their own mobile or wearable devices, remotely over the Internet. The results showed that the proposed user interface design concepts for presentation and navigation and the user interaction design concepts for manipulation are effective, and that their effectiveness depends on the input modality and hardware device employed, as well as on the use of screen readers.
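The interface-only, event-driven component style described in the abstract could be sketched roughly as follows. This is an illustration under stated assumptions, not the actual DOKY API: the names (StructureElement, ElementEvent, OutputModality, EventBus) are ours, chosen to show how output modalities defined by interfaces can be swapped or extended without changing the event-publishing core.

```typescript
// Illustrative sketch only: names below are hypothetical, not the DOKY API.
// Each component is defined by an interface, so concrete presenters
// (speech, earcons, tactons) can be exchanged without touching callers.

interface StructureElement {
  type: string;      // e.g. "heading", "list-item"
  level: number;     // nesting depth in the document tree
  text: string;      // content text of the element
}

interface ElementEvent {
  kind: "focus" | "select" | "activate";
  element: StructureElement;
}

// Output modality: anything that can present an event non-visually.
interface OutputModality {
  present(event: ElementEvent): string; // returns a description for testing
}

class SpeechOutput implements OutputModality {
  present(event: ElementEvent): string {
    return `speak: ${event.kind} ${event.element.type} level ${event.element.level}`;
  }
}

// Event-driven core: input modalities publish, output modalities subscribe.
class EventBus {
  private subscribers: OutputModality[] = [];
  subscribe(m: OutputModality): void { this.subscribers.push(m); }
  publish(event: ElementEvent): string[] {
    return this.subscribers.map((m) => m.present(event));
  }
}

const bus = new EventBus();
bus.subscribe(new SpeechOutput());
const out = bus.publish({
  kind: "focus",
  element: { type: "heading", level: 1, text: "Introduction" },
});
```

Because every component is reached only through its interface, a tacton or earcon presenter could subscribe alongside the speech presenter without any change to the bus, which mirrors the easily-changed-or-extended property the abstract claims.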

    Barrier-free communication: methods and products: proceedings of the 1st Swiss conference on barrier-free communication


    A framework for the assembly and delivery of multimodal graphics in E-learning environments

    In recent years, educators and educational institutions have embraced E-Learning environments as a method of delivering content to and communicating with their learners. Particular attention needs to be paid to the accessibility of the content that each educator provides. In relation to graphics, content providers are instructed to provide textual alternatives for each graphic using either the “alt” attribute or the “longdesc” attribute of the HTML IMG tag. This is not always suitable for graphical concepts inherent in technical topics due to the spatial nature of the information. As there is currently no suggested alternative to the use of textual descriptions in E-Learning environments, blind learners are at a significant disadvantage when attempting to learn Science, Technology, Engineering or Mathematics (STEM) subjects online. A new approach is required that will provide blind learners with the same learning capabilities enjoyed by their sighted peers in relation to graphics. Multimodal graphics combine the modalities of sound and touch in order to deliver graphical concepts to blind learners. Although they have proven successful, they can be time-consuming to create and often require expertise in accessible graphic design. This thesis proposes an approach based on mainstream E-Learning techniques that can support non-experts in the assembly of multimodal graphics. The approach is known as the Multimodal Graphic Assembly and Delivery Framework (MGADF). It exploits a component-based Service Oriented Architecture (SOA) to provide non-experts with the ability to assemble multimodal graphics and integrate them into mainstream E-Learning environments. This thesis details the design of the system architecture, information architecture and methodologies of the MGADF. Proof-of-concept interfaces were implemented, based on the design, that clearly demonstrate the feasibility of the approach.
The interfaces were used in an end-user evaluation that assessed the benefits of a component-based approach for non-expert multimodal graphic producers.
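The alt/longdesc convention the abstract refers to can be illustrated with a small helper that emits the markup. This is our sketch, not part of MGADF, and note that "longdesc" is obsolete in the current HTML Living Standard; it is shown only because it is the mechanism the thesis discusses.

```typescript
// Hypothetical helper (not part of MGADF): build an <img> tag carrying a
// short "alt" description and, optionally, the legacy "longdesc" URL that
// points to a fuller textual description of the graphic.
function renderAccessibleImg(src: string, alt: string, longdescUrl?: string): string {
  // Minimal attribute-value escaping, sufficient for this example.
  const esc = (s: string) =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/"/g, "&quot;");
  const longdesc = longdescUrl ? ` longdesc="${esc(longdescUrl)}"` : "";
  return `<img src="${esc(src)}" alt="${esc(alt)}"${longdesc}>`;
}

const tag = renderAccessibleImg(
  "circuit.png",
  "Series circuit with one battery and two resistors",
  "circuit-description.html"
);
```

As the abstract notes, a one-line alt text cannot convey the spatial layout of such a diagram, which is exactly the gap the multimodal (audio plus touch) graphics aim to fill.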

    Taux : a system for evaluating sound feedback in navigational tasks

    This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first establishes the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential to change these solutions, and examining research on relevant audio information representation techniques. An evaluation system that supports the manipulation of two basic classes of audio display is then described. Based on prior work on wayfinding with audio display, research questions are developed that investigate the viability of different audio displays. These are used to generate hypotheses and develop an experiment that evaluates four variations of audio display for wayfinding. Questions are also formulated to evaluate a baseline condition that utilizes visual feedback. An experiment that tests these hypotheses on sighted users is then described. Results from the experiment suggest that spatial audio combined with spoken hints is the best of the spatial audio approaches compared. The results also suggest that muting a varying audio signal when a subject is on course did not improve performance. The system and method are then refined, and a second experiment is conducted with improved displays and an improved experimental methodology. After blindfolding the sighted subjects and increasing the difficulty of the navigation tasks by reducing the arrival radius, similar results were observed. Overall, the two experiments demonstrate the viability of the prototyping tool for testing and refining multiple audio display combinations for navigational tasks. The detailed contributions of this work and future research opportunities conclude this thesis.
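As a rough illustration of the kind of audio display compared in these experiments (our sketch, not the Taux implementation): a bearing to the next waypoint can drive stereo panning for spatial audio, with a spoken hint added only when the subject drifts off course, i.e. the hint is muted while on course. The 15-degree tolerance is an assumed value for the example.

```typescript
// Illustrative sketch, not the actual Taux system: derive a stereo pan
// value and an optional spoken hint from the bearing to the next waypoint.
function audioCue(
  bearingDeg: number,          // target bearing relative to current heading
  onCourseToleranceDeg = 15    // hypothetical "on course" threshold
): { pan: number; hint?: string } {
  // Normalize the bearing into (-180, 180].
  let b = ((bearingDeg % 360) + 360) % 360;
  if (b > 180) b -= 360;
  // Pan fully left/right once the target is 90 degrees off to either side.
  const pan = Math.max(-1, Math.min(1, b / 90));
  // Mute the spoken hint while on course; speak a correction otherwise.
  const hint =
    Math.abs(b) <= onCourseToleranceDeg
      ? undefined
      : b > 0 ? "turn right" : "turn left";
  return { pan, hint };
}
```

A prototyping tool like the one described would expose such parameters (tolerance, pan law, hint wording) so that multiple display variants can be generated and compared in the same experimental harness.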

    Software Usability

    This volume delivers a collection of high-quality contributions to help broaden the minds of developers and non-developers alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas that are accessible to people who might not be software makers but who are undoubtedly software users.

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older adults due to low vision, cognitive impairments or literacy issues. Because of trade-offs between aesthetic appeal and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but the limitations of existing data representations must first be understood. To avoid higher cognitive and visual load, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. In this paper, we outline the challenges in existing data representation and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.
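The idea of auditory and haptic cues complementing a visual reading can be sketched as below. This is purely illustrative: the thresholds and cue vocabulary are made up for the example, not taken from the paper or from any clinical guideline.

```typescript
// Illustrative sketch with made-up thresholds: map a heart-rate reading to
// complementary visual, auditory and haptic cues, so the same information
// reaches the user even when the visual channel is unavailable.
type Cue = {
  visualLabel: string;                  // annotation on a graph or text view
  earcon: "low" | "steady" | "alert";   // short auditory motif to play
  hapticPulses: number;                 // number of vibration pulses to emit
};

function heartRateCue(bpm: number): Cue {
  if (bpm < 50) {
    return { visualLabel: "low heart rate", earcon: "low", hapticPulses: 1 };
  }
  if (bpm <= 100) {
    return { visualLabel: "normal heart rate", earcon: "steady", hapticPulses: 0 };
  }
  return { visualLabel: "elevated heart rate", earcon: "alert", hapticPulses: 3 };
}
```

Keeping the haptic channel silent in the normal range is one way to honor the paper's concern about overload: redundant cues fire only when the reading deviates and attention is actually warranted.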

    Designing a New Tactile Display Technology and its Disability Interactions

    People with visual impairments have a strong desire for a refreshable tactile interface that can provide immediate access to a full page of Braille and tactile graphics. Regrettably, existing devices come at considerable expense and remain out of reach for many. The exorbitant costs associated with current tactile displays stem from their intricate design and the multitude of components needed for their construction. This underscores the pressing need for technological innovation that can enhance tactile displays, making them more accessible and available to individuals with visual impairments. This research thesis delves into the development of a novel tactile display technology known as Tacilia. The technology's necessity and requirements are informed by in-depth qualitative engagements with students who have visual impairments, alongside a systematic analysis of the architectures underpinning existing tactile display technologies. The evolution of Tacilia unfolds through iterative processes of conceptualisation, prototyping and evaluation. With Tacilia, three distinct products and interactive experiences are explored, empowering individuals to manually draw tactile graphics, generate digitally designed media through printing, and display these creations on a dynamic pin-array display. This innovation underscores Tacilia's capability to streamline the creation of refreshable tactile displays, rendering them more fitting, usable and economically viable for people with visual impairments.
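A pin-array display like the one described ultimately reduces to raising and lowering pins cell by cell. As a small self-contained illustration (not Tacilia's actual firmware or driver code), the dot pattern of a Unicode Braille character can be decoded into a pin matrix using the standard bit layout of the U+2800 block.

```typescript
// Illustration only (not Tacilia code): decode a Unicode Braille Patterns
// character (U+2800..U+28FF) into a 4x2 matrix of raised pins. Per the
// Unicode block's bit layout, dots 1-3 and 7 form the left column and
// dots 4-6 and 8 the right column; bit n-1 of (codepoint - 0x2800) is dot n.
function braillePins(ch: string): boolean[][] {
  const cp = ch.codePointAt(0);
  if (cp === undefined || cp < 0x2800 || cp > 0x28ff) {
    throw new Error("not a Braille pattern character");
  }
  const mask = cp - 0x2800;
  const dot = (n: number) => (mask & (1 << (n - 1))) !== 0;
  return [
    [dot(1), dot(4)],
    [dot(2), dot(5)],
    [dot(3), dot(6)],
    [dot(7), dot(8)],
  ];
}
```

A full-page display would tile such cells into a grid, which is precisely where component count, and hence cost, explodes in the existing architectures the thesis analyses.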

    E-Learning

    E-learning enables students to pace their studies according to their needs, making learning accessible to (1) people who do not have enough free time for studying, since they can schedule their lessons around their availability; and (2) those far from a school (geographical issues) or unable to attend classes due to a physical or medical restriction. Cultural, geographical and physical obstacles can therefore be removed, making it possible for students to select their own path and time for the learning course. Students can then choose the objectives they are best suited to fulfil. This book addresses E-learning challenges, opening a way to understand and discuss questions related to long-distance and lifelong learning and E-learning for people with special needs, and presents a case study on the relationship between the quality of interaction and the quality of learning achieved in E-learning experiences.