7 research outputs found

    A method to provide accessibility for visual components to vision impaired

    Non-textual graphical information (line graphs, bar charts, pie charts, etc.) is increasingly pervasive in digital scientific literature and business reports, enabling readers to quickly grasp the nature of the underlying information. These graphical components are commonly used to present data in an easy-to-interpret way. Graphs are frequently used in economics, mathematics and other scientific subjects. In general, however, data visualization techniques are of no use to blind people. Being unable to access graphical information easily is a major obstacle for blind people pursuing scientific studies and careers. This paper suggests a method to extract the implicit information of bar chart, pie chart, line chart and mathematical graph components of an electronic document and present it to vision-impaired users in audio format. The goal is to provide simple, efficient, and widely available presentation schemes for non-textual components that can help vision-impaired users comprehend them without needing any further devices or equipment. A software application has been developed based on this research. The output of the application is a textual summary of the graphic, including the core content of the hypothesized intended message of the graphic designer. The textual summary of the graphic is then conveyed to the user by text-to-speech software. The benefit of this approach is that it automatically provides the user with the message and knowledge that one would gain from viewing the graphic
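The abstract describes the pipeline only at a high level, so here is a minimal illustrative sketch in Python of the final step: turning already-extracted bar-chart data into a textual summary suitable for a text-to-speech engine. The data representation and the "highest/lowest" message heuristic are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch: converting extracted bar-chart data into a spoken-style
# summary. The input format and the summary heuristic are assumptions, not the
# paper's documented method.

def summarize_bar_chart(title, categories, values):
    """Build a short textual summary of a bar chart's core message."""
    if not values or len(categories) != len(values):
        raise ValueError("categories and values must be non-empty and equal length")
    pairs = list(zip(categories, values))
    high = max(pairs, key=lambda p: p[1])  # category with the largest bar
    low = min(pairs, key=lambda p: p[1])   # category with the smallest bar
    return (
        f"Bar chart titled '{title}' compares {len(pairs)} categories. "
        f"{high[0]} has the highest value ({high[1]}); "
        f"{low[0]} has the lowest ({low[1]})."
    )

summary = summarize_bar_chart(
    "Quarterly sales", ["Q1", "Q2", "Q3", "Q4"], [120, 150, 90, 180]
)
print(summary)
```

The resulting string would then be handed to any text-to-speech engine, which is the delivery step the paper describes.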

    Conceptual design model of Assistive Courseware for Low Vision (AC4LV) learners

    This paper describes an ongoing study on the design of a conceptual design model specific to learning content applications for low vision learners. Reviews of the literature indicate that content applications such as courseware specifically designed to cater to the needs of low vision learners are highly scarce. It was found that most existing content applications, including courseware, focus on the needs of normal students, and most of this courseware means too little to low vision learners in terms of information accessibility, navigability, and pleasure aspects. In addition, the use of Assistive Technology (AT) such as magnifying glasses was also problematic for them. Thus, this study aims at creating an alternative content application, particularly courseware, for low vision learners, called Assistive Courseware for Low Vision (AC4LV). Prior to developing AC4LV, a specific design model has to be proposed as guidance for the developer. So, this paper proposes a conceptual design model of AC4LV by utilizing three phases of activities. Future work is to validate the proposed model through expert review and prototyping methods

    Voice and Touch Diagrams (VATagrams) Diagrams for the Visually Impaired

    If a picture is worth a thousand words, would you rather read two pages of text or simply view the image? Most would choose to view the image; however, for the visually impaired this isn't always an option. Diagrams assist people in visualizing relationships between objects. Most often these diagrams act as a source for quickly referencing information about relationships. Diagrams are highly visual, and as such, there are few tools to support diagram creation for visually impaired individuals. To allow the visually impaired to share the same advantages in school and work as sighted colleagues, an accessible diagram tool is needed. A suitable tool for the visually impaired to create diagrams should allow these individuals to: 1. easily define the type of relationship-based diagram to be created, 2. easily create the components of a relationship-based diagram, 3. easily modify the components of a relationship-based diagram, 4. quickly understand the structure of a relationship-based diagram, 5. create a visual representation which can be used by the sighted, and 6. easily access reference points for tracking diagram components. To do this, a series of prototypes of a tool were developed that allow visually impaired users to read, create, modify and share relationship-based diagrams using sound and gestural touches. This was accomplished by creating a series of applications that run on an iPad using an overlay that restricts the areas in which a user can perform gestures. These prototypes were tested for usability using measures of efficiency, effectiveness and satisfaction. The prototypes were tested with visually impaired, blindfolded and sighted participants. The results of the evaluation indicate that the prototypes contain the main building blocks that can be used to complete a fully functioning application for the iPad

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance.
In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting the design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. Overall, this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks

    Designing usable mobile interfaces for spatial data

    2010 - 2011. This dissertation deals mainly with the discipline of Human-Computer Interaction (HCI), with particular attention to the role it plays in the domain of modern mobile devices. Mobile devices today offer crucial support for a plethora of daily activities for nearly everyone. Ranging from checking business mail while traveling, to accessing social networks while in a mall, to carrying out business transactions while out of office, to using all kinds of online public services, mobile devices play the important role of connecting people while physically apart. Modern mobile interfaces are therefore expected to improve the user's interaction experience with the surrounding environment and offer different adaptive views of the real world. The goal of this thesis is to enhance the usability of mobile interfaces for spatial data. Spatial data are data in which the spatial component plays an important role in clarifying the meaning of the data themselves. Nowadays, this kind of data is widespread in mobile applications: spatial data are present in games, map applications, mobile community applications and office automation. In order to enhance the usability of spatial data interfaces, my research investigates two major issues: 1. enhancing the visualization of spatial data on small screens, and 2. enhancing text-input methods. I selected the Design Science Research approach to investigate the above research questions. The idea underlying this approach is "you build an artifact to learn from it"; in other words, researchers clarify what is new in their design. The new knowledge derived from the artifact is presented in the form of interaction design patterns in order to support developers in dealing with issues of mobile interfaces. The thesis is organized as follows. Initially I present the broader context, the research questions and the approaches I used to investigate them. Then the results are split into two main parts.
In the first part I present the visualization technique called Framy. The technique is designed to support users in visualizing geographical data on mobile map applications. I also introduce a multimodal extension of Framy obtained by adding sounds and vibrations. After that I present the process that turned the multimodal interface into a means of allowing visually impaired users to interact with Framy. Some projects involving the design principles of Framy are shown in order to demonstrate the adaptability of the technique to different contexts. The second part concerns text-input methods. In particular I focus on the work done in the area of virtual keyboards for mobile devices. A new kind of virtual keyboard called TaS provides users with an input system that is more efficient and effective than the traditional QWERTY keyboard. Finally, in the last chapter, the knowledge acquired is formalized in the form of interaction design patterns. [edited by author]

    Spatial Auditory Maps for Blind Travellers

    Empirical research shows that blind persons who have the ability and opportunity to access geographic map information tactually benefit in their mobility. Unfortunately, tangible maps are not found in large numbers. Economics is the leading explanation: tangible maps are expensive to build, duplicate and distribute. SAM, short for Spatial Auditory Map, is a prototype created to address the unavailability of tangible maps. SAM presents geographic information to a blind person encoded in sound. A blind person receives maps electronically and accesses them using a small, inexpensive digitizing tablet connected to a PC. The interface provides location-dependent sound as a stylus is manipulated by the user, plus a schematic visual representation for users with residual vision. The assessment of SAM on a group of blind participants suggests that blind users can learn unknown environments as complex as the ones represented by tactile maps, in the same amount of reading time. This research opens new avenues in visualization techniques, promotes alternative communication methods, and proposes a human-computer interaction framework for conveying map information to a blind person
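The abstract does not detail how SAM's location-dependent sound is generated, so the following is a hypothetical Python sketch of one plausible mapping: stylus x position drives stereo pan and y position drives pitch. The function name, coordinate ranges and linear mapping are illustrative assumptions, not SAM's documented behavior.

```python
# Hypothetical sketch of location-dependent sound for a digitizing tablet:
# x controls stereo pan, y controls pitch. Ranges are assumptions chosen
# only to make the idea concrete.

def position_to_sound(x, y, width=1000, height=1000):
    """Map a stylus coordinate to (pan, frequency_hz).

    pan: -1.0 (full left) .. 1.0 (full right), linear in x.
    frequency: 220.0 Hz (bottom edge) .. 880.0 Hz (top edge), linear in y.
    """
    if not (0 <= x <= width and 0 <= y <= height):
        raise ValueError("stylus position outside tablet surface")
    pan = 2.0 * x / width - 1.0
    freq = 220.0 + (880.0 - 220.0) * y / height
    return pan, freq

pan, freq = position_to_sound(500, 1000)  # horizontal centre, top edge
```

The returned pair would feed a synthesizer or audio API; the key point is that continuous stylus movement yields continuously varying sound, which is what lets a user build a spatial mental map.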

    Tabletop tangible maps and diagrams for visually impaired users

    Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are inaccessible to visually impaired people because they cannot be explored using the sense of touch.
The gap between sighted and visually impaired people's access to graphical representations is constantly growing due to the increasing development and availability of online and dynamic representations that not only give sighted people the opportunity to access large amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the intervention of tactile graphics specialists and result in non-interactive tactile representations. However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of interactive and updatable representations. A number of research prototypes have already been developed. Some rely on digital representation only, and although they have the great advantage of being instantly updatable, they provide very limited tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs). 
In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical interactive maps and diagrams that support two-handed exploration, while at the same time being updatable and affordable. To do so, we build on previous research on Tangible User Interfaces (TUIs) and particularly on (actuated) tabletop TUIs, two fields of research that have surprisingly received very little interest concerning visually impaired users. Based on the design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and zooming. From a theoretical perspective, we contribute to research on accessible graphical representations by highlighting how research on maps can feed research on diagrams and vice versa. We also propose a classification and comparison of existing prototypes to deliver a structured overview of current research.