
    Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors

    Get PDF
    Visual 'point-and-click' interaction artifacts such as the mouse and touchpad are tangible input modalities that are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are limited to interacting with computers through a sequentially narrating screen-reader assistive technology coupled to the keyboard. As a consequence, blind users generally require significantly more time and effort to do even simple application tasks (e.g., applying a style to text in a word processor) using only the keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks using a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users too can draw the benefits of simple and efficient access from these modalities. Specifically, with word-processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical 'Feature Menu' that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently used application commands by leveraging a data-driven prediction model that determines which commands the user will most likely access next, given the current 'local' screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse showed that the latter significantly reduced both the task-completion times and user effort (i.e., the number of user actions) for different word-processing activities.
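    The interaction pattern described above — a hierarchical menu traversed with only scroll and click actions, plus a predictor for frequently used commands — can be sketched as follows. This is a minimal illustration under assumed names (`FeatureMenu`, a dict-based menu tree, a frequency counter), not NVMouse's actual implementation; its real prediction model is data-driven and context-sensitive rather than a simple usage count.

```python
from collections import Counter

class FeatureMenu:
    """Hierarchical menu navigable with two actions: scroll and click.
    (Illustrative sketch; names and data layout are assumptions.)"""

    def __init__(self, tree):
        self.tree = tree          # dict: menu name -> list of child names
        self.path = ["root"]      # current location in the hierarchy
        self.index = 0            # highlighted item at this level
        self.history = Counter()  # command usage counts for prediction

    def items(self):
        return self.tree.get(self.path[-1], [])

    def scroll(self, step):
        """Scroll moves the highlight up or down, wrapping around."""
        self.index = (self.index + step) % len(self.items())
        return self.items()[self.index]

    def click(self):
        """Click descends into a submenu or activates a leaf command."""
        name = self.items()[self.index]
        if name in self.tree:                 # submenu: descend
            self.path.append(name)
            self.index = 0
            return f"opened {name}"
        self.history[name] += 1               # leaf: activate command
        return f"activated {name}"

    def predict(self, k=1):
        """Suggest the k most frequently used commands so far."""
        return [cmd for cmd, _ in self.history.most_common(k)]

tree = {"root": ["Format", "Insert"],
        "Format": ["Bold", "Italic", "Styles"]}
menu = FeatureMenu(tree)
menu.scroll(0)    # highlight "Format"
menu.click()      # descend into the Format submenu
menu.scroll(1)    # highlight "Italic"
menu.click()      # activate the Italic command
menu.predict()    # -> ["Italic"]
```

    The point of the design is that the entire command space is reachable with two physical gestures, which is what lets a standard mouse wheel and button be repurposed non-visually.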

    TACTOPI: a playful approach to promote computational thinking to visually impaired children

    Get PDF
    Master's thesis in Informatics, Faculdade de Ciências, Universidade de Lisboa, 2021. The use of playful activities is common in introductory programming settings. Visually, these activities tend to be stimulating enough; however, they are not accessible to visually impaired children. This work presents TACTOPI, a system consisting of a tangible environment that trains navigation skills and enriches the sensorial experience using sound, visual and tactile elements. It supports the learning of introductory computational-thinking concepts, embedded in playful storytelling activities that promote environmental education for children with visual impairments aged 4 to 7. The map is modular and customizable, and has a docking system for placing the elements, allowing fun tactile interaction. Another essential element is the 3D-printed helm, containing a joystick and buttons with which the child controls and pre-programs the instructions to be played by the robot. A study was carried out using a qualitative questionnaire to evaluate the system: suggestions were collected from respondents experienced with blind children about its suitability, relevance and accessibility for these children. The results suggest that, despite some limitations, TACTOPI is an effective tool for introducing computational thinking, offers interactive elements that support activities in other disciplines and contexts, and ensures accessibility while supporting task training for the development of blind children.
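    The helm's pre-programming loop — queue instructions with button presses, then play them back on the robot — can be sketched as below. The grid model, move set, and class name are illustrative assumptions, not TACTOPI's actual code.

```python
# Each helm button maps to a unit move on a grid map (assumed encoding).
MOVES = {"forward": (0, 1), "back": (0, -1), "left": (-1, 0), "right": (1, 0)}

class HelmRobot:
    """Sketch of a pre-programmed robot: presses queue instructions,
    play() executes the whole queued program at once."""

    def __init__(self, x=0, y=0):
        self.pos = (x, y)
        self.program = []       # instructions queued via helm buttons

    def press(self, button):
        """A button press queues an instruction; nothing moves yet."""
        self.program.append(button)

    def play(self):
        """Run the queued program step by step, then clear it."""
        x, y = self.pos
        for step in self.program:
            dx, dy = MOVES[step]
            x, y = x + dx, y + dy
        self.pos = (x, y)
        self.program = []
        return self.pos

bot = HelmRobot()
for button in ["forward", "forward", "right"]:
    bot.press(button)
bot.play()   # robot ends at (1, 2)
```

    Separating "program" from "play" is what makes the activity an exercise in computational thinking: the child must plan the whole sequence before seeing its effect.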

    Video Augmentation in Education: in-context support for learners through prerequisite graphs

    Get PDF
    The field of education has been undergoing a massive digitisation process for the past decade. The role played by distance learning and Video-Based Learning, further reinforced by the pandemic crisis, has become an established reality. However, typical features of video consumption, such as sequential viewing and viewing time proportional to duration, often lead to sub-optimal conditions for using video lessons in the acquisition, retrieval and consolidation of learning contents. Video augmentation can be an effective support for learners, allowing more flexible exploration of contents, a better understanding of concepts and of the relationships between them, and an optimisation of the time required for video consumption at different stages of the learning process. This thesis therefore focuses on methods for: 1) enhancing video capabilities through video augmentation features; 2) extracting concepts and relationships from video materials; and 3) developing intelligent user interfaces based on the extracted knowledge. The main research goal is to understand to what extent video augmentation can improve the learning experience. This goal inspired the design of the EDURELL Framework, within which two applications were developed to test the augmentation methods and deliver them to users. The novelty of this work lies in drawing on the knowledge within the video itself, without relying on external materials, to realise its educational potential. The user interface is enhanced through various support features, in particular a map that progressively highlights the prerequisite relationships between concepts as they are explained, i.e., following the advancement of the video.
    The proposed approach was designed following a user-centered iterative process, and the results in terms of effect and impact on video comprehension and learning experience contribute to research in this field.
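    The progressive prerequisite map described above can be sketched as a simple function: given the timestamp at which each concept is first explained and the prerequisite edges between concepts, return the sub-graph to highlight at playback time t. The concept names and the timestamp/edge encoding are illustrative assumptions, not the EDURELL Framework's API.

```python
def highlighted(explained_at, prerequisites, t):
    """Return the concepts explained by time t (in seconds) and the
    prerequisite edges whose endpoints are both already explained."""
    shown = {c for c, ts in explained_at.items() if ts <= t}
    edges = [(pre, c) for c, pres in prerequisites.items()
             for pre in pres if c in shown and pre in shown]
    return shown, edges

# Assumed toy data: when each concept is first explained in the video,
# and which concepts it presupposes.
explained_at = {"vector": 30, "matrix": 120, "eigenvalue": 300}
prerequisites = {"matrix": ["vector"], "eigenvalue": ["matrix", "vector"]}

highlighted(explained_at, prerequisites, 150)
# -> ({"vector", "matrix"}, [("vector", "matrix")])
```

    Re-evaluating this as the playhead advances is what makes the map follow the lesson: each new concept lights up together with the edges back to its already-explained prerequisites.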

    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Get PDF
    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement to begin to develop design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change a 3D concept shape, form, color, pattern, and texture of a product by the click of a button in minutes, rather than the classic approach of laboring on a physical model in the studio for hours. However, CAD can often limit a designer's experience of being 'hands-on' with materials and processes. The rise of machine haptic (MH) tools has afforded great potential for designers to feel more 'hands-on' with virtual modeling processes. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. By similar reasoning, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy. Therefore the full benefit of haptics to aid non-sighted designers has not yet been fully realised. This thesis empirically investigates the use of multimodal MH as a step closer to improving the virtual hands-on process, for the benefit of NS, VI and fully sighted (FS) Designer-Makers. The thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI Art Makers at Henshaws College for the Blind and Visual Impaired, and examined the effects of haptics on NS, VI users' evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance-learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force from the haptically rendered objects on-screen. Moreover, they were assisted by the whole multimodal MH assistance, which in CS2 appeared to offer better assistance to NS versus FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI, and FS participants, but haptics did not better the time results recorded in manual (M) haptic conditions. However, the collision data between M and MH showed little statistical difference. The thesis showed that multimodal MH systems, specifically used in kinesthetic mode, have enabled humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve the interaction and afford better access to the graphical user interface for a wider body of users.
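    The core mechanism behind "perceiving virtual objects via force" is commonly a penalty-based rendering loop: when the haptic cursor penetrates a virtual surface, the device pushes back with a spring force proportional to penetration depth. The sketch below shows that standard technique for a single sphere; the stiffness value and geometry are illustrative assumptions and the thesis's multimodal system is far richer.

```python
import math

def contact_force(cursor, center, radius, stiffness=300.0):
    """Penalty-based haptic force: if the cursor is inside the sphere,
    return a spring force (fx, fy, fz) pushing it back to the surface."""
    d = [c - o for c, o in zip(cursor, center)]
    dist = math.sqrt(sum(v * v for v in d))
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)              # no contact (or degenerate case)
    n = [v / dist for v in d]               # outward surface normal
    return tuple(stiffness * penetration * v for v in n)

contact_force((0.9, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
# spring force along +x, magnitude ~ 300 * 0.1 = ~30 N
```

    A real device re-runs this computation at around 1 kHz, which is why even a simple spring model feels like a solid surface under the fingertips.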

    Interactive maps for visually impaired people : design, usability and spatial cognition

    Get PDF
    Knowing the geography of an urban environment is crucial for visually impaired people. Tactile relief maps are generally used, but they retain significant limitations (limited amount of information, use of a braille legend, etc.). Recent technological progress allows the development of innovative solutions that overcome these limitations. In this thesis, we present the design of an accessible interactive map through a participatory design process. The map is composed of a multi-touch screen with a tactile map overlay and speech output, and provides auditory information when the user taps on map elements. We demonstrated in an experiment that our prototype was more effective and satisfying for visually impaired users than a simple raised-line map. We also explored and tested different types of advanced non-visual interaction for exploring the map.
    This thesis demonstrates the importance of interactive tactile maps for visually impaired people and their spatial cognition.
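    The tap-to-speech behaviour described above reduces to a hit test: map the touch coordinates to the element under the finger and return its label for the speech output. The element data and rectangular hit zones below are illustrative assumptions; the actual prototype used a raised-line overlay registered to the touch screen.

```python
# Assumed toy map data: each element has a label and a rectangular hit zone.
elements = [
    {"label": "town hall", "rect": (10, 10, 60, 40)},   # (x1, y1, x2, y2)
    {"label": "bus stop",  "rect": (70, 20, 90, 35)},
]

def on_double_tap(x, y):
    """Return the label to speak for the element under the finger, if any."""
    for el in elements:
        x1, y1, x2, y2 = el["rect"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return el["label"]
    return None    # tap on an empty map area: stay silent

on_double_tap(75, 30)   # -> "bus stop"
```

    Because the raised-line overlay sits directly on the touch surface, the same physical exploration gesture that reads the relief also triggers the audio, replacing the braille legend entirely.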

    Proceedings of the 8th international conference on disability, virtual reality and associated technologies (ICDVRAT 2010)

    Get PDF
    The proceedings of the conference.

    The design and evaluation of non-visual information systems for blind users

    Get PDF
    This research was motivated by the sudden increase of hypermedia information (such as that found on CD-ROMs and on the World Wide Web), which was not initially accessible to blind people, although offered significant advantages over traditional braille and audiotape information. Existing non-visual information systems for blind people had very different designs and functionality, but none of them provided what was required according to user requirements studies: an easy-to-use non-visual interface to hypermedia material with a range of input devices for blind students. Furthermore, there was no single suitable design and evaluation methodology which could be used for the development of non-visual information systems. The aims of this research were therefore: (1) to develop a generic, iterative design and evaluation methodology consisting of a number of techniques suitable for formative evaluation of non-visual interfaces; (2) to explore non-visual interaction possibilities for a multimodal hypermedia browser for blind students based on user requirements; and (3) to apply the evaluation methodology to non-visual information systems at different stages of their development. The methodology developed and recommended consists of a range of complementary design and evaluation techniques, and successfully allowed the systematic development of prototype non-visual interfaces for blind users by identifying usability problems and developing solutions. Three prototype interfaces are described: the design and evaluation of two versions of a hypermedia browser; and an evaluation of a digital talking book. Recommendations made from the evaluations for an effective non-visual interface include the provision of a consistent multimodal interface, non-speech sounds for information and feedback, a range of simple and consistent commands for reading, navigation, orientation and output control, and support features. 
    This research will inform developers of similar systems for blind users. In addition, the methodology and design ideas are considered sufficiently generic, yet sufficiently detailed, that the findings could be applied successfully to the development of non-visual interfaces of any type.

    Proceedings of the 2nd European conference on disability, virtual reality and associated technologies (ECDVRAT 1998)

    Get PDF
    The proceedings of the conference.

    Visual Impairment and Blindness

    Get PDF
    Blindness and vision impairment affect at least 2.2 billion people worldwide, with most individuals having a preventable vision impairment. The majority of people with vision impairment are older than 50 years; however, vision loss can affect people of all ages. Reduced eyesight can have major and long-lasting effects on all aspects of life, including daily personal activities, interacting with the community, school and work opportunities, and the ability to access public services. This book provides an overview of the effects of blindness and visual impairment in the context of the most common causes of blindness in older adults as well as children, including retinal disorders, cataracts, glaucoma, and macular or corneal degeneration.

    Human-Computer Interaction

    Get PDF
    In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction, the result of research projects and experiments as well as new approaches to designing user interfaces. The book is organized according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies on several interaction mechanisms, human factors, universal design, and development methodologies and tools.
