    "Hey Model!" -- Natural User Interactions and Agency in Accessible Interactive 3D Models

    While developments in 3D printing have opened up opportunities for improved access to graphical information for people who are blind or have low vision (BLV), printed models alone provide only limited detail and contextual information. Interactive 3D printed models (I3Ms) that provide audio labels and/or a conversational agent interface potentially overcome this limitation. We conducted a Wizard-of-Oz exploratory study to uncover the multi-modal interaction techniques that BLV people would like to use when exploring I3Ms, and investigated their attitudes towards different levels of model agency. These findings informed the creation of an I3M prototype of the solar system. A second user study with this model revealed a hierarchy of interaction, with BLV users preferring tactile exploration, followed by touch gestures to trigger audio labels, and then natural language to fill in knowledge gaps and confirm understanding. Comment: Paper presented at ACM CHI 2020: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM, New York, April 2020; Replacement: typos corrected.
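
    To make the reported interaction hierarchy concrete, the sketch below maps it onto a minimal event dispatcher: a tap on a labelled part plays its audio label, and free-form speech falls through to a conversational agent. This is an illustrative assumption in Python, not the authors' prototype code; the names InteractiveModel, EchoAgent and speak are invented for this example.

        # Illustrative sketch (not the paper's prototype): touch gestures trigger
        # audio labels, and natural-language questions fall through to an agent.

        def speak(text: str) -> None:
            # Placeholder for a text-to-speech call; any TTS engine could sit here.
            print(f"[TTS] {text}")

        class EchoAgent:
            # Stand-in for a conversational back end (assumed for this sketch).
            def answer(self, part: str, question: str) -> str:
                return f"Sorry, I have no details about '{question}' for {part}."

        class InteractiveModel:
            def __init__(self, audio_labels: dict, agent) -> None:
                self.audio_labels = audio_labels  # model part -> short audio label
                self.agent = agent

            def on_touch(self, part: str) -> None:
                # A touch gesture on a labelled part plays its audio label.
                label = self.audio_labels.get(part)
                if label:
                    speak(label)

            def on_question(self, part: str, question: str) -> None:
                # Natural language fills knowledge gaps the labels do not cover.
                speak(self.agent.answer(part, question))

        model = InteractiveModel({"mars": "Mars, the fourth planet from the Sun."}, EchoAgent())
        model.on_touch("mars")
        model.on_question("mars", "How long is a day on Mars?")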

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line-detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence in support of learning from vision and touch as leading to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
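
    The perceptual guidelines above lend themselves to a simple rendering check. The sketch below encodes the reported minimums as constants and flags lines that violate them; the constant names and the check_line helper are assumptions made for illustration and do not come from the dissertation.

        # Illustrative encoding of the vibrotactile design guidelines summarized above.
        # Constant names and the validation helper are assumptions for this sketch.

        MIN_DETECTION_WIDTH_MM = 1.0   # Exp-1: accurate line detection
        MIN_INTERLINE_GAP_MM = 4.0     # Exp-2: discriminating parallel lines
        MIN_TRACING_WIDTH_MM = 4.0     # Exp-4/5: line tracing and path learning

        def check_line(width_mm: float, gap_to_neighbor_mm: float, traced: bool) -> list:
            """Return guideline violations for one vibrotactile line."""
            issues = []
            if width_mm < MIN_DETECTION_WIDTH_MM:
                issues.append(f"width {width_mm} mm is below the {MIN_DETECTION_WIDTH_MM} mm detection minimum")
            if traced and width_mm < MIN_TRACING_WIDTH_MM:
                issues.append(f"width {width_mm} mm is below the {MIN_TRACING_WIDTH_MM} mm tracing minimum")
            if gap_to_neighbor_mm < MIN_INTERLINE_GAP_MM:
                issues.append(f"gap {gap_to_neighbor_mm} mm is below the {MIN_INTERLINE_GAP_MM} mm interline minimum")
            return issues

        # A 1 mm line is detectable but too thin to trace, and sits too close to its neighbour.
        print(check_line(width_mm=1.0, gap_to_neighbor_mm=3.0, traced=True))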

    Computer Entertainment Technologies for the Visually Impaired: An Overview

    In recent years, work on accessible technologies has increased in both quantity and quality. This work presents a series of articles that explore different trends in the field of accessible video games for people who are blind or visually impaired. The reviewed articles are grouped into four categories covering the following subjects: (1) video game design and architecture, (2) video game adaptations, (3) accessible games as learning tools or treatments, and (4) navigation and interaction in virtual environments. Current trends in accessible game design are also analysed, and data are presented on keyword use and thematic evolution over time. The review concludes that the field of human-computer interaction for the blind has relatively stagnated. However, as the video game industry becomes increasingly interested in accessibility, new research opportunities are starting to appear.

    Future bathroom: A study of user-centred design principles affecting usability, safety and satisfaction in bathrooms for people living with disabilities

    Research and development work relating to assistive technology 2010-11 (Department of Health), presented to Parliament pursuant to Section 22 of the Chronically Sick and Disabled Persons Act 1970.

    CLUE: A Usability Evaluation Checklist for Multimodal Video Game Field Studies with Children Who Are Blind

    Multimodal video games can enhance the cognitive skills of children who are blind by allowing interaction with scenarios that would be infeasible in their everyday lives. To assist the identification of relevant interface and interaction issues when children who are blind are playing multimodal video games, we propose a Checklist for Usability Evaluation of Multimodal Games for Children who are Blind (CLUE). CLUE was designed to assist researchers and practitioners in usability evaluation field studies, addressing multiple aspects of gameplay and multimodality, including audio, graphics, and haptics. Overall, initial evidence indicates that using CLUE during user observation helps to raise a greater number of relevant usability issues than other methods, such as interviews and questionnaires. CLUE also makes the analysis of recorded user interactions a less time- and effort-consuming process by guiding the identification of interaction patterns and usability issues.
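
    As a rough illustration of how checklist-guided observation might be recorded during a field session, the toy structure below groups evaluator notes under a checklist item. The field names and the example item are invented for this sketch; they are not the actual CLUE items.

        # Toy sketch of checklist-guided usability logging during a field study.
        # The fields and the sample item are illustrative, not the real CLUE items.

        from dataclasses import dataclass, field

        @dataclass
        class ChecklistItem:
            modality: str                 # e.g. "audio", "graphics", "haptics"
            question: str                 # what the evaluator watches for
            observations: list = field(default_factory=list)

            def log(self, note: str) -> None:
                self.observations.append(note)

        item = ChecklistItem(
            modality="audio",
            question="Can the child tell game events apart by their sound cues?",
        )
        item.log("Confused the 'door' and 'enemy' cues during level 2.")
        print(item)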

    Emerging issues and current trends in assistive technology use 2007-2010: practising, assisting and enabling learning for all

    Following an earlier review in 2007, a further review of the academic literature relating to the uses of assistive technology (AT) by children and young people was completed, covering the period 2007-2011. As in the earlier review, a tripartite taxonomy (technology uses to train or practise, technology uses to assist learning, and technology uses to enable learning) was used to structure the findings. The key markers for research in this field during these three years were user involvement, AT on mainstream mobile devices, the visibility of AT, technology for interaction and collaboration, new and developing interfaces, and inclusive design principles. The paper concludes by locating these developments within the broader framework of the Digital Divide.

    Human-powered smartphone assistance for blind people

    Mobile devices are fundamental tools for inclusion and independence. Yet, there are still many open research issues in smartphone accessibility for blind people (Grussenmeyer and Folmer 2017). Currently, learning how to use a smartphone is non-trivial, especially given that the need to learn new apps and adapt to updates never ceases. When first transitioning from a basic feature phone, people have to adapt to new paradigms of interaction. Whereas feature phones had a finite set of applications and functions, smartphone users can extend the possible functions and uses of their device by installing new third-party applications. Moreover, the interconnectivity of these applications means that users can explore a seemingly endless set of workflows across applications. In addition, the fragmented nature of development on these devices results in users needing to build a different mental model for each application. These characteristics make smartphone adoption a demanding task, as we found in our eight-week longitudinal study of smartphone adoption by blind people. We conducted multiple studies to characterize the smartphone challenges that blind people face, and found that people often require synchronous, co-located assistance from family, peers, friends, and even strangers to overcome the different barriers they face. However, help is not always available, especially when we consider the variability in barriers, individual support networks, and current location. In this dissertation we investigated if and how in-context human-powered solutions can be leveraged to improve current smartphone accessibility and ease of use. Building on a comprehensive knowledge of the smartphone challenges faced and the coping mechanisms employed by blind people, we explored how human-powered assistive technologies can facilitate use. The thesis of this dissertation is: human-powered smartphone assistance by non-experts is effective and impacts perceptions of self-efficacy.