
    Interdependence as a Frame for Assistive Technology Research and Design

    In this paper, we describe interdependence for assistive technology design, a frame developed to complement the traditional focus on independence in the Assistive Technology field. Interdependence emphasizes collaborative access and the important, often understated contributions of people with disabilities to these efforts. We lay the foundation of this frame with literature from the academic discipline of Disability Studies and popular media contributed by contemporary disability justice activists. Then, drawing on cases from our own work, we show how the interdependence frame (1) synthesizes findings from a growing body of research in the Assistive Technology field and (2) helps us orient to additional technology design opportunities. We position interdependence as one possible orientation to, not a prescription for, research and design practice--one that opens new design possibilities and affirms our commitment to equal access for people with disabilities.

    Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective

    Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded. We explore this tension from both perspectives, those of sighted passersby and of blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that the perspectives of both users and bystanders, along with the several factors mentioned above, need to be carefully considered to mitigate potential social tensions. Comment: The 2020 ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).

    The Emerging Professional Practice of Remote Sighted Assistance for People with Visual Impairments

    People with visual impairments (PVI) must interact with a world they cannot see. Remote sighted assistance (RSA) has emerged as a conversational assistive technology. We interviewed RSA assistants ("agents") who provide assistance to PVI via a conversational prosthetic called Aira (https://aira.io/) to understand their professional practice. We identified four types of support provided: scene description, navigation, task performance, and social engagement. We discovered that RSA provides an opportunity for PVI to appropriate the system as a richer conversational/social support tool. We studied and identified patterns in how agents provide assistance and how they interact with PVI, as well as the challenges and strategies associated with each context. We found that conversational interaction is highly context-dependent. We also discuss implications for design.

    OMap: An assistive solution for identifying and localizing objects in a semi-structured environment

    A system capable of detecting and localizing objects of interest in a semi-structured environment would enhance the quality of life of people who are blind or visually impaired. Toward building such a system, this thesis presents a personalized real-time system called O'Map that finds misplaced or moved personal items and localizes them with respect to known landmarks. First, we adopted a participatory design approach to identify users' needs and the functionalities of the system. Second, we used concepts from systems thinking and design thinking to develop a real-time object recognition engine optimized to run on small form factor devices. The object recognition engine finds robust correspondences between the query image and item templates using a K-D tree of invariant feature descriptors with a two-nearest-neighbor search and ratio test. Quantitative evaluation demonstrates that O'Map identifies objects of interest with an average F-measure of 0.9650.
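    The matching step this abstract describes (find each query descriptor's two nearest template descriptors, then keep the match only if the nearest is clearly closer than the runner-up) can be sketched as below. This is an illustrative reconstruction, not code from the thesis: a brute-force search stands in for the K-D tree, descriptors are plain float vectors rather than real invariant features, and the 0.75 ratio threshold is an assumed value in the spirit of Lowe's ratio test.

    ```python
    import math

    def two_nearest(desc, templates):
        """Return the distances to the two nearest template descriptors.

        Brute-force stand-in for the K-D tree query used in the thesis;
        a K-D tree would accelerate this search for large template sets.
        """
        dists = sorted(math.dist(desc, t) for t in templates)
        return dists[0], dists[1]

    def ratio_test_matches(query_descs, template_descs, ratio=0.75):
        """Indices of query descriptors that pass the ratio test.

        A descriptor is accepted only if its nearest template neighbor is
        significantly closer than the second nearest, which rejects
        ambiguous correspondences. The 0.75 threshold is illustrative.
        """
        matches = []
        for i, d in enumerate(query_descs):
            d1, d2 = two_nearest(d, template_descs)
            if d1 < ratio * d2:
                matches.append(i)
        return matches

    # Toy 2-D "descriptors": the middle query is equidistant from two
    # templates, so the ratio test discards it as ambiguous.
    templates = [(0.0, 0.0), (3.0, 0.0), (4.9, 5.1)]
    queries = [(0.0, 0.1), (1.5, 0.0), (5.0, 5.0)]
    print(ratio_test_matches(queries, templates))  # [0, 2]
    ```

    In practice the descriptors would be high-dimensional invariant feature vectors (e.g., from a query image and stored item templates), and the 2-NN lookup would go through a K-D tree as the abstract states; the accept/reject logic is unchanged.
    
    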