5,083 research outputs found

    Multimedia Interfaces for BSL Using Lip Readers

    Augmented Reality Talking Heads as a Support for Speech Perception and Production

    Working Effectively with Persons Who Are Hard of Hearing, Late-Deafened, or Deaf

    This brochure on persons who are hard of hearing, late-deafened, or deaf and the Americans with Disabilities Act (ADA) is one of a series on human resources practices and workplace accommodations for persons with disabilities, edited by Susanne M. Bruyère, Ph.D., CRC, SPHR, Director, Program on Employment and Disability, School of Industrial and Labor Relations – Extension Division, Cornell University. Cornell University was funded in the early 1990s by the U.S. Department of Education National Institute on Disability and Rehabilitation Research as a National Materials Development Project on the employment provisions (Title I) of the ADA (Grant #H133D10155). These updates, and the development of new brochures, have been funded by Cornell's Program on Employment and Disability, the Pacific Disability and Business Technical Assistance Center, and other supporters.

    AudioViewer: Learning to Visualize Sounds

    A long-standing goal in the field of sensory substitution is to enable sound perception for deaf and hard of hearing (DHH) people by visualizing audio content. Unlike existing models that translate speech to sign language, between speech and text, or between text and images, we target immediate, low-level audio-to-video translation that applies to generic environmental sounds as well as human speech. Because such a substitution is artificial and no labels exist for supervised learning, our core contribution is a mapping from audio to video that learns from unpaired examples via high-level constraints. For speech, we additionally disentangle content from style, such as gender and dialect. Qualitative and quantitative results, including a human study, demonstrate that our unpaired translation approach maintains important audio features in the generated video, and that videos of faces and numbers are well suited for visualizing high-dimensional audio features that humans can parse to match and distinguish sounds and words. Code and models are available at https://chunjinsong.github.io/audioviewer.
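    A minimal sketch of the idea this abstract describes follows. It is an assumption-laden illustration, not the authors' architecture: a generator maps an audio feature vector plus a style code to a video frame, and a reverse "reader" network supplies the high-level constraint that audio content must remain recoverable from the generated frames, which is what allows learning from unpaired examples. All module names, layer sizes, and the loss below are hypothetical stand-ins (PyTorch).

        import torch
        import torch.nn as nn

        # Illustrative dimensions; not taken from the paper.
        AUDIO_DIM, STYLE_DIM, FRAME_PIXELS = 128, 8, 64 * 64

        class AudioToVideo(nn.Module):
            """Maps an audio feature vector plus a style code to one video frame."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(AUDIO_DIM + STYLE_DIM, 512), nn.ReLU(),
                    nn.Linear(512, FRAME_PIXELS), nn.Sigmoid(),  # pixel intensities in [0, 1]
                )
            def forward(self, audio_feat, style):
                return self.net(torch.cat([audio_feat, style], dim=-1))

        class VideoToAudio(nn.Module):
            """Reads audio content back out of a frame; enforces the cycle constraint."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(FRAME_PIXELS, 512), nn.ReLU(),
                    nn.Linear(512, AUDIO_DIM),
                )
            def forward(self, frame):
                return self.net(frame)

        gen, reader = AudioToVideo(), VideoToAudio()
        opt = torch.optim.Adam(list(gen.parameters()) + list(reader.parameters()), lr=1e-4)

        # One illustrative training step on random stand-ins for real features.
        audio = torch.randn(16, AUDIO_DIM)   # unpaired audio features
        style = torch.randn(16, STYLE_DIM)   # style code (e.g. speaker traits)
        frame = gen(audio, style)
        # Content must survive the round trip audio -> frame -> audio.
        cycle_loss = nn.functional.mse_loss(reader(frame), audio)
        opt.zero_grad()
        cycle_loss.backward()
        opt.step()

    Conditioning the generator on a separate style code mirrors the content/style disentanglement the abstract mentions: the reader reconstructs only content, so style can vary without being penalized.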

    Development and Implementation of the C-Print Speech-to-Text Support Service

    In this chapter we provide an overview of the growth of C-Print from an idea to a system that hundreds of deaf and hard-of-hearing students depend on every day for communication access and learning. The chapter addresses the following questions regarding the development and implementation of C-Print: Why is there a need for the system? How does C-Print work? What have been the phases in creating the current system? What is the research evidence regarding its effectiveness and limitations? How might the system change in the future as new technologies emerge?
    • …