Working Effectively with Persons Who Are Hard of Hearing, Late-Deafened, or Deaf
This brochure on persons who are hard of hearing, late-deafened, or deaf and the Americans with Disabilities Act (ADA) is one of a series on human resources practices and workplace accommodations for persons with disabilities edited by Susanne M. Bruyère, Ph.D., CRC, SPHR, Director, Program on Employment and Disability, School of Industrial and Labor Relations – Extension Division, Cornell University. Cornell University was funded in the early 1990s by the U.S. Department of Education National Institute on Disability and Rehabilitation Research as a National Materials Development Project on the employment provisions (Title I) of the ADA (Grant #H133D10155). These updates, and the development of new brochures, have been funded by Cornell's Program on Employment and Disability, the Pacific Disability and Business Technical Assistance Center, and other supporters.
Therapy for Auditory Processing Impairment in Aphasia: An evaluation of two approaches
Purpose: This study evaluated two forms of discrimination therapy for auditory processing impairment in aphasia. It aimed to determine whether therapy can improve speech perception and/or help participants use semantic information to compensate for their impairment. Changes in listening were also explored by recording the level of facilitation needed during therapy tasks. Finally, the study examined the effect of therapy on an everyday listening activity: a telephone message task.
Method: The study employed a repeated measures design. Eight participants received 12 sessions each of phonological and semantic-phonological therapy. Both programmes used minimal pair judgement tasks, but the latter embedded such tasks within a meaningful context and so encouraged the strategic use of semantic information (semantic bootstrapping). Experimental measures of auditory discrimination and comprehension were administered twice before therapy, once after each programme, and again six weeks later. The telephone message task was also administered at each time point. Test data were subjected to both group and individual analyses. Records of progress during therapy (i.e., changes in the support needed to carry out therapy tasks) were completed during treatment and analysed across the group.
Results: Group analyses showed no significant changes in tests of word and nonword discrimination as a result of therapy. One comprehension task improved following therapy, but two did not. There was also no indication that therapy improved the discrimination of treated words, as assessed by a priming task. The facilitation scores indicated that participants needed less support during tasks as therapy progressed, possibly as a result of improved listening. There was a significant effect of time on the telephone message task. Across all tasks there were few individual gains.
Conclusions: The results offer little evidence that therapy improved participants' discrimination or semantic bootstrapping skills. Some changes in listening may have occurred, as indicated by the facilitation scores. Reasons for the null findings are discussed.
AudioViewer: Learning to Visualize Sounds
A long-standing goal in the field of sensory substitution is to enable sound perception for deaf and hard of hearing (DHH) people by visualizing audio content. Different from existing models that translate to hand sign language, between speech and text, or text and images, we target immediate and low-level audio to video translation that applies to generic environment sounds as well as human speech. Since such a substitution is artificial, without labels for supervised learning, our core contribution is to build a mapping from audio to video that learns from unpaired examples via high-level constraints. For speech, we additionally disentangle content from style, such as gender and dialect. Qualitative and quantitative results, including a human study, demonstrate that our unpaired translation approach maintains important audio features in the generated video and that videos of faces and numbers are well suited for visualizing high-dimensional audio features that can be parsed by humans to match and distinguish between sounds and words. Code and models are available at https://chunjinsong.github.io/audioviewe
A study of applications of microcomputer technology in special education in western Massachusetts schools.
The purpose of this study is to survey microcomputer applications in special education in Western Massachusetts schools and, in particular, to assess the extent to which special education is moving beyond drill-and-practice software with special needs students. Data were collected from 185 special education teachers by a questionnaire and from follow-up interviews with eleven special education teachers in Western Massachusetts. Results showed that computers and software are generally integrated in special education teachers' curricula. They used the microcomputer as a compensatory tool to sharpen students' mathematics skills, language arts, and reading comprehension. Some special education teachers also used computers for language assessment, speech training, eye-hand coordination, and communication. Apple computers were the most popular brand used in this study. Adaptive devices such as firmware cards, switches, and speech synthesizers were used to help special needs students access computers. Computer-assisted instruction, word processing, and games were the most popular software used. Students worked on computers generally alone, in a small group, or in combination; the amount of supervision required depended upon students' functioning level and physical limitations. Most special education teachers did not teach a computer language; only a few teachers explored Logo or BASIC with their students. Special education teachers realized that the computer is a good tool to motivate students and to increase self-esteem and attention; they received some inservice training on computer uses, but complained that it was not enough to help their students. Factors making it difficult for special education teachers to use computers were: lack of appropriate software, teachers being behind the trend, not enough class time to use computers, and perceptions of computers as dehumanizing.
The study concludes with recommendations for increasing special education teachers' computer training via input from hardware and software experts, and for requiring special education teachers to take introductory computer courses covering topics such as Logo, BASIC programming, authoring language systems, and software evaluation. It also recommends that school administrations give financial and technical support for such training in order to use microcomputers and related devices more effectively.
Development and Implementation of the C-Print Speech-to-Text Support Service
In this chapter we provide an overview of the growth of this system from an idea to a system that hundreds of deaf and hard of hearing students depend on every day for communication access and learning. This chapter addresses the following questions regarding the development and implementation of C-Print: Why is there a need for the system? How does C-Print work? What have been the phases in creating the current system? What is the research evidence regarding its effectiveness and limitations? How might the system change in the future as new technologies emerge?