
Audiovisual Integration Under Different Conditions of Hearing Loss

Abstract

In any listening environment, normal or compromised, humans integrate the available auditory and visual cues when comprehending speech. One unresolved question is how different forms of hearing loss differentially impact this integration process. The present study investigated how degradation of the auditory signal due to two types of hearing loss inhibited a listener’s ability to integrate. Ten adult listeners, with normal or corrected-to-normal vision and auditory thresholds at or better than 25 dB HL across all frequencies, were presented with everyday sentences produced by four different talkers from the HeLPs software by Sensimetrics, Inc. Each sentence was presented in audio-only, visual-only, and audio+visual modalities. The auditory input simulated either a sloping hearing loss (55 dB HL at 1000 Hz) or the stimulus delivered by an 8-channel cochlear implant. Results suggest that sentences presented in the cochlear-implant condition were more intelligible, while sentences in the sloping-loss condition showed the greatest audiovisual integration. These findings raise a question about the fidelity of the cochlear implant simulation in the software, given that such a result is unlikely in real-world situations. Results of the present study may have implications for the development of future speech-reading and aural rehabilitation programs.

Funding: The Ohio State University Undergraduate College of Social and Behavioral Sciences Research Grant
Embargo: No embargo
Academic Major: Speech and Hearing Science
