Perceptual abilities predict individual differences in audiovisual benefit for phonemes, words and sentences

Abstract

Individuals differ substantially in the benefit they can obtain from visual cues during speech perception. Here, 113 normally hearing participants aged 18 to 60 completed a three-part experiment investigating the reliability and predictors of individual audiovisual benefit for acoustically degraded speech. Audiovisual benefit was calculated as the relative intelligibility (at the individual level) of approximately matched (at the group level) auditory-only and audiovisual speech for materials at three levels of linguistic structure: meaningful sentences, monosyllabic words, and consonants in minimal syllables. This measure of audiovisual benefit was stable across sessions and materials, suggesting that a shared mechanism of audiovisual integration operates across levels of linguistic structure. Information transmission analyses suggested that this mechanism may be related to simple phonetic cue extraction: sentence-level audiovisual benefit was reliably predicted by the relative ability to discriminate place of articulation at the consonant level. Finally, while unimodal speech perception was related to cognitive measures (matrix reasoning, vocabulary) and demographics (age, gender), audiovisual benefit was predicted uniquely by unimodal perceptual abilities: better lipreading ability and subclinically poorer hearing (speech reception thresholds) independently predicted enhanced audiovisual benefit. This work has implications for best practices in quantifying audiovisual benefit and for research identifying strategies to enhance multimodal communication in hearing loss.
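
As a rough illustration of the benefit measure described above (the abstract does not state the exact formula, so the notation and formulation here are assumptions): if $p^{\mathrm{AV}}_i$ and $p^{\mathrm{A}}_i$ denote participant $i$'s proportion-correct intelligibility for the audiovisual and auditory-only materials (matched for difficulty at the group level), one plausible individual-level benefit score is

$$\mathrm{AV\ benefit}_i = p^{\mathrm{AV}}_i - p^{\mathrm{A}}_i,$$

although "relative intelligibility" could equally refer to a normalised variant such as $(p^{\mathrm{AV}}_i - p^{\mathrm{A}}_i)/(1 - p^{\mathrm{A}}_i)$, which corrects for the reduced room for improvement when auditory-only performance is already high.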
