
    Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss

    OBJECTIVES: Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. The objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development.

    METHODS: Participants were 58 children with early-onset sensorineural hearing loss (CHL; 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH; 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets): for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or a difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa), predicting that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same (as opposed to different) responses in the audiovisual than in the auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az), predicting that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz (as opposed to az) responses in the audiovisual than in the auditory mode.

    RESULTS: Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact-onset responses for nonword repetition (Baz for /–B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d′ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of hearing loss worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in the CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled.

    CONCLUSIONS: These results clearly established that visual speech can fill in non-intact auditory speech; this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL.
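
    The d′ analysis mentioned above is the standard bias-free sensitivity index from signal detection theory, d′ = z(hit rate) − z(false-alarm rate), which separates true discriminability from any bias toward answering same or different. Below is a minimal sketch of that computation, assuming a conventional log-linear correction for extreme rates; the function name, counts, and correction choice are illustrative rather than taken from the paper.

    ```python
    # Minimal sketch of the bias-free discrimination index d':
    #   d' = z(hit rate) - z(false-alarm rate)
    # Here a "hit" is a "different" response to a truly different pair
    # (e.g., Baa:/-B/aa) and a "false alarm" is a "different" response to a
    # same pair. Counts and the correction are illustrative, not the paper's.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction (+0.5 per cell) keeps z finite at rates of 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        # norm.ppf is the inverse normal CDF, i.e., the z-transform.
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Example: 18/20 "different" responses to different pairs, 4/20 to same pairs.
    print(round(d_prime(18, 2, 4, 16), 2))  # ~1.97
    ```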

    Children perceive speech onsets by ear and eye

    Adults use vision to perceive low-fidelity speech, yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not a lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulate the consonant/rhyme b/ag; hear the non-intact onset/rhyme −b/ag) vs. auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years of age, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. These findings are critical for incorporating visual speech into developmental theories of speech perception.

    Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status

    OBJECTIVES: Our research determined 1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] vs. auditory), fidelity (intact vs. non-intact auditory onsets), and lexical status (words vs. nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) vs. children with normal hearing (CNH); and 2) how the degree of hearing impairment (HI), auditory word recognition, and age influenced results in the CHI. Note that some of our AV stimuli were not the traditional bimodal input; instead they consisted of an intact consonant/rhyme in the visual track coupled to a non-intact onset/rhyme in the auditory track. Example stimuli for the word bag are: 1) AV: intact visual (b/ag) coupled to non-intact auditory (−b/ag) and 2) auditory: a static face coupled to the same non-intact auditory (−b/ag). Our question was whether the intact visual speech would “restore or fill in” the non-intact auditory speech, in which case performance for the same auditory stimulus would differ depending upon the presence or absence of visual speech.

    DESIGN: Participants were 62 CHI and 62 CNH whose ages had a group mean and distribution akin to those of the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: 1) spoke English as a native language, 2) communicated successfully aurally/orally, and 3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture-word task.

    RESULTS: Both the CHI and CNH showed greater phonological priming from high- than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ between the CHI and CNH; thus these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI’s phonological knowledge. Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than from the words, a pattern consistent with the prediction that children are more aware of the phonetics-phonology content of nonwords. This overall pattern of similarity between the groups was qualified by the finding that the CHI showed more nearly equal priming by the high- vs. low-fidelity nonwords than the CNH did; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the nonwords presented AV.

    CONCLUSIONS: With minor exceptions, phonological priming in the CHI and CNH showed more similarities than differences. Importantly, we documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI.
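
    The non-intact auditory tokens described in these studies (e.g., −b/ag) have the onset consonant excised from the auditory track while the visual track stays intact. The abstracts do not detail the editing procedure, so the following is only an illustrative sketch: it assumes a hand-marked consonant/rhyme boundary and replaces the onset with silence so the token keeps its original duration (and hence its alignment with the intact visual track). The file names, boundary value, and use of the soundfile library are assumptions.

    ```python
    # Illustrative sketch (not the authors' documented procedure): build a
    # non-intact token such as -b/ag by silencing the hand-marked onset
    # consonant of a recorded word, preserving total duration so the audio
    # stays time-aligned with the intact visual track.
    import soundfile as sf  # assumed WAV I/O library

    def excise_onset(in_path, out_path, boundary_s):
        """Silence everything before the consonant/rhyme boundary."""
        audio, rate = sf.read(in_path)
        excised = audio.copy()
        excised[: int(boundary_s * rate)] = 0.0  # zero out the onset samples
        sf.write(out_path, excised, rate)

    # Hypothetical example: the /b/ of "bag" ends about 80 ms into the file.
    excise_onset("bag.wav", "bag_no_onset.wav", boundary_s=0.080)
    ```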