More than meets the ear: Investigating the impact of audiovisual speech cues and supportive sentence context during bilingual speech perception in noise in younger and older adults

Abstract

This dissertation investigates speech perception in noise in a first (L1) and second (L2) language, in both younger and older adults. The studies presented here used a sentence-perception-in-noise task during which participants were asked to identify sentence-terminal words across three different modalities (visual-only, auditory-only, audiovisual) and two levels of context (low context, moderate context). Furthermore, baseline perceptual load (i.e., L1, auditory-only, low context) was equated across participants. Manuscript 1 investigated speech perception in noise in young adult English-French/French-English bilinguals. The results suggest that: 1) young adult bilinguals benefit from visual speech cues and sentence context in both of their languages, 2) the benefit of visual speech cues is proportionally greater in L2 compared to L1, and 3) there is a complex interplay between L2 experience and L2 performance during speech perception in noise. Manuscript 2 contrasted bilingual speech perception in noise in younger and older English-French/French-English bilinguals. There were no age differences in performance, except for poorer lip-reading in older adults. Therefore, when baseline perceptual load is equated, younger and older bilinguals benefit from visual speech cues and sentence context to the same extent. Unlike younger adults, older adults showed a similar benefit from visual speech cues in L1 and L2, suggesting that bilinguals' reliance on visual speech cues may decrease as they accumulate experience in L2. Manuscript 3 examined bilinguals' tendency to look at the speaker's nose/mouth around the onset of sentence-terminal words during speech perception in noise. Individuals looked at the speaker's mouth more during audiovisual speech compared to auditory-only speech (i.e., when presented with a still face), but generally no age or language differences were observed. However, the proportions of time spent looking at the mouth in L1 and L2 were generally positively correlated, suggesting a role for individual preference/style rather than language experience during L2 speech perception in noise. Overall, this dissertation contributes to the literature on the benefits of visual speech cues and supportive sentence context during speech perception in noise. Furthermore, the results constitute an early step in developing more comprehensive models of speech perception in noise that include diverse listeners.