The role of infant-directed speech in language development of infants with hearing loss
It is estimated that approximately two out of every 1000 infants worldwide are born with unilateral or bilateral hearing loss (HL). Congenital HL, which refers to HL present at birth, has major negative effects on infants’ speech and language acquisition. Although such negative effects can be mitigated by early access to hearing devices and intervention, the majority of children with HL show delayed language development in comparison with their normal-hearing (NH) peers. The aim of this thesis was to provide a deeper empirical understanding of the acoustic features of infant-directed speech (IDS) to infants with HL compared to infants with NH of the same chronological age and the same hearing age. Three specific objectives were set for this thesis. The first was to investigate the effects of HL and the degree of hearing experience on the acoustic features of IDS. The second was to assess adjustments in the features of IDS to infants with HL across development, as they acquire more hearing experience. The third was to evaluate the role of specific IDS components, such as vowel hyperarticulation and exaggerated prosody, in lexical processing in infants with NH from six to 18 months of age, at both neural and behavioural levels. These objectives were addressed in four experiments. The first experiment used a cross-sectional design to assess the acoustic features of IDS to infants with HL, with a specific focus on whether and how infants’ chronological age and hearing age may affect these features. Experiment 2 was a longitudinal investigation of the acoustic features of IDS to infants with HL and to infants with NH of the same hearing age, which sought to identify how infants’ changing linguistic needs may shape maternal IDS across development.
Experiments 3 and 4 focused on lexical processing in six-, 10-, and 18-month-old infants, and aimed to identify the role of specific IDS features in facilitating lexical processing in infants with NH at different stages of language acquisition. The results of this thesis demonstrated that mothers adjust their IDS to infants with HL in a manner similar to IDS to infants with NH. However, some differences are evident in the production of the corner vowels /i/ and /u/. These differences persist even when controlling for the amount of hearing experience accrued by infants with HL. Additionally, the findings demonstrated a relation between vowel production in IDS and infants’ receptive vocabulary, indicating that exaggerated vowel production in maternal IDS may play a fostering role in infants’ language acquisition. This linguistic role was confirmed, as vowel hyperarticulation was also found to facilitate lexical processing at the neural level in 10-month-old infants. For older infants (18 months), however, the findings demonstrated that natural IDS with heightened pitch and vowel hyperarticulation represents the richest input for facilitating infants’ speech processing. In summary, the findings of this thesis suggest that congenital HL in infants affects maternal production of vowels in IDS, resulting in less distinct vowel categories. This may arise from mothers adjusting their vowel production to infants’ reduced vowel discrimination abilities, thereby tuning their IDS to infants’ linguistic competence. Additionally, receptive vocabulary seems not to be affected by this, indicating a role for other cues in building a lexicon in infants with HL that warrants further investigation. Furthermore, the findings suggest that pitch and vowel hyperarticulation in IDS play significant roles in facilitating lexical processing in the first two years of life.
The effects of speaking style, noise, and semantic context on speech segmentation : evidence from artificial language learning and eye-tracking experiments
This dissertation reports a series of three experimental studies that investigated how variation in speech clarity and intelligibility, as well as the presence of semantic context, affects listeners’ speech segmentation. An artificial language learning experiment in Study 1 (Chapter 2) showed that speaking clearly, relative to speaking conversationally, improved segmentation of nonsense words by statistical learning. However, the improvement was observed only in the quiet listening condition, not in noise. Using the visual-world eye-tracking paradigm, Study 2 (Chapter 3) examined the clear speech segmentation benefit during real-time processing of meaningful sentences in which the target word was temporarily ambiguous with a competitor across a word boundary. The results revealed that, relative to conversational speech, clear speech facilitated target word segmentation even before the target and competitor could be disambiguated based on phonemic information. The facilitation not only emerged in quiet but also extended to the noisy listening condition. Finally, building upon Study 2, Study 3 (Chapter 4) employed eye-tracking to further explore how this clear speech facilitation effect is modulated by semantic cues from the preceding context. It was found that while the clear speech segmentation benefit was eliminated when the context already biased listeners towards the target, it was still present when the context favored the unintended competitor. Taken together, the key findings from the three studies advance the understanding of the relative importance of signal-dependent and signal-independent sources of information during segmentation in realistic communicative settings. The results also provide novel insight into the well-documented clear speech processing benefits by demonstrating that improved segmentation may in part underlie these benefits.
The dissertation also has theoretical implications, suggesting directions for refining current models of spoken word recognition and segmentation.