285 research outputs found
Acoustic features of infant-directed speech to infants with hearing loss
Published Online: 03 December 2020

This study investigated the effects of hearing loss and hearing experience on the acoustic features of infant-directed speech (IDS) to infants with hearing loss (HL), compared to controls with normal hearing (NH) matched by either chronological or hearing age (experiment 1), and across development in infants with hearing loss, as well as the relation between IDS features and infants' developing lexical abilities (experiment 2). Both experiments included detailed acoustic analyses of mothers' productions of the three corner vowels /a, i, u/ and of utterance-level pitch in IDS and in adult-directed speech. Experiment 1 demonstrated that IDS to infants with HL was acoustically more variable than IDS to hearing-age-matched infants with NH. Experiment 2 yielded no changes in IDS features over development; however, the results did show a positive relationship between formant distances in mothers' speech and infants' concurrent receptive vocabulary size, as well as between vowel hyperarticulation and infants' expressive vocabulary. These findings suggest that despite infants' HL and thus diminished access to speech input, infants with HL are exposed to IDS with generally similar acoustic qualities as are infants with NH. However, some differences persist, indicating that infants with HL might receive less intelligible speech.

This research was supported by HEARing Cooperative Research Centre Grant No. 82631, "The Seeds of Language Development," to D.B. The second author's work is supported by the Basque Government through the BERC 2018-2021 program and by the Spanish Ministry of Science and Innovation through the Ramon y Cajal Research Fellowship PID2019-105528GA-I00.
We would like to thank all the parents and infants for participating in the study; "The Shepherd Centre" in Sydney and Wollongong and "Hear and Say" in Brisbane for their help in the recruitment of participants with HL; and Benjawan Kasisopa, Maria Christou-Ergos, Hana Zjakic, and Scott O'Loughlin for their assistance with data collection.
Infants' Sensitivity to Lexical Tone and Word Stress in Their First Year: A Thai and English Cross-Language Study
Published online: 23 Aug 2021

Non-tone language infants' native language recognition is based first on supra-segmental and then on segmental cues, but this trajectory is unknown for tone-language infants. This study investigated non-tone (English) and tone (Thai) language 6- to 10-month-old infants' preference for English vs. Thai one-syllable words (containing segmental and tone cues) and two-syllable words (additionally containing stress cues). A preference for their native one-syllable words was observed in each of the two groups of infants, but this was not the case for two-syllable words, where Thai-learning infants showed no native-language preference. These findings indicate that as early as six months of age, infants acquiring tone and non-tone languages identify their native language by relying solely on lexical tone cues, but tone language infants no longer show successful identification of their native language when two pitch-based cues co-occur in the signal.

The first author's work was supported by the European Union's Horizon 2020 Marie Sklodowska-Curie Individual Fellowships European Programme under Grant Agreement No. 798908, Optimising IDS, and she receives support from the Basque Government through the BERC 2018-2021 program and from the Spanish Ministry of Science and Innovation through the Ramon y Cajal Research Fellowship, PID2019-105528GA-I00.
The Role of Paired Associate Learning in Acquiring Letter-Sound Correspondences: A Longitudinal Study of Children at Family Risk for Dyslexia
Published online: 08 Dec 2020

Visual-verbal paired associate learning (PAL) is strongly related to reading acquisition, possibly indexing a distinct cross-modal mechanism for learning letter-sound associations. We measured linguistic abilities (nonword repetition, vocabulary size) longitudinally at 3.5 and 4.0 years, and visual-verbal PAL and letter knowledge at 4.0 and 4.5 years, in pre-reading children either at family risk for dyslexia (N = 27) or not (N = 25). Only nonword repetition predicted individual differences in later letter-sound knowledge, and PAL made neither a cross-sectional nor a longitudinal contribution. The data show a continuous relationship between linguistic processing abilities and letter-sound learning, with no independent role for PAL.

This research was supported by the Australian Research Council grant DP110105123, "The Seeds of Literacy", to the 2nd and 3rd authors. The first author's work is supported by the Basque Government through the BERC 2018-2021 program and by the Spanish Ministry of Science and Innovation through the Ramon y Cajal Research Fellowship, PID2019-105528GA-I00.
Delayed development of phonological constancy in toddlers at family risk for dyslexia
Available online 20 June 2019.

Phonological constancy refers to infants' ability to disregard variations in the phonetic realisation of speech sounds that do not indicate lexical contrast, e.g., when listening to accented speech. In typically developing infants, this ability develops between 15 and 19 months of age, coinciding with the consolidation of infants' native phonological competence and vocabulary growth. Here we investigated the developmental time course of phonological constancy in infants at family risk for developmental dyslexia, using a longitudinal design. Developmental dyslexia is a disorder affecting the acquisition of reading and spelling skills, and it also affects early auditory processing, speech perception, and lexical acquisition. Infants at risk and not at risk for dyslexia, based on a family history of dyslexia, participated when they were 15, 19, and 26 months of age. Phonological constancy was indexed by comparing at-risk and not-at-risk infants' ability to recognise familiar words in two preferential looking tasks: (1) a task using words presented in their native accent, and (2) a task using words presented in a non-native accent. We expected a delay in phonological constancy for the at-risk infants. As predicted, in the non-native accent task, not-at-risk infants recognised familiar words by 19 months, but at-risk infants did not. The control infants thus exhibited phonological constancy. By 26 months, at-risk toddlers did show successful word recognition in the native accent task. However, for the non-native accent task at 26 months, neither at-risk nor control infants showed familiar word recognition. These findings are discussed in terms of the impact of family risk for dyslexia on toddlers' consolidation of early phonological and lexical skills.

This research was supported by the Australian Research Council grant DP110105123, "The Seeds of Literacy", to the 3rd and 2nd authors.
A Tale of Two Features: Perception of Cantonese Lexical Tone and English Lexical Stress in Cantonese-English Bilinguals
© 2015 Tong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

This study investigated the similarities and differences in perception of Cantonese tones and English stress patterns by Cantonese-English bilingual children, adults, and English monolingual adults. All three groups were asked to discriminate pairs of syllables that minimally differed in either Cantonese tone or in English stress. Bilingual children's performance on tone perception was comparable to their performance on stress perception. By contrast, bilingual adults' performance on tone perception was lower than their performance on stress perception, and there was a similar pattern in English monolingual adults. Bilingual adults tended to perform better than English monolingual adults on both the tone and stress perception tests. A significant correlation between tone perception and stress perception performance was found in bilingual children but not in bilingual adults. All three groups showed lower accuracy in the high rising-low rising contrast than in any of the other 14 Cantonese tone contrasts. The acoustic analyses revealed that average F0, F0 onset, and F0 major slope were the critical acoustic correlates of Cantonese tones, whereas multiple acoustic correlates were salient in English stress, including average F0, spectral balance, duration, and intensity. We argue that participants' difficulty in perceiving high rising-low rising contrasts originated from the contrasts' similarities in F0 onset and average F0; indeed, the difference between their major slopes was the only cue with which to distinguish them.
Acoustic-perceptual correlation analyses showed that although the average F0 and F0 onset were associated with tone perception performance in all three groups, F0 major slope was only associated with tone perception in the bilingual adult group. These results support a dynamic interactive account of suprasegmental speech perception by emphasizing the positive prosodic transfer between Cantonese tone and English stress, and the role that level of bilingual language experience and age play in shaping suprasegmental speech perception.
Language specificity in cortical tracking of speech rhythm at the mora, syllable, and foot levels
Published: 05 August 2022

Recent research shows that adults' neural oscillations track the rhythm of the speech signal. However, the extent to which this tracking is driven by the acoustics of the signal, or by language-specific processing, remains unknown. Here, adult native listeners of three rhythmically different languages (English, French, Japanese) were compared on their cortical tracking of speech envelopes synthesized in their three native languages, which allowed for coding at each of the three languages' dominant rhythmic units, respectively the foot (2.5 Hz), syllable (5 Hz), and mora (10 Hz) levels. The three language groups were also tested with a sequence in a non-native language, Polish, and a non-speech vocoded equivalent, to investigate possible differential speech/non-speech processing. The results first showed that cortical tracking was most prominent at 5 Hz (the syllable rate) for all three groups, but the French listeners showed enhanced tracking at 5 Hz compared to the English and the Japanese groups. Second, across groups, there were no differences in responses for speech versus non-speech at 5 Hz (the syllable rate), but there was better tracking for speech than for non-speech at 10 Hz (not the syllable rate). Together, these results provide evidence for both language-general and language-specific influences on cortical tracking.

In Australia, this work was supported by a Transdisciplinary and Innovation Grant from the Australian Research Council Centre of Excellence in Dynamics of Language. In France, the work was funded by an ANR-DFG grant (ANR-16-FRAL-0007) and funds from LABEX EFL (ANR-10-LABX-0083) to TN, and a DIM Cerveau et Pensées grant (2013 MOBIBRAIN). In Japan, the work was funded by a JSPS grant-in-aid for Scientific Research S (16H06319) and for Specially Promoted Research (20H05617), and a MEXT grant for Innovative Areas #4903 (17H06382) to RM.
Congenital Amusia (or Tone-Deafness) Interferes with Pitch Processing in Tone Languages
Congenital amusia is a neurogenetic disorder that affects music processing and that is ascribed to a deficit in pitch processing. We investigated whether this deficit extends to pitch processing in speech, notably the pitch changes used to contrast lexical tones in tonal languages. Congenital amusics and matched controls, all non-tonal language speakers, were tested for lexical tone discrimination in Mandarin Chinese (Experiment 1) and in Thai (Experiment 2). Tones were presented in pairs and participants were required to make same/different judgments. Experiment 2 additionally included musical analogs of Thai tones for comparison. Performance of congenital amusics was inferior to that of controls for all materials, suggesting a domain-general pitch-processing deficit. The pitch deficit of amusia is thus not limited to music, but may compromise the ability to process and learn tonal languages. Combined with acoustic analyses of the tone material, the present findings provide new insights into the nature of the pitch-processing deficit exhibited by amusics.
Seeing a talking face matters: The relationship between cortical tracking of continuous auditory-visual speech and gaze behaviour in infants, children and adults
Available online 15 April 2022

An auditory-visual speech benefit, the benefit that visual speech cues bring to auditory speech perception, is experienced from early on in infancy and continues to be experienced to an increasing degree with age. While there is both behavioural and neurophysiological evidence for children and adults, only behavioural evidence exists for infants, as no neurophysiological study has provided a comprehensive examination of the auditory-visual speech benefit in infants. It is also surprising that most studies on the auditory-visual speech benefit do not concurrently report looking behaviour, especially since the auditory-visual speech benefit rests on the assumption that listeners attend to a speaker's talking face and since there are meaningful individual differences in looking behaviour. To address these gaps, we simultaneously recorded electroencephalographic (EEG) and eye-tracking data from 5-month-olds, 4-year-olds and adults as they were presented with a speaker in auditory-only (AO), visual-only (VO), and auditory-visual (AV) modes. Cortical tracking analyses that involved forward encoding models of the speech envelope revealed that there was an auditory-visual speech benefit [i.e., AV > (A + V)], evident in 5-month-olds and adults but not 4-year-olds. Examination of cortical tracking accuracy in relation to looking behaviour showed that infants' relative attention to the speaker's mouth (vs. eyes) was positively correlated with cortical tracking accuracy of VO speech, whereas adults' attention to the display overall was negatively correlated with cortical tracking accuracy of VO speech. This study provides the first neurophysiological evidence of an auditory-visual speech benefit in infants, and our results suggest ways in which current models of speech processing can be fine-tuned.

This research was funded by a doctoral scholarship to the first author funded by the MARCS Institute at Western Sydney University and the HEARing Cooperative Research Centre (CRC), and by HEARing CRC funding to the last author. The second author's work is supported by the Basque Government through the BERC 2018-2021 program and PIBA PI-2019-0054, and by the Spanish Ministry of Science and Innovation through the Ramon y Cajal Research Fellowship, PID2019-105528GA-I00.
Depression and Anxiety in the Postnatal Period: An Examination of Infants' Home Language Environment, Vocalizations, and Expressive Language Abilities
Published 2020 August 3

This longitudinal study investigated the effects of maternal emotional health concerns on infants' home language environment, vocalization quantity, and expressive language skills. Mothers and their infants (at 6 and 12 months; 21 mothers with depression and/or anxiety and 21 controls) provided day-long home-language recordings. Compared with controls, risk group recordings contained fewer mother-infant conversational turns and infant vocalizations, but daily adult word counts showed no group difference. Furthermore, conversational turns and infant vocalizations were stronger predictors of infants' 18-month vocabulary size than depression and anxiety measures. However, anxiety levels moderated the effect of conversational turns on vocabulary size. These results suggest that variability in mothers' emotional health influences infants' language environment and later language ability.

18569924/University of Western Sydney
Maternal Depression Affects Infants' Lexical Processing Abilities in the Second Year of Life
Published: 12 December 2020

Maternal depression and anxiety have been proposed to increase the risk of adverse outcomes of language development in the early years of life. This study investigated the effects of maternal depression and anxiety on language development using two approaches: (i) a categorical approach that compared lexical abilities in two groups of children, a risk group (mothers with clinical-level symptomatology) and a control non-risk group, and (ii) a continuous approach that assessed the relation between individual mothers' clinical and subclinical symptomatology and their infants' lexical abilities. Infants' lexical abilities were assessed at 18 months of age using an objective lexical processing measure and a parental report of expressive vocabulary. Infants in the risk group exhibited lower lexical processing abilities compared to controls, and maternal depression scores were negatively correlated with infants' lexical processing and vocabulary measures. Furthermore, maternal depression (not anxiety) explained the variance in infants' individual lexical processing performance above the variance explained by their individual expressive vocabulary size. These results suggest that significant differences are emerging in 18-month-old infants' lexical processing abilities, and this appears to be related, in part, to their mothers' depression and anxiety symptomatology during the postnatal period.

This research was supported, in part, by an Australian Postgraduate Award PhD scholarship and a MARCS Institute for Brain, Behaviour and Development Writing Fellowship to the first author, as well as by ARC grant # FL130100014 to the sixth author. The second author's work is supported by the Basque Government through the BERC 2018-2021 program and by the Spanish Ministry of Science and Innovation through the Ramon y Cajal Research Fellowship, PID2019-105528GA-I00.
- …