EEG potentials associated with artificial grammar learning in the primate brain
Electroencephalography (EEG) has identified human brain potentials elicited by Artificial Grammar (AG) learning paradigms, which present participants with rule-based sequences of stimuli. Nonhuman animals are sensitive to certain AGs; therefore, evaluating which EEG Event Related Potentials (ERPs) are associated with AG learning in nonhuman animals could identify evolutionarily conserved processes. We recorded EEG potentials during an auditory AG learning experiment in two Rhesus macaques. The animals were first exposed to sequences of nonsense words generated by the AG. Then surface-based ERPs were recorded in response to sequences that were "consistent" with the AG and "violation" sequences containing illegal transitions. The AG violations strongly modulated an early component, potentially homologous to the Mismatch Negativity (mMMN), a P200 and a late frontal positivity (P500). The macaque P500 is similar in polarity and time of occurrence to a late EEG positivity reported in human AG learning studies but might differ in functional role.
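The consistent/violation contrast in this abstract amounts to averaging EEG epochs per condition and inspecting the difference wave. A minimal sketch on synthetic data (the sampling rate, epoch length, trial count, and effect size here are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                         # sampling rate in Hz (assumed)
n_trials, n_samples = 100, 400   # 100 epochs of 0.8 s each (assumed)

# Synthetic single-trial EEG: noise, plus a positivity around 500 ms
# added only to "violation" trials to mimic a P500-like effect
t = np.arange(n_samples) / fs
p500 = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))   # Gaussian bump at 500 ms
consistent = rng.normal(0, 1, (n_trials, n_samples))
violation = rng.normal(0, 1, (n_trials, n_samples)) + 2.0 * p500

# ERPs are the across-trial averages; the violation effect is the difference wave
erp_consistent = consistent.mean(axis=0)
erp_violation = violation.mean(axis=0)
difference = erp_violation - erp_consistent

peak_latency = t[np.argmax(difference)]   # should fall near 0.5 s
```

Averaging suppresses trial-to-trial noise by roughly the square root of the trial count, which is why the late positivity stands out in the difference wave even though it is invisible in single trials.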
Emergence of the cortical encoding of phonetic features in the first year of life.
Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.
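Temporal response functions of the kind used in this study are commonly estimated as a forward model: ridge regression of the EEG onto time-lagged stimulus features. A minimal sketch on synthetic data (the sampling rate, feature rate, lag window, and regularisation strength are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 64                    # analysis sampling rate in Hz (assumed)
n = fs * 60                # one minute of data

# Synthetic sparse phonetic-feature regressor, and EEG that responds
# to it 100 ms later (plus noise)
stim = (rng.random(n) < 0.05).astype(float)
true_lag = int(0.1 * fs)
eeg = 1.5 * np.roll(stim, true_lag) + rng.normal(0, 0.5, n)

# Build a design matrix of stimulus copies at lags 0-300 ms, then solve
# the ridge-regularised normal equations; the weights over lags are the TRF
lags = np.arange(0, int(0.3 * fs))
X = np.stack([np.roll(stim, L) for L in lags], axis=1)
lam = 1.0                  # ridge parameter (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

est_lag_s = lags[np.argmax(trf)] / fs   # recovered response latency
```

In a real analysis the feature matrix would hold multiple phonetic features and the regularisation parameter would be chosen by cross-validation; the circular `np.roll` shift is a simplification acceptable only for this sketch.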
Decoding speech information from EEG data with 4-, 7- and 11-month-old infants: Using convolutional neural network, mutual information-based and backward linear models.
Background: Computational models that successfully decode neural activity into speech are increasing in the adult literature, with convolutional neural networks (CNNs), backward linear models, and mutual information (MI) models all being applied to neural data in relation to speech input. This is not the case in the infant literature.
New method: Three different computational models, two novel for infants, were applied to decode low-frequency speech envelope information. Previously-employed backward linear models were compared to novel CNN and MI-based models. Fifty infants provided EEG recordings when aged 4, 7, and 11 months, while listening passively to natural speech (sung or chanted nursery rhymes) presented by video with a female singer.
Results: Each model computed speech information for these nursery rhymes in two different low-frequency bands, delta and theta, thought to provide different types of linguistic information. All three models demonstrated significant levels of performance for delta-band neural activity from 4 months of age, with two of three models also showing significant performance for theta-band activity. All models also demonstrated higher accuracy for the delta-band neural responses. None of the models showed developmental (age-related) effects.
Comparisons with existing methods: The data demonstrate that the choice of algorithm used to decode speech envelope information from neural activity in the infant brain determines the developmental conclusions that can be drawn.
Conclusions: The modelling shows that better understanding of the strengths and weaknesses of each modelling approach is fundamental to improving our understanding of how the human brain builds a language system.
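The backward linear models mentioned above run the regression in the opposite direction to a TRF: they reconstruct the speech envelope from time-lagged multichannel EEG and score the decoder by the correlation between reconstructed and actual envelopes on held-out data. A minimal sketch on synthetic data (channel count, lag window, ridge parameter, and signal-to-noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n, n_ch = 64, 64 * 120, 8    # 2 min of 8-channel data (assumed)

# Synthetic slow speech envelope, and EEG channels that each carry a
# delayed copy of it buried in noise
env = 2 * np.convolve(rng.normal(0, 1, n), np.ones(16) / 16, mode="same")
lags_true = rng.integers(2, 10, n_ch)
eeg = np.stack([np.roll(env, l) + rng.normal(0, 1, n) for l in lags_true],
               axis=1)

# Backward model: regress the envelope on EEG at decoder lags 0-250 ms
dec_lags = np.arange(0, int(0.25 * fs))
X = np.concatenate([np.roll(eeg, -L, axis=0) for L in dec_lags], axis=1)

# Train on the first half, evaluate reconstruction accuracy on the second
half = n // 2
lam = 100.0                       # ridge parameter (assumed)
w = np.linalg.solve(X[:half].T @ X[:half] + lam * np.eye(X.shape[1]),
                    X[:half].T @ env[:half])
pred = X[half:] @ w
r = np.corrcoef(pred, env[half:])[0, 1]   # held-out reconstruction accuracy
```

The held-out correlation `r` is the usual performance measure in this literature; band-specific decoding (delta vs theta) would simply filter `env` and `eeg` into the relevant band before fitting.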
How bilingualism modulates selective attention in children
There is substantial evidence that learning and using multiple languages modulates selective attention in children. The current study investigated the mechanisms that drive this modification. Specifically, we asked whether the need for constant management of competing languages in bilinguals increases attentional capacity, or draws on the available resources such that they need to be economised to support optimal task performance. Monolingual and bilingual children aged 7-12 attended to a narrative presented in one ear, while ignoring different types of interference in the other ear. We used EEG to capture the neural encoding of attended and unattended speech envelopes, and assess how well they can be reconstructed from the responses of the neuronal populations that encode them. Despite equivalent behavioral performance, monolingual and bilingual children encoded attended speech differently, with the pattern of encoding across conditions in bilinguals suggesting a redistribution of the available attentional capacity, rather than its enhancement.
Cortical Oscillations in Pre-verbal Infants Track Rhythmic Speech and Non-speech Stimuli
The foundations for language acquisition are laid in infancy. A key feature of infant-directed speech (IDS) is that the slowest modulations of its amplitude envelope (~2 Hz) contain more energy than in adult-directed speech. These slow modulations may provide a cross-language rhythmic scaffold for the neural tracking of speech in infancy. To investigate relations between early neural processing of speech and language acquisition in English, the BabyRhythm project followed 113 infants during infancy and toddlerhood. The neural predictor of language development reported here was the cortical tracking of slow, rhythmic audiovisual stimuli, processing of which is known to differ in older children with dyslexia. To find out how such stimuli are tracked early in development, infants were presented with videos of a woman repeating the syllable "Ta" twice per second, and a ball bouncing on a drum to create a 2 Hz beat. At the ages of six and nine months, infants exhibited a significant peak in EEG power at 2 Hz when listening to these stimuli, indicating that the infant brain was responding to these stimuli at the expected frequency. Time-frequency analysis showed increased inter-trial EEG phase coherence at 2 Hz, suggesting that the increase in oscillatory power was driven by the stimuli. There were no differences in how the speech and non-speech stimuli were tracked. These results indicate that the infant brain can track the rhythm of slow auditory stimuli. They lay the foundation for future investigation of how individual differences in tracking might relate to later language acquisition.
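The two measures in this abstract, a spectral power peak at the stimulation rate and inter-trial phase coherence (ITC), can both be computed from per-trial Fourier spectra. A minimal sketch on synthetic data (trial count, trial length, sampling rate, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur, n_trials = 100, 5, 60    # 60 trials of 5 s at 100 Hz (assumed)
t = np.arange(fs * dur) / fs

# Synthetic trials: a stimulus-locked 2 Hz component embedded in noise
trials = np.stack([np.sin(2 * np.pi * 2 * t) + rng.normal(0, 2, t.size)
                   for _ in range(n_trials)])

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectra = np.fft.rfft(trials, axis=1)

# Trial-averaged power spectrum, and ITC as the length of the mean
# unit phasor across trials (1 = perfect phase locking, ~0 = random phase)
power = (np.abs(spectra) ** 2).mean(axis=0)
itc = np.abs((spectra / np.abs(spectra)).mean(axis=0))

peak_freq = freqs[np.argmax(power)]             # expected near 2 Hz
itc_2hz = itc[np.argmin(np.abs(freqs - 2.0))]   # expected near 1
```

Because the 2 Hz component has a fixed phase relative to trial onset, it survives both the power average and the phasor average, while the noise cancels in the ITC; this is the logic behind attributing the oscillatory power increase to the stimulus.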