38 research outputs found

    EEG potentials associated with artificial grammar learning in the primate brain

    Get PDF
    Abstract: Electroencephalography (EEG) has identified human brain potentials elicited by Artificial Grammar (AG) learning paradigms, which present participants with rule-based sequences of stimuli. Nonhuman animals are sensitive to certain AGs; therefore, evaluating which EEG Event-Related Potentials (ERPs) are associated with AG learning in nonhuman animals could identify evolutionarily conserved processes. We recorded EEG potentials during an auditory AG learning experiment in two Rhesus macaques. The animals were first exposed to sequences of nonsense words generated by the AG. Then surface-based ERPs were recorded in response to sequences that were ‘consistent’ with the AG and ‘violation’ sequences containing illegal transitions. The AG violations strongly modulated an early component, potentially homologous to the Mismatch Negativity (here a macaque MMN, ‘mMMN’), a P200 and a late frontal positivity (P500). The macaque P500 is similar in polarity and time of occurrence to a late EEG positivity reported in human AG learning studies but might differ in functional role.

    Emergence of the cortical encoding of phonetic features in the first year of life.

    Get PDF
    Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants aged 4, 7 and 11 months, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.
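A temporal response function (TRF) of the kind used in the study above is, at its core, a regularised linear mapping from time-lagged stimulus features to the EEG. The sketch below (plain NumPy on synthetic data; the function name, lag range and ridge parameter are illustrative assumptions, not the study's actual pipeline) shows the basic estimation step:

```python
import numpy as np

def estimate_trf(stimulus, eeg, lags, alpha=1.0):
    """Estimate a forward TRF by ridge regression: the EEG signal is
    modelled as a weighted sum of time-lagged copies of a 1-D stimulus
    feature (e.g. the speech envelope)."""
    n = len(stimulus)
    # Design matrix: one column per time lag (in samples).
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    XtX = X.T @ X + alpha * np.eye(len(lags))
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic check: an "EEG" channel that is the stimulus delayed by
# 3 samples should yield a TRF that peaks at lag 3.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = np.roll(stim, 3)
trf = estimate_trf(stim, eeg, lags=list(range(8)), alpha=1e-3)
```

Real analyses additionally cross-validate, average TRFs across subjects, and compare feature sets (envelope vs. phonetic features) to assess acoustic invariance; none of that is shown here.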

    Decoding speech information from EEG data with 4-, 7- and 11-month-old infants: Using convolutional neural network, mutual information-based and backward linear models.

    Get PDF
    Background: Computational models that successfully decode neural activity into speech are increasing in the adult literature, with convolutional neural networks (CNNs), backward linear models, and mutual information (MI) models all being applied to neural data in relation to speech input. This is not the case in the infant literature.
    New method: Three different computational models, two novel for infants, were applied to decode low-frequency speech envelope information. Previously employed backward linear models were compared to novel CNN and MI-based models. Fifty infants provided EEG recordings when aged 4, 7, and 11 months, while listening passively to natural speech (sung or chanted nursery rhymes) presented by video with a female singer.
    Results: Each model computed speech information for these nursery rhymes in two different low-frequency bands, delta and theta, thought to provide different types of linguistic information. All three models demonstrated significant levels of performance for delta-band neural activity from 4 months of age, with two of the three models also showing significant performance for theta-band activity. All models also demonstrated higher accuracy for the delta-band neural responses. None of the models showed developmental (age-related) effects.
    Comparisons with existing methods: The data demonstrate that the choice of algorithm used to decode speech envelope information from neural activity in the infant brain determines the developmental conclusions that can be drawn.
    Conclusions: The modelling shows that better understanding of the strengths and weaknesses of each modelling approach is fundamental to improving our understanding of how the human brain builds a language system.
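Of the three approaches, the backward linear model is the simplest to sketch: time-lagged multichannel EEG is regressed onto the speech envelope, and decoding accuracy is scored as the correlation between the reconstructed and actual envelope. A minimal illustration on synthetic data follows (plain NumPy; the names, lags and ridge parameter are assumptions, and the CNN and MI models, cross-validation, and delta/theta band-pass filtering are all omitted):

```python
import numpy as np

def backward_decode(eeg, envelope, lags, alpha=1.0):
    """Reconstruct a speech envelope from multichannel EEG with a
    backward linear (ridge) model. `lags` are non-negative sample
    offsets: lag k uses the EEG k samples AFTER each envelope sample,
    since the neural response follows the stimulus."""
    n, n_ch = eeg.shape
    X = np.zeros((n, n_ch * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)
        if lag > 0:
            shifted[-lag:] = 0  # zero the wrapped-around samples
        X[:, j * n_ch:(j + 1) * n_ch] = shifted
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                        X.T @ envelope)
    recon = X @ w
    # Score reconstruction as Pearson correlation with the true envelope.
    return recon, float(np.corrcoef(recon, envelope)[0, 1])

# Synthetic check: two "EEG" channels that are noisy, delayed copies of
# the envelope should permit near-perfect reconstruction.
rng = np.random.default_rng(1)
env = rng.standard_normal(1500)
eeg = np.stack([np.roll(env, d) + 0.1 * rng.standard_normal(1500)
                for d in (2, 4)], axis=1)
recon, r = backward_decode(eeg, env, lags=range(6), alpha=1e-2)
```

The abstract's central point follows directly from this setup: the reconstruction score depends on the decoder family and its regularisation, so different algorithms applied to the same infant EEG can support different developmental conclusions.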

    How bilingualism modulates selective attention in children

    No full text
    Abstract: There is substantial evidence that learning and using multiple languages modulates selective attention in children. The current study investigated the mechanisms that drive this modification. Specifically, we asked whether the need for constant management of competing languages in bilinguals increases attentional capacity, or draws on the available resources such that they need to be economised to support optimal task performance. Monolingual and bilingual children aged 7–12 attended to a narrative presented in one ear, while ignoring different types of interference in the other ear. We used EEG to capture the neural encoding of attended and unattended speech envelopes, and to assess how well they can be reconstructed from the responses of the neuronal populations that encode them. Despite equivalent behavioral performance, monolingual and bilingual children encoded attended speech differently, with the pattern of encoding across conditions in bilinguals suggesting a redistribution of the available attentional capacity, rather than its enhancement.

    EEG data

    No full text
    Raw and preprocessed EEG data used in the analysis.