
    Interpretations of frequency domain analyses of neural entrainment: Periodicity, fundamental frequency, and harmonics

    Brain activity can follow the rhythms of dynamic sensory stimuli, such as speech and music, a phenomenon called neural entrainment. It has been hypothesized that low-frequency neural entrainment in the neural delta and theta bands provides a potential mechanism to represent and integrate temporal information. Low-frequency neural entrainment is often studied using periodically changing stimuli and is analyzed in the frequency domain using Fourier analysis. Fourier analysis decomposes a periodic signal into harmonically related sinusoids, but it is not intuitive how these harmonically related components relate to the response waveform. Here, we explain the interpretation of response harmonics, with a special focus on very low frequency neural entrainment near 1 Hz. We illustrate why a neural response repeating at f Hz does not necessarily generate any component at f Hz in the Fourier spectrum. A strong neural response at f Hz indicates that the time scales of the response waveform within each cycle match the time scales of the stimulus rhythm. Therefore, neural entrainment at very low frequencies implies not only that the neural response repeats at f Hz but also that each period of the response is a slow wave matching the time scale of an f Hz sinusoid.

    With a few exceptions, the literature on face recognition and its neural basis derives from the presentation of single faces. However, in many ecologically typical situations we see more than one face, in different communicative contexts. One of the principal ways in which we interact using our faces is kissing. Although there is no obvious taxonomy of kissing, we kiss in various interpersonal situations (greeting, ceremony, sex), with different goals and partners. Here, we assess the visual cortical responses elicited by viewing different couples kissing with different intents. The study thus lies at the nexus of face recognition, action recognition, and social neuroscience. Magnetoencephalography data were recorded from nine participants in a passive viewing paradigm. We presented images of couples kissing, with the images differing along two dimensions: kiss type and couple type. We quantified event-related field amplitudes and latencies. In each participant, the canonical sequence of event-related fields was observed, including an M100, an M170, and a later M400 response. The two earliest responses were significantly modulated in latency (M100) or amplitude (M170) by the sex composition of the images, with male-male and female-female pairings yielding shorter M100 latencies and larger M170 amplitudes. In contrast, kiss type did not modulate any brain response. The early cortical evoked fields typically associated with the presentation and analysis of single faces are thus differentially sensitive to complex social and action information in kissing face pairs. These early responses, typically associated with perceptual analysis, exhibit a consistent grouping and suggest a strong and rapid sensitivity to the composition of the kissing pairs.
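
    To make the point about harmonics above concrete, the short sketch below (not taken from the paper; sampling rate, duration, and the two example waveforms are assumptions) compares a 1 Hz slow wave with a brief transient repeating at 1 Hz: both repeat at the same rate, but only the slow wave concentrates its energy at the 1 Hz fundamental.

```python
# Minimal sketch (not from the paper; sampling rate, duration, and waveforms
# are assumed): why a response that repeats at f Hz need not produce a strong
# f Hz component in the Fourier spectrum.
import numpy as np

fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 20.0, 1 / fs)   # 20 s of "recording"
f = 1.0                          # repetition rate (Hz)

# Response A: each 1-s cycle is a slow wave matching the 1 Hz time scale.
slow_wave = np.sin(2 * np.pi * f * t)

# Response B: each 1-s cycle contains only a brief 100-ms transient.
transient = np.zeros_like(t)
transient[(t % (1 / f)) < 0.1] = 1.0

def amp_spectrum(x, fs):
    """Single-sided amplitude spectrum via the FFT."""
    return np.fft.rfftfreq(len(x), 1 / fs), np.abs(np.fft.rfft(x)) / len(x)

for name, x in [("slow wave", slow_wave), ("brief transient", transient)]:
    freqs, spec = amp_spectrum(x, fs)
    k = np.argmin(np.abs(freqs - f))                 # bin at the fundamental
    share = spec[k] ** 2 / np.sum(spec[1:] ** 2)     # fraction of non-DC energy at f
    print(f"{name}: amplitude at {f:.0f} Hz = {spec[k]:.3f}, "
          f"fraction of non-DC energy at {f:.0f} Hz = {share:.2f}")
```

    The slow wave puts essentially all of its (non-DC) energy into the 1 Hz bin, whereas the transient spreads most of its energy across higher harmonics even though it also repeats exactly once per second.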

    Oscillatory activity and EEG phase synchrony of concurrent word segmentation and meaning-mapping in 9-year-old children

    When learning a new language, one must segment words from continuous speech and associate them with meanings. These complex processes can be boosted by attentional mechanisms triggered by multi-sensory information. Previous electrophysiological studies suggest that brain oscillations are sensitive to different hierarchical complexity levels of the input, making them a plausible neural substrate for speech parsing. Here, we investigated the functional role of brain oscillations during concurrent speech segmentation and meaning acquisition in sixty 9-year-old children. We collected EEG data during an audio-visual statistical learning task in which children were exposed to a learning condition with consistent word-picture associations and a random condition with inconsistent word-picture associations before being tested on their ability to recall words and word-picture associations. We capitalized on the tendency of neural activity to align to the rate of an external rhythmic stimulus to explore modulations of neural synchronization, and of phase synchronization between electrodes, during multi-sensory word learning. Results showed enhanced power at both the word and syllable rates and increased EEG phase synchronization between frontal and occipital regions in the learning condition compared to the random condition. These findings suggest that multi-sensory cueing and attentional mechanisms play an essential role in children's successful word learning.
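
    As a rough illustration of the electrode-to-electrode phase synchronization measure mentioned above, the sketch below (sampling rate, word rate, and data are assumptions, not the authors' pipeline) band-passes a frontal and an occipital channel around the word rate, extracts instantaneous phase with the Hilbert transform, and computes a phase-locking value.

```python
# Minimal sketch (assumed sampling rate, word rate, and toy data; not the
# authors' pipeline): phase synchronization between a frontal and an occipital
# channel near the word rate, via band-pass filtering, the Hilbert transform,
# and the phase-locking value (PLV).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0        # sampling rate (Hz), assumed
word_rate = 1.33  # assumed word rate (Hz), e.g. trisyllabic words at a ~4 Hz syllable rate

def bandpass(x, lo, hi, fs, order=3):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def phase_locking_value(x, y, lo, hi, fs):
    """PLV between two channels in a narrow band: |mean(exp(i*(phi_x - phi_y)))|."""
    phi_x = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    phi_y = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Toy data: a shared word-rate rhythm plus independent noise in each channel.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * word_rate * t)
frontal = shared + rng.normal(scale=1.0, size=t.size)
occipital = shared + rng.normal(scale=1.0, size=t.size)

plv = phase_locking_value(frontal, occipital, word_rate - 0.3, word_rate + 0.3, fs)
print(f"frontal-occipital PLV near the word rate: {plv:.2f}")
```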

    Electrophysiology of statistical learning: Exploring the online learning process and offline learning product

    First published: 24 December 2019.
    A continuous stream of syllables is segmented into discrete constituents based on the transitional probabilities (TPs) between adjacent syllables by means of statistical learning. However, we still do not know whether people attend to high TPs between frequently co-occurring syllables and cluster them together as parts of the discrete constituents, or attend to low TPs aligned with the edges between constituents and extract them as whole units. Earlier studies on TP-based segmentation also have not distinguished between the segmentation process (how people segment continuous speech) and the learning product (what is learnt by means of statistical learning mechanisms). In the current study, we explored the learning outcome separately from the learning process, focusing on three possible learning products: holistic constituents that are retrieved from memory during the recognition test, clusters of frequently co-occurring syllables, or a set of statistical regularities that can be used to reconstruct legitimate candidates for discrete constituents during the recognition test. Our data suggest that people employ boundary-finding mechanisms during online segmentation by attending to low inter-syllabic TPs during familiarization, and that they also identify potential candidates for discrete constituents based on their statistical congruency with rules extracted during the learning process. Memory representations of recurrent constituents embedded in the continuous speech stream during familiarization facilitate subsequent recognition of these discrete constituents.
    Funding: Secretaría de Estado de Investigación, Desarrollo e Innovación (RTI2018-098317-B-I00); H2020 Marie Skłodowska-Curie Actions (DLV-792331); Ekonomiaren Garapen eta Lehiakortasun Saila, Eusko Jaurlaritza (PI-2017-25); Spanish Ministry of Economy and Competitiveness (MINECO) through the “Severo Ochoa” Programme for Centres/Units of Excellence in R&D (SEV-2015-490); Basque Government (PI-2017-25); European Commission through the Marie Skłodowska-Curie Research Fellowship.
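
    For readers unfamiliar with transitional probabilities, the toy sketch below (invented lexicon and stream, not the study's materials) computes forward TPs between adjacent syllables, TP(B|A) = count(A,B) / count(A), and places boundaries at local TP dips, i.e. the boundary-finding strategy discussed above.

```python
# Minimal sketch (illustrative, not the study's stimuli): forward transitional
# probabilities between adjacent syllables, with word boundaries placed at
# local TP dips ("boundary-finding").
from collections import Counter
import random

words = ["tu-pi-ro", "go-la-bu", "bi-da-ku", "pa-do-ti"]   # toy lexicon
random.seed(1)
stream = [s for _ in range(100) for s in random.choice(words).split("-")]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Segment: insert a boundary wherever the incoming TP dips below its neighbours.
tps = [tp[(a, b)] for a, b in zip(stream, stream[1:])]
boundaries = [i + 1 for i in range(1, len(tps) - 1)
              if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]]

print("within-word TP example, TP(pi|tu):", tp[("tu", "pi")])
print("first few boundary positions:", boundaries[:5])
```

    Within-word TPs come out at 1.0 and between-word TPs near 0.25, so every dip coincides with a word onset in the stream.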

    Online neural monitoring of statistical learning.

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time (RT) task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the RT task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning.
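
    The EEG-based entrainment measure is not spelled out in this summary; one common way to operationalize it is sketched below (frequencies, epoch length, and data are assumptions): inter-trial phase coherence (ITC) at the word rate compared with ITC at the syllable rate.

```python
# Minimal sketch (assumed parameters, not the published analysis): an
# entrainment index contrasting inter-trial phase coherence (ITC) at the
# word rate with ITC at the syllable rate.
import numpy as np

fs = 250.0                     # sampling rate (Hz), assumed
epoch_len = 9.0                # seconds per epoch, assumed
syll_rate, word_rate = 3.3, 1.1  # assumed rates for trisyllabic words

def itc_at(epochs, freq, fs):
    """ITC across epochs at one frequency: |mean over epochs of unit phase vectors|."""
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    spectra = np.fft.rfft(epochs, axis=1)[:, k]
    return np.abs(np.mean(spectra / np.abs(spectra)))

# Toy epochs: a weak word-rate rhythm with a consistent phase plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, epoch_len, 1 / fs)
epochs = 0.3 * np.sin(2 * np.pi * word_rate * t) + rng.normal(size=(40, t.size))

word_itc = itc_at(epochs, word_rate, fs)
syll_itc = itc_at(epochs, syll_rate, fs)
print(f"word ITC = {word_itc:.2f}, syllable ITC = {syll_itc:.2f}, "
      f"word/syllable ratio = {word_itc / syll_itc:.2f}")
```

    A ratio well above 1 indicates phase-consistent activity at the word rate over and above the syllable-rate response, which is the kind of contrast an online entrainment index of word learning relies on.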

    Efficient Low-Frequency SSVEP Detection with Wearable EEG Using Normalized Canonical Correlation Analysis

    Recent studies show that the integrity of core perceptual and cognitive functions may be tested in a short time using Steady-State Visual Evoked Potentials (SSVEPs) at low stimulation frequencies, between 1 and 10 Hz. Wearable EEG systems provide unique opportunities to test these brain functions in diverse populations under out-of-the-lab conditions. However, they also pose significant challenges, as the number of EEG channels is typically limited and the recording conditions might induce high noise levels, particularly at low frequencies. Here we tested the performance of Normalized Canonical Correlation Analysis (NCCA), a frequency-normalized version of CCA, in quantifying SSVEPs from wearable EEG data with stimulation frequencies ranging from 1 to 10 Hz. We validated NCCA on data collected with an 8-channel wearable wireless EEG system based on BioWolf, a compact, ultra-light, ultra-low-power recording platform. The results show that NCCA correctly and rapidly detects SSVEPs at the stimulation frequency within a few cycles of stimulation, even at the lowest frequency (4 s of recording are sufficient for a stimulation frequency of 1 Hz), outperforming a state-of-the-art normalized power spectral measure. Importantly, no preliminary artifact correction or channel selection was required. Potential applications of these results to research and clinical studies are discussed.
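
    NCCA itself is not defined in this summary, so the sketch below shows only the standard CCA approach to SSVEP detection (sine/cosine references at the candidate frequency and its harmonics), together with a hypothetical normalization against flanking frequencies as a rough stand-in for the frequency normalization the authors describe; the frequencies, data, and 8-channel layout are assumptions.

```python
# Minimal sketch (assumptions throughout): CCA-based SSVEP detection with
# sine/cosine references, plus a simple normalization against neighbouring
# frequencies as a stand-in for NCCA (whose exact definition is not given here).
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, t, n_harmonics=2):
    """Sine/cosine references at freq and its harmonics, shape (samples, 2*harmonics)."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def cca_score(eeg, freq, t):
    """Maximal canonical correlation between EEG (samples x channels) and references."""
    u, v = CCA(n_components=1).fit_transform(eeg, reference_signals(freq, t))
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

fs = 250.0
t = np.arange(0, 4.0, 1 / fs)      # 4 s window, enough for a 1 Hz target
rng = np.random.default_rng(0)
stim_freq = 1.0
# Toy 8-channel recording: a weak 1 Hz SSVEP plus noise on every channel.
eeg = 0.2 * np.sin(2 * np.pi * stim_freq * t)[:, None] + rng.normal(size=(t.size, 8))

for f in (1.0, 2.2, 3.4):
    raw = cca_score(eeg, f, t)
    # Hypothetical normalization: divide by the mean score at flanking frequencies.
    flank = np.mean([cca_score(eeg, f + d, t) for d in (-0.35, 0.35)])
    print(f"{f:.1f} Hz: CCA = {raw:.2f}, normalized = {raw / flank:.2f}")
```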

    Characterizing Neural Entrainment to Hierarchical Linguistic Units using Electroencephalography (EEG)

    To understand speech, listeners have to combine the words they hear into phrases and sentences. Recent magnetoencephalography (MEG) and electrocorticography (ECoG) studies show that cortical activity is concurrently entrained/synchronized to the rhythms of multiple levels of linguistic units, including words, phrases, and sentences. Here we investigate whether this phenomenon can be observed using electroencephalography (EEG), a technique that is more widely available than MEG and ECoG. We show that EEG responses concurrently track the rhythms of hierarchical linguistic units such as syllables/words, phrases, and sentences. The strength of the sentential-rate response correlates with how well each subject can detect random words embedded in a sequence of sentences. In contrast, only a syllabic-rate response is observed for an unintelligible control stimulus. In sum, EEG provides a useful tool to characterize neural encoding of hierarchical linguistic units, potentially even in individual participants.

    Brain dynamics sustaining rapid rule extraction from speech

    Language acquisition is a complex process that requires the synergic involvement of different cognitive functions, including extracting and storing the words of the language and the embedded rules that support progressive acquisition of grammatical information. As has been shown in other fields that study learning processes, synchronization mechanisms between neuronal assemblies might play a key role during language learning. In particular, studying these dynamics may help uncover whether different oscillatory patterns sustain item-based learning of words versus rule-based learning from speech input. Therefore, we tracked the modulation of oscillatory neural activity during initial exposure to an artificial language that contained embedded rules. We analyzed both spectral power variations, as a measure of local neuronal ensemble synchronization, and phase coherence patterns, as an index of the long-range coordination of these local groups of neurons. Synchronized activity in the gamma band (20–40 Hz), previously reported to be related to the engagement of selective attention, showed a clear dissociation of local power and phase coherence between distant regions. In this frequency range, local synchrony characterized the subjects who were focused on word identification and was accompanied by increased coherence in the theta band (4–8 Hz). Only those subjects who were able to learn the embedded rules showed increased gamma-band phase coherence between frontal, temporal, and parietal regions.
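
    The spectral power analysis described above can be approximated in several ways; the sketch below (toy data and parameters, not the study's analysis) uses one common recipe, band-pass filtering plus the Hilbert envelope, to compare band-limited power during exposure against a baseline for the theta (4–8 Hz) and gamma (20–40 Hz) bands.

```python
# Minimal sketch (illustrative parameters): band-limited power via band-pass
# filtering and the Hilbert envelope, e.g. gamma (20-40 Hz) during exposure
# relative to a baseline period.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0   # sampling rate (Hz), assumed

def band_power(x, lo, hi, fs, order=4):
    """Mean squared Hilbert envelope of the band-passed signal."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, x)))
    return np.mean(envelope ** 2)

# Toy data: extra 30 Hz activity appears during "exposure" but not at "baseline".
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
baseline = rng.normal(size=t.size)
exposure = baseline + 0.5 * np.sin(2 * np.pi * 30 * t)

for band, (lo, hi) in {"theta": (4, 8), "gamma": (20, 40)}.items():
    change = band_power(exposure, lo, hi, fs) / band_power(baseline, lo, hi, fs)
    print(f"{band} power, exposure / baseline = {change:.2f}")
```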

    No statistical learning advantage in children over adults: Evidence from behaviour and neural entrainment.

    Explicit recognition measures of statistical learning (SL) suggest that children and adults have similar linguistic SL abilities. However, explicit tasks recruit additional cognitive processes that are not directly relevant for SL and may thus underestimate children's true SL capacities. In contrast, implicit tasks and neural measures of SL should be less influenced by explicit, higher-level cognitive abilities and may thus be better suited to capturing developmental differences in SL. Here, we assessed SL during six minutes of exposure to an artificial language in English-speaking children (n = 56, 24 females, M = 9.98 years) and adults (n = 44, 31 females, M = 22.97 years), using explicit and implicit behavioural measures and an EEG measure of neural entrainment. With few exceptions, children and adults showed largely similar performance on the explicit and implicit behavioural tasks, replicating prior work. Children and adults also demonstrated robust neural entrainment to both words and syllables, with a similar time course of word-level entrainment, reflecting learning of the hidden word structure. These results demonstrate that children and adults have similar linguistic SL abilities, even when learning is assessed through implicit, performance-based and neural measures.

    Acoustically driven cortical delta oscillations underpin prosodic chunking

    Oscillation-based models of speech perception postulate a cortical computational principle by which decoding is performed within a window structure derived from a segmentation process. Segmentation of syllable-sized chunks is realized by a theta oscillator. We provide evidence for an analogous role of a delta oscillator in the segmentation of phrase-sized chunks. We recorded magnetoencephalography (MEG) while human participants performed a target identification task. Random-digit strings, with phrase-long chunks of two digits, were presented at chunk rates of 1.8 Hz or 2.6 Hz, inside or outside the delta frequency band (defined here as 0.5–2 Hz). Strong periodicities were elicited by chunk rates inside the delta band in superior and middle temporal areas and in speech-motor integration areas. Periodicities were diminished or absent for chunk rates outside delta, in line with behavioral performance. Our findings show that prosodic chunking of phrase-sized acoustic segments is correlated with acoustically driven delta oscillations, expressing anatomically specific patterns of neuronal periodicities.

    Rhythmically modulating neural entrainment during exposure to regularities influences statistical learning

    The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning, or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants’ neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a “congruent” rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and an implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.