
    Item parameters dissociate between expectation formats: a regression analysis of time-frequency decomposed EEG data

    During language comprehension, semantic contextual information is used to generate expectations about upcoming items. This has been commonly studied through the N400 event-related potential (ERP) as a measure of facilitated lexical retrieval. However, the associative relationships in multi-word expressions (MWEs) may enable the generation of a categorical expectation, leading to lexical retrieval before target word onset. Processing of the target word would thus reflect a target-identification mechanism, possibly indexed by a P3 ERP component. Given their temporal overlap (200–500 ms post-stimulus onset), however, differentiating between N400 and P3 ERP responses (averaged over multiple linguistically variable trials) is problematic. In the present study, we analyzed EEG data from a previous experiment that compared ERP responses to highly expected words placed either in an MWE or in a regular, non-fixed compositional context, and to low-predictability controls. We focused on oscillatory dynamics and regression analyses in order to dissociate between the two contexts by modeling the electrophysiological response as a function of item-level parameters. A significant interaction between word position and condition was found in the regression model for power in the theta range (~7–9 Hz), providing evidence for qualitative differences between conditions. Power in this band was lower for MWE than for compositional contexts when the target word appeared later in the sentence, confirming that in the former, lexical retrieval had taken place before word onset. Gamma power (~50–70 Hz), on the other hand, was modulated by item predictability in all conditions, which we interpret as an index of a similar "matching" sub-step in both types of context, binding an expected representation to the external input.
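
    The item-level regression logic described above can be sketched in a few lines: extract single-trial band power (here theta, ~7–9 Hz) and regress it on condition, word position, and their interaction. The sketch below uses simulated data; the sampling rate, filter choice, and predictor names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: model single-trial theta-band power as a function of
# condition (MWE vs. compositional) and word position, and test their
# interaction. Simulated data; names and parameters are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # sampling rate in Hz (assumed)

def band_power(trial, low=7.0, high=9.0):
    """Mean theta-band (~7-9 Hz) power of one single-trial EEG segment."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, trial))
    return np.mean(np.abs(analytic) ** 2)

rng = np.random.default_rng(0)
n_trials = 200
df = pd.DataFrame({
    "condition": rng.choice(["MWE", "compositional"], n_trials),
    "word_position": rng.integers(2, 10, n_trials),
})
# Fake single-trial EEG (1 s of noise) standing in for real epochs.
df["theta_power"] = [band_power(rng.standard_normal(fs)) for _ in range(n_trials)]

# The key term is the condition:word_position interaction, which indexes
# qualitative differences between expectation formats.
model = smf.ols("theta_power ~ condition * word_position", data=df).fit()
print(model.summary().tables[1])
```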

    Analysis of time-varying synchronization of EEG during sentences identification

    The study of the synchronization of EEG signals can help us understand underlying cognitive processes and detect learning deficiencies, since oscillatory states in the EEG reveal rhythmic synchronous activity in large networks of neurons. Because physiological states and the surrounding environment change as cognitive and information processing takes place in different brain regions at different times, real EEG recordings are highly non-stationary processes. To investigate how these distributed brain regions are linked together and how information is exchanged over time, this paper proposes a time-frequency coherence analysis method that quantifies synchronization with both temporal and spatial resolution. A wavelet coherence spectrum is defined so that the degree of synchronization and the information flow between different brain regions can be described. Several real EEG datasets are analysed during sentence-identification tasks in both English and Chinese. The time-varying synchronization between the brain regions involved in sentence processing showed that a common neural network is activated by both English and Chinese sentences. The results of the presented method are helpful for studying English and Chinese language learning in Chinese students.
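
    A wavelet coherence spectrum of the kind described can be approximated with complex Morlet wavelets and simple smoothing of the cross- and auto-spectra. The sketch below is a toy version under stated assumptions (the w0 parameter, moving-average smoothing, and the test signals are illustrative), not the paper's exact estimator.

```python
# Minimal sketch of wavelet coherence between two EEG channels, using
# Morlet wavelets and moving-average smoothing; real analyses use more
# careful smoothing and significance testing.
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform of x with complex Morlet wavelets."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)          # wavelet scale for frequency f
        t = np.arange(-4 * s, 4 * s, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
    return out

def smooth(a, k=25):
    """Moving-average smoothing along the time axis."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), -1, a)

def wavelet_coherence(x, y, fs, freqs):
    Wx, Wy = morlet_cwt(x, fs, freqs), morlet_cwt(y, fs, freqs)
    Sxy = smooth(Wx * np.conj(Wy))
    Sxx, Syy = smooth(np.abs(Wx) ** 2), smooth(np.abs(Wy) ** 2)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)   # time-frequency map, roughly in [0, 1]

fs, t = 250, np.arange(0, 4, 1 / 250)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(len(t))
y = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(len(t))
coh = wavelet_coherence(x, y, fs, np.linspace(2, 30, 20))
print(coh.shape, coh.max())
```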

    Detecting event-related recurrences by symbolic analysis: Applications to human language processing

    Quasistationarity is ubiquitous in complex dynamical systems. In brain dynamics there is ample evidence that event-related potentials reflect such quasistationary states. In order to detect them from time series, several segmentation techniques have been proposed. In this study we elaborate a recent approach for detecting quasistationary states as recurrence domains by means of recurrence analysis and subsequent symbolisation methods. As a result, recurrence domains are obtained as partition cells that can be further aligned and unified across different realisations. We address two pertinent problems of contemporary recurrence analysis and present possible solutions for them.
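
    The core of the recurrence-domain idea can be illustrated with a toy recurrence matrix and a greedy symbolisation. In the sketch below, the epsilon threshold, the greedy assignment rule, and the test signal are assumptions chosen for illustration; the paper's alignment and unification procedure is more elaborate.

```python
# Minimal sketch of recurrence-based symbolisation: build a recurrence
# matrix from a time series, then label time points so that mutually
# recurrent samples share a symbol (a toy partition into domains).
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_matrix(x, eps):
    """R[i, j] = 1 if states i and j lie within eps of each other."""
    d = squareform(pdist(x))
    return (d < eps).astype(int)

def symbolise(R):
    """Greedy symbolisation: each still-unlabelled time point opens a new
    symbol and shares it with all unlabelled points it recurs with."""
    n = R.shape[0]
    symbols = -np.ones(n, dtype=int)
    next_sym = 0
    for i in range(n):
        if symbols[i] < 0:
            mask = R[i].astype(bool) & (symbols < 0)
            symbols[mask] = next_sym
            next_sym += 1
    return symbols

# Toy signal with two quasistationary regimes.
t = np.linspace(0, 4 * np.pi, 400)
x = np.column_stack([np.sin(t), np.cos(t)])
x[200:] *= 0.2                      # second, lower-amplitude regime
R = recurrence_matrix(x, eps=0.3)
print(np.unique(symbolise(R)))      # distinct recurrence-domain labels
```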

    The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated as a mere control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification, and for matching emotional prosodies, while participants performed emotional-feature or sound-intensity congruity judgments in Experiment 2. Sound intensity modification had a significant effect on the rated anger level of angry prosodies, but not of neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta-band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latencies and decreased amplitudes in the N2/P3 complex, and weaker theta-band synchronization. These findings suggest that although sound intensity cannot categorically change the emotionality conveyed by emotional prosodies, it contributes quantitatively to emotional significance, implying that sound intensity should not simply be treated as a control parameter and that its unique role needs to be specified in vocal emotion studies.
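
    The sound-intensity manipulation at the heart of such designs amounts to scaling a waveform by a fixed number of decibels while leaving other acoustic features untouched. The sketch below illustrates that idea; the -12 dB step and the synthetic waveform are assumptions, not the values used in the study.

```python
# Minimal sketch of a dB-based intensity manipulation of a prosody waveform.
import numpy as np

def change_intensity(waveform, delta_db):
    """Scale a waveform by delta_db decibels (negative = attenuation)."""
    return waveform * 10 ** (delta_db / 20.0)

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
prosody = 0.3 * np.sin(2 * np.pi * 220 * t)       # stand-in for a recording
quieter = change_intensity(prosody, -12.0)        # illustrative -12 dB step

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"RMS ratio: {rms(quieter) / rms(prosody):.3f}")  # ~0.251, i.e. -12 dB
```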

    Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model

    Crocker MW, Knoeferle P, Mayberry M. Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model. Brain and Language. 2010;112(3):189–201.

    Selective attention and speech processing in the cortex

    In noisy and complex environments, human listeners must segregate the mixture of sound sources arriving at their ears and selectively attend to a single source, thereby solving a computationally difficult problem known as the cocktail party problem. However, the neural mechanisms underlying these computations are still largely a mystery. Oscillatory synchronization of neuronal activity between cortical areas is thought to play a crucial role in facilitating information transmission between spatially separated populations of neurons, enabling the formation of functional networks. In this thesis, we analyze and model the functional neuronal networks underlying attention to speech stimuli and find that the Frontal Eye Fields play a central 'hub' role in the auditory spatial attention network in a cocktail party experiment. We use magnetoencephalography (MEG) to measure neural signals with high temporal precision while sampling from the whole cortex. However, several methodological issues arise when undertaking functional connectivity analysis with MEG data. In particular, volume conduction of electrical and magnetic fields in the brain complicates the interpretation of results. We compare several approaches through simulations and analyze the trade-offs among various measures of neural phase-locking in the presence of volume conduction. We use these insights to study functional networks in a cocktail party experiment. We then construct a linear dynamical system model of neural responses to ongoing speech. Using this model, we are able to correctly predict which of two speakers is being attended by a listener. We then apply this model to data from a task in which people attended to stories accompanied by synchronous or scrambled videos of the speakers' faces, to explore how the presence of visual information modifies the underlying neuronal mechanisms of speech perception. This model allows us to probe neural processes as subjects listen to long stimuli, without the need for a trial-based experimental design. We model the neural activity with latent states, and model the neural noise spectrum and functional connectivity with multivariate autoregressive dynamics, along with impulse responses for external stimulus processing. We also develop a new regularized Expectation-Maximization (EM) algorithm to fit this model to electroencephalography (EEG) data.
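
    The volume-conduction problem mentioned above is commonly illustrated by contrasting phase-locking measures. The sketch below compares the phase-locking value (PLV), which is inflated by zero-lag mixing, with the imaginary part of coherency, which discounts it; these two measures are chosen as familiar representatives (the thesis compares several), and all signals are simulated.

```python
# Minimal sketch: PLV vs. imaginary coherency under volume conduction.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals (1 = perfect locking)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

def imag_coherency(x, y):
    """Imaginary part of coherency; near zero for purely zero-lag coupling."""
    X, Y = hilbert(x), hilbert(y)
    Sxy = np.mean(X * np.conj(Y))
    return np.imag(Sxy / np.sqrt(np.mean(np.abs(X)**2) * np.mean(np.abs(Y)**2)))

fs, t = 250, np.arange(0, 10, 1 / 250)
source = np.sin(2 * np.pi * 10 * t)
# Two sensors seeing the same source instantaneously: volume conduction.
a = source + 0.5 * np.random.randn(len(t))
b = source + 0.5 * np.random.randn(len(t))
print(f"zero-lag: PLV={plv(a, b):.2f}  imag-coh={imag_coherency(a, b):.2f}")
# A genuinely lagged interaction survives the imaginary-coherency test.
c = np.roll(source, 6) + 0.5 * np.random.randn(len(t))
print(f"lagged:   PLV={plv(a, c):.2f}  imag-coh={imag_coherency(a, c):.2f}")
```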

    Advances in the neurocognition of music and language


    Listening to limericks: a pupillometry investigation of perceivers’ expectancy

    What features of a poem make it captivating, and which cognitive mechanisms are sensitive to these features? We addressed these questions experimentally by measuring the pupillary responses of 40 participants who listened to a series of limericks. The limericks ended with a semantic, syntactic, rhyme, or metric violation. Compared to a control condition without violations, only the rhyme-violation condition induced a reliable pupillary response. An anomaly-rating study on the same stimuli showed that all violations were reliably detectable relative to the control condition, but the anomaly induced by rhyme violations was perceived as the most severe. Together, our data suggest that rhyme violations in limericks may induce an emotional response beyond mere anomaly detection.
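
    The kind of analysis such a design implies can be sketched as a baseline-corrected, violation-locked comparison of pupil dilation across conditions. In the sketch below, the sampling rate, time windows, condition labels, and simulated data are all illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch of a violation-evoked pupillometry analysis: epoch pupil
# traces around the final (violation) word, baseline-correct each trial,
# then compare mean dilation across conditions. Simulated data only.
import numpy as np

fs = 60                                   # eye-tracker sampling rate (Hz)
baseline = slice(0, fs)                   # 1 s pre-onset baseline
window = slice(fs, 3 * fs)                # 0-2 s post-onset response

def evoked_dilation(epochs):
    """Baseline-corrected mean pupil dilation per trial."""
    corrected = epochs - epochs[:, baseline].mean(axis=1, keepdims=True)
    return corrected[:, window].mean(axis=1)

rng = np.random.default_rng(1)
conditions = ["control", "semantic", "syntactic", "rhyme", "metric"]
for cond in conditions:
    # Fake epochs (40 trials x 3 s); a rhyme effect is built in for show.
    epochs = rng.normal(0, 0.05, (40, 3 * fs))
    if cond == "rhyme":
        epochs[:, window] += 0.1          # simulated extra dilation
    print(cond, f"{evoked_dilation(epochs).mean():.3f}")
```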