The impact of CSF-filled cavities on scalp EEG and its implications
Previous studies have found electroencephalogram (EEG) amplitude and scalp topography differences between neurotypical and neurological/neurosurgical groups, which have been interpreted at the cognitive level. However, these comparisons are invariably accompanied by anatomical changes. Critical to EEG are the so-called volume currents, which are affected by the spatial distribution of the different tissues in the head. We investigated the effect of cerebrospinal fluid (CSF)-filled cavities on simulated EEG scalp data. We simulated EEG scalp potentials for known sources using different volume conduction models: a reference model (i.e., an unlesioned brain) and models with realistic CSF-filled cavities gradually increasing in size. We used this approach for a single source close to or far from the CSF-lesion cavity, and for a scenario with a distributed configuration of sources (i.e., a "cognitive event-related potential effect"). The magnitude and topography errors between the reference and lesion models were quantified. For the single-source simulation close to the lesion, the CSF-filled lesion modulated signal amplitude with more than 17% magnitude error and topography with more than 9% topography error. Negligible modulation was found for the single source far from the lesion. For the multisource simulations of the cognitive effect, the CSF-filled lesion modulated signal amplitude with more than 6% magnitude error and topography with more than 16% topography error, in a nonmonotonic fashion. In conclusion, the impact of a CSF-filled cavity cannot be neglected for scalp-level EEG data. Especially when group-level comparisons are made, any scalp-level attenuated, aberrant, or absent effects are difficult to interpret without considering the confounding effect of CSF.
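Magnitude and topography errors of the kind reported above are conventionally quantified with the MAG and RDM (relative difference measure) metrics from the EEG forward-modeling literature. The exact normalization used in this study is not stated in the abstract, so the following is only a minimal sketch of the common definitions (function names are ours):

```python
import numpy as np

def mag_error(v_ref, v_test):
    # Magnitude error: relative change in overall signal strength, in percent
    return 100.0 * abs(np.linalg.norm(v_test) / np.linalg.norm(v_ref) - 1.0)

def rdm_error(v_ref, v_test):
    # Relative difference measure: distance between unit-normalized
    # scalp topographies, in percent (insensitive to overall gain)
    a = v_ref / np.linalg.norm(v_ref)
    b = v_test / np.linalg.norm(v_test)
    return 100.0 * np.linalg.norm(a - b)
```

With these definitions a pure gain change produces a nonzero magnitude error but zero topography error, which is why the two quantities are reported separately.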
Cerebral coherence between communicators marks the emergence of meaning
How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other's behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal's use, i.e., "conceptual pacts" that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators' right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals' occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal's use.
Familiarity modulates neural tracking of sung and spoken utterances
Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but showed no effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond the acoustic features of music alone, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
Hippocampal-Prefrontal theta oscillations support memory integration
Integration of separate memories forms the basis of inferential reasoning, an essential cognitive process that enables complex behavior. Considerable evidence suggests that both the hippocampus and the medial prefrontal cortex (mPFC) play a crucial role in memory integration. Although previous studies indicate that theta oscillations facilitate memory processes, the electrophysiological mechanisms underlying memory integration remain elusive. To bridge this gap, we recorded magnetoencephalography data while participants performed an inference task, and employed novel source reconstruction techniques to estimate oscillatory signals from the hippocampus. We found that hippocampal theta power during encoding predicts subsequent memory integration. Moreover, we observed increased theta coherence between the hippocampus and mPFC. Our results suggest that integrated memory representations arise through hippocampal theta oscillations, possibly reflecting dynamic switching between encoding and retrieval states, and facilitating communication with mPFC. These findings have important implications for our understanding of memory-based decision making and knowledge acquisition.
Comparison of undirected frequency-domain connectivity measures for cerebro-peripheral analysis
Analyses of cerebro-peripheral connectivity aim to quantify ongoing coupling between brain activity (measured by MEG/EEG) and peripheral signals such as muscle activity, continuous speech, or physiological rhythms (such as pupil dilation or respiration). Due to the distinct rhythmicity of these signals, undirected connectivity is typically assessed in the frequency domain. This leaves the investigator with two critical choices, namely a) the appropriate measure for spectral estimation (i.e., the transformation into the frequency domain) and b) the actual connectivity measure. As there is no consensus regarding best practice, a wide variety of methods has been applied. Here we systematically compare combinations of six standard spectral estimation methods (comprising fast Fourier and continuous wavelet transformation, bandpass filtering, and short-time Fourier transformation) and six connectivity measures (phase-locking value (PLV), Gaussian-Copula mutual information (GCMI), Rayleigh test, weighted pairwise phase consistency (WPPC), magnitude-squared coherence, and entropy). We provide performance measures of each combination for simulated data (with precise control over true connectivity), a single-subject set of real MEG data, and a full group analysis of real MEG data. Our results show that, overall, WPPC and GCMI tend to outperform other connectivity measures, while entropy was the only measure sensitive to bimodal deviations from a uniform phase distribution. For group analysis, choosing the appropriate spectral estimation method appears to be more critical than the connectivity measure. We discuss practical implications (sampling rate, SNR, computation time, and data length) and aim to provide recommendations tailored to particular research questions.
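As one concrete example of the connectivity measures compared here, the phase-locking value reduces to the resultant length of the phase-difference distribution across time points (or trials). A minimal sketch, assuming phase time series have already been extracted (e.g., via a Hilbert or wavelet transform; the function name is ours, not from the study):

```python
import numpy as np

def plv(phase_x, phase_y):
    # Phase-locking value: magnitude of the mean unit phasor of the
    # phase differences; 1 for a constant lag, near 0 for random lags
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

A constant phase lag between the two signals yields a PLV of 1 regardless of the lag's size, which is exactly why zero-lag-insensitive alternatives are also included in the comparison.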
Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research
The Organization for Human Brain Mapping (OHBM) has been active in advocating for the instantiation of best practices in neuroimaging data acquisition, analysis, reporting, and sharing of both data and analysis code to deal with issues in science related to reproducibility and replicability. Here we summarize recommendations for such practices in magnetoencephalographic (MEG) and electroencephalographic (EEG) research, recently developed by the OHBM neuroimaging community known by the abbreviated name of COBIDAS MEEG. We discuss the rationale for the guidelines and their general content, which encompass many topics under active discussion in the field. We highlight future opportunities and challenges to maximizing the sharing and exploitation of MEG and EEG data, and we also discuss how this "living" set of guidelines will evolve to continually address new developments in neurophysiological assessment methods and multimodal integration of neurophysiological data with other data types.
Supramodal sentence processing in the human brain: fMRI evidence for the influence of syntactic complexity in more than 200 participants
This study investigated two questions. The first is: To what degree is sentence processing beyond single words independent of the input modality (speech vs. reading)? The second is: Which parts of the network recruited by both modalities are sensitive to syntactic complexity? These questions were investigated by having more than 200 participants read or listen to well-formed sentences or series of unconnected words. A largely left-hemisphere frontotemporoparietal network was found to be supramodal in nature, i.e., independent of input modality. In addition, the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching complexity. The left anterior temporal lobe (LaTL) showed the greatest sensitivity to sentences that differed in right-branching complexity. Moreover, activity in LIFG and LpMTG increased from sentence onset to end, in parallel with an increase in left-branching complexity. While LIFG, bilateral anterior temporal lobe, posterior MTG, and left inferior parietal lobe (LIPL) all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to syntactic-complexity-related processing. The consequences of these findings for neurobiological models of language processing are discussed.
Comparison of beamformer implementations for MEG source localization
Beamformers are applied for estimating spatiotemporal characteristics of neuronal sources underlying measured MEG/EEG signals. Several MEG analysis toolboxes include an implementation of a linearly constrained minimum-variance (LCMV) beamformer. However, differences in implementations and in their results complicate the selection and application of beamformers and may hinder their wider adoption in research and clinical use. Additionally, combinations of different MEG sensor types (such as magnetometers and planar gradiometers) and application of preprocessing methods for interference suppression, such as signal space separation (SSS), can affect the results in different ways for different implementations. So far, a systematic evaluation of the different implementations has not been performed. Here, we compared the localization performance of the LCMV beamformer pipelines in four widely used open-source toolboxes (MNE-Python, FieldTrip, DAiSS (SPM12), and Brainstorm) using datasets both with and without SSS interference suppression. We analyzed MEG data that were i) simulated, ii) recorded from a static and moving phantom, and iii) recorded from a healthy volunteer receiving auditory, visual, and somatosensory stimulation. We also investigated the effects of SSS and the combination of the magnetometer and gradiometer signals. We quantified how localization error and point-spread volume vary with the signal-to-noise ratio (SNR) in all four toolboxes. When applied carefully to MEG data with a typical SNR (3-15 dB), all four toolboxes localized the sources reliably; however, they differed in their sensitivity to preprocessing parameters. As expected, localizations were highly unreliable at very low SNR, but we found high localization error also at very high SNRs for the first three toolboxes, while Brainstorm showed greater robustness but with lower spatial resolution. We also found that the SNR improvement offered by SSS led to more accurate localization.
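The LCMV beamformer shared by these toolboxes computes, for each candidate source location, a spatial filter from the forward-model leadfield and the data covariance. A minimal unit-gain sketch for a single fixed-orientation source (this illustrates the textbook formula, not the implementation of any particular toolbox; names are ours):

```python
import numpy as np

def lcmv_weights(leadfield, cov):
    # Unit-gain LCMV filter for one source:
    #   w = (l^T C^-1 l)^-1 l^T C^-1
    # leadfield: (n_channels,) topography of the source
    # cov:       (n_channels, n_channels) data covariance
    cinv = np.linalg.inv(cov)
    l = leadfield.reshape(-1, 1)
    w = cinv @ l @ np.linalg.inv(l.T @ cinv @ l)
    return w.ravel()
```

The unit-gain constraint (the filter applied to the source's own leadfield returns exactly 1) while minimizing output variance is what makes the filter's output interpretable as source amplitude; implementation differences between toolboxes concern details such as covariance regularization and orientation estimation, not this core formula.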
Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures
When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
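Imaginary coherence, one of the mixing-insensitive measures discussed above, simply discards the real (zero-lag) part of the coherency. A minimal sketch for one frequency bin across epochs (assuming complex Fourier coefficients are already available; variable names are ours):

```python
import numpy as np

def imaginary_coherence(x_f, y_f):
    # Imaginary part of the coherency between two channels at one
    # frequency; x_f, y_f: complex Fourier coefficients over epochs
    sxy = np.mean(x_f * np.conj(y_f))          # cross-spectrum
    sxx = np.mean(np.abs(x_f) ** 2)            # auto-spectra
    syy = np.mean(np.abs(y_f) ** 2)
    return np.imag(sxy / np.sqrt(sxx * syy))
```

A purely instantaneous (zero-phase-lag) mixture of the same source yields exactly zero, which is the property these measures rely on; the caveat raised in this abstract is that field spread around a genuinely lagged interaction can still produce nonzero "ghost" values at nearby source pairs.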