
    Divergent Cortical Generators of MEG and EEG during Human Sleep Spindles Suggested by Distributed Source Modeling

    Background: Sleep spindles are ~1-second bursts of 10–15 Hz activity occurring during normal stage 2 sleep. In animals, sleep spindles can be synchronous across multiple cortical and thalamic locations, suggesting a distributed, stable, phase-locked generating system. The high synchrony of spindles across scalp EEG sites suggests that this may also be true in humans. However, prior MEG studies suggest multiple and varying generators. Methodology/Principal Findings: We recorded 306 channels of MEG simultaneously with 60 channels of EEG during naturally occurring spindles of stage 2 sleep in 7 healthy subjects. High-resolution structural MRI was obtained in each subject to define the shells for a boundary element forward solution and to reconstruct the cortex providing the solution space for a noise-normalized minimum norm source estimation procedure. Integrated across the entire duration of all spindles, sources estimated from EEG and MEG are similar, diffuse, and widespread, including all lobes of both hemispheres. However, the locations, phase, and amplitude of sources simultaneously estimated from MEG versus EEG are highly distinct during the same spindles. Specifically, the sources estimated from EEG are highly synchronous across the cortex, whereas those from MEG rapidly shift in phase, hemisphere, and location within the hemisphere. Conclusions/Significance: The heterogeneity of MEG sources implies that multiple generators are active during human sleep spindles.

    Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus), as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if any at all. While it remains unclear whether these responses reflect conscious perception itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

    Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.

    [Elucidating the spatiotemporal dynamics of the neural network that processes facial expressions. Kyoto University press release, 2015-07-24.] The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of the underlying brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and the visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.