185 research outputs found
Revealing the body in the brain: an ERP method to examine sensorimotor activity during visual perception of body-related information
Examining how others' body-related information is processed in the perceiver's brain, across neurotypical and clinical populations, is a key topic in cognitive neuroscience. We argue that beyond classical neuroimaging techniques and frequency analyses, methods that can be easily adapted to capture the fast processing of body-related information in the brain are needed. Here we introduce a novel method that allows this by measuring event-related potentials recorded with electroencephalography (ERPs-EEG). This method retains the known advantages of EEG (low cost, high temporal resolution, established paradigms) while mitigating its main limitation: spatiotemporally smoothed resolution due to mixed neural sources. This mixing occurs when participants are presented with, and process, images of bodies/actions that recruit posterior visual cortices. Such stimulus-evoked activity may spread and contaminate the recording of simultaneous activity arising from sensorimotor brain areas, which also process body-related information, making it difficult to dissociate the contributing role of different brain regions. To overcome this, we propose eliciting a combination of somatosensory, motor, and visual-evoked potentials during the processing of body-related (vs. non-body-related) information. Brain activity from the sensorimotor and visual systems can then be dissociated by subtracting activity in trials containing only visual-evoked potentials from activity in trials containing either a mixture of visual and somatosensory or visual and motor-cortical potentials. This isolates visually driven neural activity in areas other than visual cortex. To introduce this method, we review recent work using it, consider the processing of body-related stimuli in the brain, and outline key methodological aspects to be considered.
This work provides a clear guideline for researchers interested in ERP studies or transitioning to them from behavioural work, offering the possibility of adapting well-established paradigms to the EEG realm to study others' body-related processing in the perceiver's own cortical body representation (e.g., examining classical EEG components within social and embodiment frameworks).
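The subtraction logic described in the abstract (visual-only trials subtracted from mixed visual-plus-somatosensory trials) can be sketched in a few lines of numpy. The array shapes, condition names, and random data below are illustrative assumptions for the sketch, not the authors' actual data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epoched data: (trials x channels x samples).
n_trials, n_channels, n_samples = 40, 64, 500
visual_only = rng.normal(size=(n_trials, n_channels, n_samples))        # visual-evoked only
visual_plus_somato = rng.normal(size=(n_trials, n_channels, n_samples))  # visual + somatosensory

# Average across trials to obtain the ERP for each condition.
erp_visual = visual_only.mean(axis=0)
erp_mixed = visual_plus_somato.mean(axis=0)

# Subtracting the visual-only ERP from the mixed ERP leaves an estimate
# of the non-visual (somatosensory) contribution.
somatosensory_estimate = erp_mixed - erp_visual

print(somatosensory_estimate.shape)  # (64, 500)
```

In practice one would use epoched, baseline-corrected data (e.g., from an EEG analysis toolbox) rather than raw arrays, but the dissociation step itself is this channel-by-channel, sample-by-sample difference of condition averages.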
Divergent Cortical Generators of MEG and EEG during Human Sleep Spindles Suggested by Distributed Source Modeling
Background: Sleep spindles are ~1-second bursts of 10–15 Hz activity occurring during normal stage 2 sleep. In animals, sleep spindles can be synchronous across multiple cortical and thalamic locations, suggesting a distributed, stable, phase-locked generating system. The high synchrony of spindles across scalp EEG sites suggests that this may also be true in humans. However, prior MEG studies suggest multiple and varying generators. Methodology/Principal Findings: We recorded 306 channels of MEG simultaneously with 60 channels of EEG during naturally occurring spindles of stage 2 sleep in 7 healthy subjects. High-resolution structural MRI was obtained in each subject to define the shells for a boundary element forward solution and to reconstruct the cortex, providing the solution space for a noise-normalized minimum norm source estimation procedure. Integrated across the entire duration of all spindles, sources estimated from EEG and MEG are similar, diffuse, and widespread, including all lobes of both hemispheres. However, the locations, phase, and amplitude of sources simultaneously estimated from MEG versus EEG are highly distinct during the same spindles. Specifically, the sources estimated from EEG are highly synchronous across the cortex, whereas those from MEG rapidly shift in phase, hemisphere, and location within the hemisphere. Conclusions/Significance: The heterogeneity of MEG sources implies that multiple generators are active during human sleep spindles.
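The noise-normalized minimum norm procedure mentioned above can be sketched as a toy numpy computation. The dimensions, white-noise covariance, regularization value, and dSPM-style normalization below are illustrative assumptions, not the authors' actual inverse pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: 60 sensors, 200 cortical source locations (illustrative).
n_sensors, n_sources = 60, 200
G = rng.normal(size=(n_sensors, n_sources))  # forward (gain) matrix
C = np.eye(n_sensors)                        # noise covariance (assumed white)
lam = 1.0                                    # regularization parameter (assumed)

# Minimum norm inverse operator: W = G^T (G G^T + lam * C)^(-1)
W = G.T @ np.linalg.inv(G @ G.T + lam * C)

# Noise normalization (dSPM-style): divide each source's weight vector by
# its projected noise standard deviation, diag(W C W^T)^(1/2).
noise_var = np.einsum('ij,jk,ik->i', W, C, W)
W_dspm = W / np.sqrt(noise_var)[:, None]

# Apply to one measurement vector to obtain normalized source estimates.
y = rng.normal(size=n_sensors)
s_hat = W_dspm @ y
print(s_hat.shape)  # (200,)
```

The normalization makes each source estimate unit-variance under the noise model, so values can be compared across cortical locations, which is what allows diffuse EEG-derived and MEG-derived source maps to be contrasted on equal footing.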
Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.
In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus), as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if any at all. While it remains unclear whether these responses reflect conscious perception itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.
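Extracting a high-gamma activity envelope of the kind reported above is commonly done by band-pass filtering and taking the analytic amplitude. The sampling rate, band edges (70–150 Hz), and synthetic single-channel trace below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic trace: a 90 Hz burst between 50 and 250 ms on top of noise,
# mimicking a transient high-gamma response.
x = np.random.default_rng(2).normal(scale=0.1, size=t.size)
burst = (t >= 0.05) & (t < 0.25)
x[burst] += np.sin(2 * np.pi * 90 * t[burst])

# Band-pass in a typical high-gamma range, then take the analytic
# amplitude envelope via the Hilbert transform.
b, a = butter(4, [70, 150], btype='bandpass', fs=fs)
hga = np.abs(hilbert(filtfilt(b, a, x)))

# The envelope is larger during the burst than outside it.
print(hga[burst].mean() > hga[~burst].mean())  # True
```

Zero-phase filtering (`filtfilt`) avoids shifting the envelope in time, which matters when latencies such as the 50–250 ms window are the quantity of interest.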
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.
Elucidating the spatiotemporal dynamics of the neural network that processes facial expressions. Kyoto University press release, 2015-07-24. Dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of the underlying brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and the visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics implement the processing of dynamic facial expressions within a few hundred milliseconds.