    Within-Subject Joint Independent Component Analysis of Simultaneous fMRI/ERP in an Auditory Oddball Paradigm

    The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) data can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects to ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, were observed with increasing deviant frequency difference. Applied to the same group ERP and fMRI data, jICA revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post-central gyri, likely associated with the right-hand somatomotor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting sources of activity common to ERP and fMRI.
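
    For context, the core of the jICA idea, concatenating the two modalities' features and estimating one set of independent sources with a subject-wise mixing matrix shared across modalities, can be sketched as below. This is a minimal illustration only: the data shapes, component count, scaling, and the use of scikit-learn's FastICA are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Placeholder data: per-subject ERP waveform features and fMRI contrast-map features.
    rng = np.random.default_rng(0)
    n_subjects = 20
    erp = rng.standard_normal((n_subjects, 700))    # subjects x ERP time points
    fmri = rng.standard_normal((n_subjects, 5000))  # subjects x fMRI voxels

    # Scale each modality so neither dominates the joint decomposition.
    erp_z = (erp - erp.mean()) / erp.std()
    fmri_z = (fmri - fmri.mean()) / fmri.std()

    # jICA: concatenate modalities along the feature axis; each component then has a
    # single subject-wise loading shared by its ERP and fMRI parts.
    joint = np.hstack([erp_z, fmri_z])              # subjects x (ERP + fMRI features)

    ica = FastICA(n_components=5, random_state=0, max_iter=1000)
    sources = ica.fit_transform(joint.T).T          # components x (ERP + fMRI features)
    subject_loadings = ica.mixing_                  # subjects x components, shared across modalities

    # Split each joint source back into its ERP time course and fMRI spatial map.
    erp_sources = sources[:, :erp.shape[1]]
    fmri_sources = sources[:, erp.shape[1]:]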

    A Dialogue on England: The England Case, Its Effect on the Abstention Doctrine, and Some Suggested Solutions


    Neural Dynamics of Phonological Processing in the Dorsal Auditory Stream

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). However, the functional organization of the pathway, in terms of the time course of interactions between auditory, somatosensory, and motor regions, and its pattern of hemispheric lateralization, is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and the associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in the posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80–100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left-hemispheric lateralization was observed 250 ms earlier in IPL and vCS than in pSTG, suggesting that functional specialization of somatomotor (rather than auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve the neural circuits underlying complex behaviors.
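
    The hemispheric comparison reported above is commonly quantified with a lateralization index of the form LI = (L − R) / (L + R). The brief helper below illustrates that standard formula only; the function name and inputs are illustrative and are not taken from the paper's analysis.

    import numpy as np

    def lateralization_index(left_activity: np.ndarray, right_activity: np.ndarray) -> float:
        """Standard LI = (L - R) / (L + R) from summed activity magnitudes in
        homologous left/right regions; +1 indicates full left dominance, -1 full right."""
        left = np.abs(left_activity).sum()
        right = np.abs(right_activity).sum()
        total = left + right
        return (left - right) / total if total else 0.0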

    Neural pathways for visual speech perception

    This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? A review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread and diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

    Method for Spatial Overlap Estimation of Electroencephalography and Functional Magnetic Resonance Imaging Responses

    Background: Simultaneous functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) measurements may represent activity from partially divergent neural sources, but this factor is seldom modeled in fMRI-EEG data integration. New method: This paper proposes an approach to estimate the spatial overlap between sources of activity measured simultaneously with fMRI and EEG. Following the extraction of task-related activity, the key steps are: 1) distributed source reconstruction of the task-related ERP activity (ERP source model); 2) transformation of fMRI activity to the ERP spatial scale by forward modelling of the scalp potential field distribution and backward source reconstruction (fMRI source simulation); and 3) optimization of fMRI and ERP thresholds to maximize spatial overlap without a priori coupling constraints (overlap calculation). Results: fMRI and ERP responses were recorded simultaneously in 15 subjects performing an auditory oddball task. A high degree of spatial overlap between the sources of fMRI and ERP responses was found (in 9 or more of 15 subjects), specifically within temporoparietal areas associated with the task. Areas of non-overlap between fMRI and ERP sources were relatively small and inconsistent across subjects. Comparison with existing method: The ERP and fMRI sources estimated with jICA alone overlapped in just 4 of 15 subjects, and only in the parietal cortex. Conclusion: The study demonstrates that the new fMRI-ERP spatial overlap estimation method provides greater spatiotemporal detail of cortical dynamics than jICA alone. As such, we propose that it is a superior method for the integration of fMRI and EEG to study brain function.
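
    The final step above (overlap calculation) can be pictured as a grid search over the two source-map thresholds that maximizes a spatial overlap score. The sketch below uses a Dice coefficient and quantile-based thresholds; both are illustrative assumptions, not necessarily the measure or search strategy used in the paper.

    import numpy as np

    def dice(a, b):
        """Dice coefficient between two binary masks on the same source grid."""
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom else 0.0

    def best_overlap(erp_source, fmri_source, quantiles=np.linspace(0.5, 0.99, 50)):
        """Grid-search ERP and fMRI thresholds (as quantiles) to maximize spatial overlap."""
        best = (0.0, None, None)
        for q_erp in quantiles:
            erp_mask = erp_source > np.quantile(erp_source, q_erp)
            for q_fmri in quantiles:
                fmri_mask = fmri_source > np.quantile(fmri_source, q_fmri)
                d = dice(erp_mask, fmri_mask)
                if d > best[0]:
                    best = (d, q_erp, q_fmri)
        return best  # (max Dice, ERP quantile, fMRI quantile)

    # Example with placeholder source maps defined on the same cortical grid.
    rng = np.random.default_rng(0)
    erp_map, fmri_map = rng.random(8000), rng.random(8000)
    print(best_overlap(erp_map, fmri_map))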

    Assessing the Effects of Orbital Relaxation and the Coherent-State Transformation in Quantum Electrodynamics Density Functional and Coupled-Cluster Theories

    Cavity quantum electrodynamics (QED) generalizations of time-dependent (TD) density functional theory (DFT) and equation-of-motion (EOM) coupled-cluster (CC) theory are used to model small molecules strongly coupled to optical cavity modes. We consider two types of calculations. In the first approach (termed "relaxed"), we use a coherent-state-transformed Hamiltonian within the ground- and excited-state portions of the calculations, and cavity-induced orbital relaxation effects are included at the mean-field level. This procedure guarantees that the energy is origin invariant in post-self-consistent-field calculations. In the second approach (termed "unrelaxed"), we ignore the coherent-state transformation and the associated orbital relaxation effects. In this case, ground-state unrelaxed QED-CC calculations pick up a modest origin dependence but otherwise reproduce relaxed QED-CC results within the coherent-state basis. On the other hand, a severe origin dependence manifests in ground-state unrelaxed QED mean-field energies. For excitation energies computed at experimentally realizable coupling strengths, relaxed and unrelaxed QED-EOM-CC results are similar, while significant differences emerge for unrelaxed and relaxed QED-TDDFT. First, QED-EOM-CC and relaxed QED-TDDFT both predict that electronic states that are not resonant with the cavity mode are nonetheless perturbed by the cavity. Unrelaxed QED-TDDFT, on the other hand, fails to capture this effect. Second, in the limit of large coupling strengths, relaxed QED-TDDFT tends to overestimate Rabi splittings, while unrelaxed QED-TDDFT underestimates them, given splittings from relaxed QED-EOM-CC as a reference, and relaxed QED-TDDFT generally does the better job of reproducing the QED-EOM-CC results.
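
    For context, the coherent-state transformation referenced above is typically applied to the Pauli-Fierz Hamiltonian in the dipole approximation for a single cavity mode; a commonly quoted form from the cavity-QED electronic-structure literature (a sketch for orientation, not quoted from this paper) is

    % Coherent-state-transformed Pauli--Fierz Hamiltonian (dipole approximation, one cavity mode);
    % \boldsymbol{\lambda} is the cavity coupling vector and \langle\boldsymbol{\mu}\rangle the
    % mean-field expectation value of the molecular dipole operator.
    \hat{H}_{\mathrm{CS}} = \hat{H}_{\mathrm{e}}
      + \omega_{\mathrm{cav}}\,\hat{b}^{\dagger}\hat{b}
      - \sqrt{\frac{\omega_{\mathrm{cav}}}{2}}\,
        \boldsymbol{\lambda}\cdot\left(\hat{\boldsymbol{\mu}} - \langle\boldsymbol{\mu}\rangle\right)
        \left(\hat{b}^{\dagger} + \hat{b}\right)
      + \frac{1}{2}\left[\boldsymbol{\lambda}\cdot\left(\hat{\boldsymbol{\mu}} - \langle\boldsymbol{\mu}\rangle\right)\right]^{2}

    In this picture, the "unrelaxed" scheme described in the abstract works without the coherent-state shift by \langle\boldsymbol{\mu}\rangle and without the accompanying cavity-induced orbital relaxation.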