335 research outputs found

    Within-Subject Joint Independent Component Analysis of Simultaneous fMRI/ERP in an Auditory Oddball Paradigm

    The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, were observed as the deviant frequency difference increased. jICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post-central gyri, likely associated with the right-hand somato-motor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI.
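The core jICA idea described above — concatenating each subject's ERP time course and fMRI spatial map into one feature vector, then unmixing across subjects so every component carries a joint ERP/fMRI signature — can be sketched as follows. This is a minimal illustration on simulated data; the dimensions, component count, and use of scikit-learn's FastICA are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

n_subjects = 20
n_erp_timepoints = 350   # ERP samples per subject (hypothetical)
n_fmri_voxels = 500      # flattened fMRI contrast map per subject (hypothetical)

# Simulated data: one row per subject for each modality.
erp = rng.standard_normal((n_subjects, n_erp_timepoints))
fmri = rng.standard_normal((n_subjects, n_fmri_voxels))

# jICA core idea: concatenate the two modalities along the feature axis
# so each independent component has a joint ERP/fMRI signature.
joint = np.hstack([erp, fmri])

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
mixing_scores = ica.fit_transform(joint)   # subject loadings (20 x 5)
components = ica.components_               # joint components (5 x 850)

# Split each joint component back into its ERP and fMRI parts, which
# share a common subject loading by construction.
erp_part = components[:, :n_erp_timepoints]
fmri_part = components[:, n_erp_timepoints:]
print(mixing_scores.shape, erp_part.shape, fmri_part.shape)
```

Because both modalities share one loading per subject and component, a component's ERP part and fMRI part are linked by construction, which is what lets jICA pool temporal and spatial evidence.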

    Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes

    Image analysis using more than one modality (i.e., multi-modal analysis) has been increasingly applied in the field of biomedical imaging. One of the challenges in performing multi-modal analysis is that there exist multiple schemes for fusing the information from different modalities; such schemes are application-dependent and lack a unified framework to guide their design. In this work we first propose a conceptual architecture for image fusion schemes in supervised biomedical image analysis: fusing at the feature level, fusing at the classifier level, and fusing at the decision-making level. Further, motivated by the recent success of deep learning in natural image analysis, we implement the three image fusion schemes above based on Convolutional Neural Networks (CNNs) with varied structures, and combine them into a single framework. The proposed image segmentation framework is capable of analyzing multi-modal images using different fusion schemes simultaneously. The framework is applied to detect the presence of soft tissue sarcoma from the combination of Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET) images. The results show that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level generally achieves the best performance in terms of both accuracy and computational cost, but is also less robust when any single modality contains large errors.
    Comment: Zhe Guo and Xiang Li contributed equally to this work.
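The three fusion levels named above can be illustrated with toy stand-ins for the modality-specific feature extractors and classifiers. Everything here (patch sizes, the flatten-as-features extractor, the logistic stand-in classifier) is hypothetical; the paper's actual implementation uses CNNs at each stage.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for co-registered MRI/CT/PET patches (hypothetical 8x8 size).
mri, ct, pet = (rng.standard_normal((8, 8)) for _ in range(3))

def features(img):
    """Stand-in feature extractor: flatten the patch (a CNN in the paper)."""
    return img.ravel()

def classify(feat, w):
    """Stand-in classifier: logistic score from a weight vector."""
    return 1.0 / (1.0 + np.exp(-feat @ w))

# 1) Feature-level fusion: concatenate all features, one joint classifier.
f = np.concatenate([features(m) for m in (mri, ct, pet)])
w_joint = rng.standard_normal(f.size)
p_feature = classify(f, w_joint)

# 2) Classifier-level fusion: per-modality features are combined
#    (here by simple averaging) before a shared classifier head.
per_mod = np.stack([features(m) for m in (mri, ct, pet)])
w_shared = rng.standard_normal(per_mod.shape[1])
p_classifier = classify(per_mod.mean(axis=0), w_shared)

# 3) Decision-level fusion: fully independent classifiers, fused decisions.
ws = [rng.standard_normal(features(m).size) for m in (mri, ct, pet)]
p_decision = np.mean([classify(features(m), w)
                      for m, w in zip((mri, ct, pet), ws)])

print(p_feature, p_classifier, p_decision)
```

The trade-off reported in the abstract is visible in this structure: feature-level fusion lets one classifier exploit cross-modality correlations, while decision-level fusion isolates each modality, so a corrupted input degrades only one of the averaged votes.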

    EEG Based Inference of Spatio-Temporal Brain Dynamics


    Feature analysis of functional MRI data for mapping epileptic networks

    Issued as final report. University of Pennsylvania.

    A Novel Synergistic Model Fusing Electroencephalography and Functional Magnetic Resonance Imaging for Modeling Brain Activities

    Study of the human brain is an important and very active area of research. Unraveling the way the human brain works would allow us to better understand, predict and prevent brain-related diseases that affect a significant part of the population. Studying the brain's response to certain input stimuli can help us determine the involved brain areas and understand the mechanisms that characterize behavioral and psychological traits. In this research work, two methods used for the monitoring of brain activity, Electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI), have been studied with a view to their fusion, in an attempt to bring together the advantages of each one. In particular, this work has focused on the analysis of a specific type of EEG and fMRI recordings that are related to certain events and capture the brain's response under specific experimental conditions. Using spatial features of the EEG, we can describe the temporal evolution of the electrical field recorded on the scalp. This work introduces the use of Hidden Markov Models (HMMs) for modeling the EEG dynamics. This novel approach is applied to the discrimination of normal and progressive Mild Cognitive Impairment patients, with significant results. EEG alone cannot provide the spatial localization needed to uncover and understand the neural mechanisms and processes of the human brain. fMRI provides the means of localizing functional activity, without, however, providing the timing details of these activations. Although at first glance the strengths of these two modalities, EEG and fMRI, clearly complement each other, fusing the information provided by each one is a challenging task. A novel methodology for fusing EEG spatiotemporal features and fMRI features, based on Canonical Partial Least Squares (CPLS), is presented in this work.
An HMM modeling approach is used to derive a novel feature-based representation of the EEG signal that characterizes the topographic information of the EEG. We use the HMM to project the EEG data into the Fisher score space, and use the Fisher scores to describe the dynamics of the EEG topography sequence. The correspondence between this new feature representation and the fMRI is studied using CPLS. This methodology is applied to extract features for the classification of a visual task. The results indicate that the proposed methodology is able to capture task-related activations that can be used for the classification of mental tasks. Extensions of the proposed models are examined, along with future research directions and applications.
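The PLS-style fusion step described above — finding paired EEG/fMRI weight vectors whose projected scores covary maximally — can be sketched with an SVD of the cross-covariance matrix. The feature dimensions and the simulated shared latent signal are assumptions for illustration; the EEG matrix stands in for the HMM-derived Fisher-score features.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials = 40
eeg_dim = 30     # Fisher-score features from the EEG HMM (hypothetical)
fmri_dim = 60    # fMRI features per trial (hypothetical)

# Simulated, mean-centred feature matrices sharing one latent signal.
latent = rng.standard_normal((n_trials, 1))
X = latent @ rng.standard_normal((1, eeg_dim)) \
    + 0.5 * rng.standard_normal((n_trials, eeg_dim))
Y = latent @ rng.standard_normal((1, fmri_dim)) \
    + 0.5 * rng.standard_normal((n_trials, fmri_dim))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# PLS core: the SVD of the cross-covariance gives paired weight vectors
# whose projections (scores) have maximal covariance.
C = X.T @ Y / (n_trials - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

t = X @ U[:, 0]   # EEG scores for the first joint component
u = Y @ Vt[0]     # fMRI scores for the first joint component

corr = np.corrcoef(t, u)[0, 1]
print(round(corr, 3))
```

The paired scores `t` and `u` recover the shared latent signal, so they correlate strongly across trials; such scores are what a downstream classifier of mental tasks would consume.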

    Co-localization of theta-band activity and hemodynamic responses during face perception: simultaneous electroencephalography and functional near-infrared spectroscopy recordings

    Face-specific neural processes in the human brain have been localized to multiple anatomical structures and associated with diverse and dynamic social functions. The question of how various face-related systems and functions may be bound together remains an active area of investigation. We hypothesize that face processing may be associated with specific frequency-band oscillations that serve to integrate distributed face processing systems. Using a multimodal imaging approach, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), simultaneous signals were acquired during face and object picture viewing. As expected for face processing, hemodynamic activity in the right occipital face area (OFA) increased during face viewing compared to object viewing, and in a subset of participants, the expected N170 EEG response was observed for faces. Based on recently reported associations between the theta band and visual processing, we hypothesized that increased hemodynamic activity in a face processing area would also be associated with greater theta-band activity originating in the same area. Consistent with our hypothesis, theta-band oscillations were also localized to the right OFA for faces, whereas alpha- and beta-band oscillations were not. Together, these findings suggest that theta-band oscillations originating in the OFA may be part of the distributed face-specific processing mechanism.
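The band-specific comparison above rests on estimating spectral power in the theta band (4–8 Hz) and contrasting it with alpha and beta. A minimal FFT-based band-power estimate on a simulated channel can be sketched as follows; the sampling rate, band edges, and test signal are illustrative assumptions, not the study's EEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 250.0                      # sampling rate in Hz (hypothetical)
t = np.arange(0, 2.0, 1.0 / fs)

# Simulated channel: a 6 Hz (theta-band) oscillation buried in noise.
signal = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean spectral power of x between lo and hi Hz (rFFT-based)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

theta = band_power(signal, fs, 4.0, 8.0)    # theta band
alpha = band_power(signal, fs, 8.0, 13.0)   # alpha band
beta = band_power(signal, fs, 13.0, 30.0)   # beta band
print(theta > alpha and theta > beta)
```

Because the simulated oscillation sits inside the theta band, its estimated theta power dominates the alpha and beta estimates, mirroring the band-selective contrast the study draws at the right OFA.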