Frequency Recognition in SSVEP-based BCI using Multiset Canonical Correlation Analysis
Canonical correlation analysis (CCA) has been one of the most popular methods
for frequency recognition in steady-state visual evoked potential (SSVEP)-based
brain-computer interfaces (BCIs). Despite its efficiency, a potential problem
is that the pre-constructed sine-cosine waves used as reference signals in the
CCA method often do not yield optimal recognition accuracy, because they lack
features of the real EEG data. To address this
problem, this study proposes a novel method based on multiset canonical
correlation analysis (MsetCCA) to optimize the reference signals used in the
CCA method for SSVEP frequency recognition. The MsetCCA method learns multiple
linear transforms that implement joint spatial filtering to maximize the
overall correlation among canonical variates, and hence extracts SSVEP common
features from multiple sets of EEG data recorded at the same stimulus
frequency. The optimized reference signals are formed by combining these
common features and are derived entirely from the training data. An
experimental study with EEG data from ten healthy subjects demonstrates that
the MsetCCA method improves SSVEP frequency recognition accuracy in comparison
with the CCA method and two other competing methods (multiway CCA (MwayCCA)
and phase-constrained CCA (PCCA)), especially for a small number of channels
and a short time window. This superiority indicates that the proposed MsetCCA
method is a promising new candidate for frequency recognition in SSVEP-based
BCIs.
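As a concrete illustration of the baseline the abstract compares against, the following is a minimal sketch of standard CCA-based SSVEP frequency recognition with pre-constructed sine-cosine references (not the proposed MsetCCA). The sampling rate, number of harmonics, and the toy signal are illustrative assumptions:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between data sets X (T x p) and Y (T x q)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # Canonical correlations are the singular values of Qx^T Qy.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=2):
    """Pre-constructed reference: sines/cosines at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t) for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])

def recognize_frequency(eeg, candidate_freqs, fs):
    """Pick the candidate whose reference correlates most with the EEG segment."""
    rhos = [max_canonical_corr(eeg, sine_cosine_reference(f, fs, len(eeg)))
            for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(rhos))]

# Toy demo: 3-channel "EEG" driven by a 10 Hz SSVEP plus noise.
rng = np.random.default_rng(0)
fs, n = 250, 500
t = np.arange(n) / fs
source = np.sin(2 * np.pi * 10 * t)
eeg = np.outer(source, [1.0, 0.8, 0.5]) + 0.5 * rng.standard_normal((n, 3))
print(recognize_frequency(eeg, [8.0, 10.0, 12.0], fs))  # expected: 10.0
```

MsetCCA replaces the fixed `sine_cosine_reference` above with references learned from multiple training trials at the same stimulus frequency.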
A Novel Synergistic Model Fusing Electroencephalography and Functional Magnetic Resonance Imaging for Modeling Brain Activities
Study of the human brain is an important and very active area of research. Unraveling the way the human brain works would allow us to better understand, predict and prevent brain-related diseases that affect a significant part of the population. Studying the brain response to certain input stimuli can help us determine the involved brain areas and understand the mechanisms that characterize behavioral and psychological traits.
In this research work, two methods used for monitoring brain activity, Electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI), have been studied with a view to their fusion, in an attempt to bring together the advantages of each. In particular, this work has focused on the analysis of a specific type of EEG and fMRI recordings that are related to certain events and capture the brain response under specific experimental conditions.
Using spatial features of the EEG, we can describe the temporal evolution of the electrical field recorded at the scalp. This work introduces the use of Hidden Markov Models (HMMs) for modeling EEG dynamics. This novel approach is applied to the discrimination of normal and progressive Mild Cognitive Impairment patients, with significant results.
EEG alone cannot provide the spatial localization needed to uncover and understand the neural mechanisms and processes of the human brain. Functional Magnetic Resonance Imaging (fMRI) provides the means of localizing functional activity, without, however, providing the timing details of these activations. Although at first glance it is apparent that the strengths of these two modalities, EEG and fMRI, complement each other, fusing the information provided by each one is a challenging task. A novel methodology for fusing EEG spatiotemporal features and fMRI features, based on Canonical Partial Least Squares (CPLS), is presented in this work. An HMM modeling approach is used to derive a novel feature-based representation of the EEG signal that characterizes the topographic information of the EEG. We use the HMM to project the EEG data into the Fisher score space and use the Fisher score to describe the dynamics of the EEG topography sequence. The correspondence between this new feature and the fMRI is studied using CPLS. This methodology is applied to extract features for the classification of a visual task. The results indicate that the proposed methodology is able to capture task-related activations that can be used for the classification of mental tasks. Extensions of the proposed models are examined, along with future research directions and applications.
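To make the fusion step concrete, the following is a minimal sketch of PLS-style coupling between two feature blocks (e.g. HMM-derived EEG Fisher-score features and fMRI features). This is plain SVD-based PLS, a simplification of the CPLS variant the abstract refers to; the data shapes and the shared latent signal are illustrative assumptions:

```python
import numpy as np

def pls_svd(X, Y, n_components=2):
    """Paired weight vectors maximizing cross-block covariance.

    X: (n_trials, p) features from modality 1; Y: (n_trials, q) from modality 2.
    Returns per-block weights and the latent scores of each block."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)          # cross-covariance between the blocks
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Wx, Wy = U[:, :n_components], Vt[:n_components].T
    return Wx, Wy, Xc @ Wx, Yc @ Wy       # weights and latent scores

# Toy demo: both modalities observe the same trial-level latent signal.
rng = np.random.default_rng(1)
latent = rng.standard_normal((100, 1))
X = latent @ rng.standard_normal((1, 6)) + 0.1 * rng.standard_normal((100, 6))
Y = latent @ rng.standard_normal((1, 4)) + 0.1 * rng.standard_normal((100, 4))
Wx, Wy, Tx, Ty = pls_svd(X, Y, n_components=1)
# The first pair of latent scores should be strongly correlated.
print(abs(np.corrcoef(Tx[:, 0], Ty[:, 0])[0, 1]) > 0.9)
```

The latent scores `Tx`, `Ty` play the role of the fused features that are subsequently fed to a classifier.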
Tensor Analysis and Fusion of Multimodal Brain Images
Current high-throughput data acquisition technologies probe dynamical systems
with different imaging modalities, generating massive data sets at different
spatial and temporal resolutions, posing challenging problems in multimodal data
fusion. A case in point is the attempt to parse out the brain structures and
networks that underpin human cognitive processes by analysis of different
neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the
multimodal, multi-scale nature of neuroimaging data is well reflected by a
multi-way (tensor) structure where the underlying processes can be summarized
by a relatively small number of components or "atoms". We introduce
Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network
notation, to analyze these models. These diagrams not only clarify
matrix and tensor EEG and fMRI time/frequency analysis and inverse problems,
but also help understand multimodal fusion via Multiway Partial Least Squares
and Coupled Matrix-Tensor Factorization. We show here, for the first time, that
Granger causal analysis of brain networks is a tensor regression problem, thus
allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI
recordings shows the potential of the methods and suggests their use in other
scientific domains.
Comment: 23 pages, 15 figures, submitted to Proceedings of the IEE
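The "atoms" mentioned above are the rank-1 components of a canonical polyadic (CP) decomposition. The following is a minimal alternating-least-squares sketch for a 3-way tensor; the tensor sizes, rank, and iteration count are illustrative assumptions:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    """Factor a 3-way tensor into rank-R factor matrices A, B, C."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def reconstruct(A, B, C):
    """Sum of rank-1 "atoms": T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Toy demo: recover a noiseless rank-2 tensor.
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (5, 6, 7))
T = reconstruct(A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(T - reconstruct(A, B, C)) / np.linalg.norm(T)
print(err < 1e-3)
```

In the neuroimaging setting the three modes would typically be space, time, and frequency (or subjects), so each atom couples a spatial map with a temporal and a spectral signature.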
Concurrent fNIRS and EEG for brain function investigation: A systematic, methodology-focused review
Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) stand as state-of-the-art techniques for non-invasive functional neuroimaging. On a unimodal basis, EEG has poor spatial resolution but high temporal resolution. In contrast, fNIRS offers better spatial resolution, though it is constrained by its poor temporal resolution. One important merit shared by EEG and fNIRS is that both modalities are favorably portable and can be integrated into a compatible experimental setup, providing compelling ground for the development of a multimodal fNIRS-EEG integration analysis approach. Despite a growing number of studies using concurrent fNIRS-EEG designs reported in recent years, the methodological reference of past studies remains unclear. To fill this knowledge gap, this review critically summarizes the analysis methods currently used in concurrent fNIRS-EEG studies, providing an up-to-date overview and guideline for future projects conducting concurrent fNIRS-EEG studies. A literature search was conducted using PubMed and Web of Science through 31 August 2021. After screening and qualification assessment, 92 studies involving concurrent fNIRS-EEG data recordings and analyses were included in the final methodological review. Specifically, three methodological categories of concurrent fNIRS-EEG data analyses were identified and described in detail: EEG-informed fNIRS analyses, fNIRS-informed EEG analyses, and parallel fNIRS-EEG analyses. Finally, we highlight current challenges and potential directions for concurrent fNIRS-EEG data analyses in future research.
Within-Subject Joint Independent Component Analysis of Simultaneous fMRI/ERP in an Auditory Oddball Paradigm
The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, was observed with increasing deviant frequency difference. jICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post-central gyri, likely associated with the right-hand somato-motor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI.
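The core jICA idea is that ERP and fMRI feature vectors for the same subjects are concatenated and unmixed together, so each independent component carries a linked ERP part and fMRI part. The following is a minimal sketch of that idea; the FastICA implementation and the toy feature dimensions are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Symmetric FastICA (tanh nonlinearity). X: (n_mixtures, n_samples)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    d, E = d[-n_components:], E[:, -n_components:]   # keep top eigenpairs
    K = (E / np.sqrt(d)).T                           # whitening matrix
    Z = K @ Xc
    W = rng.standard_normal((n_components, n_components))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)              # symmetric decorrelation
        W = u @ vt
    return W @ Z                                     # recovered components

# Toy jICA setup: 2 joint components, each spanning 150 "ERP time points" plus
# 150 "fMRI voxels"; 20 subjects mix them with random amplitudes.
rng = np.random.default_rng(3)
S_true = rng.laplace(size=(2, 300))       # concatenated ERP+fMRI patterns
A_mix = rng.standard_normal((20, 2))      # per-subject loadings
X = A_mix @ S_true                        # subjects x concatenated features
S_hat = fastica(X, n_components=2)

# Each true component should be recovered (up to order and sign).
match = [max(abs(np.corrcoef(s, sh)[0, 1]) for sh in S_hat) for s in S_true]
print(all(m > 0.95 for m in match))
```

Because both modalities share one mixing matrix, the per-subject loading of a component is common to its ERP and fMRI parts, which is what links the two modalities.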
Challenges in Multimodal Data Fusion
In various disciplines, information about the same phenomenon can be acquired from different types of detectors, under different conditions, at different observation times, in multiple experiments or subjects, etc. We use the term "modality" to denote each such type of acquisition framework. Due to the rich characteristics of natural phenomena, as well as of the environments in which they occur, it is rare that a single modality can provide complete knowledge of the phenomenon of interest. The increasing availability of several modalities at once introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. It is the aim of this paper to evoke and promote various challenges in multimodal data fusion at the conceptual level, without focusing on any specific model, method or application.
Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes
Image analysis using more than one modality (i.e. multi-modal) has been
increasingly applied in the field of biomedical imaging. One of the challenges
in performing the multimodal analysis is that there exist multiple schemes for
fusing the information from different modalities, where such schemes are
application-dependent and lack a unified framework to guide their designs. In
this work we first propose a conceptual architecture for the image fusion
schemes in supervised biomedical image analysis: fusing at the feature level,
fusing at the classifier level, and fusing at the decision-making level.
Further, motivated by the recent success in applying deep learning to natural
image analysis, we implement the three image fusion schemes above based on
Convolutional Neural Networks (CNNs) with varied structures and combine them
into a single framework. The proposed image segmentation framework is capable
of analyzing multi-modality images using different fusion schemes
simultaneously. The framework is applied to detect the presence of soft tissue
sarcoma from the combination of Magnetic Resonance Imaging (MRI), Computed
Tomography (CT) and Positron Emission Tomography (PET) images. The results show
that while all the fusion schemes outperform the single-modality schemes,
fusing at the feature level generally achieves the best performance in terms of
both accuracy and computational cost, but it also suffers from decreased
robustness in the presence of large errors in any image modality.
Comment: Zhe Guo and Xiang Li contribute equally to this work
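The three fusion levels can be sketched with simple numpy stand-ins (random projections and a fixed linear head) in place of trained CNNs; every function and dimension here is an illustrative assumption, not the paper's models:

```python
import numpy as np

D_FEAT = 8  # assumed per-modality feature dimension

def encode(image, seed):
    """Stand-in for a per-modality CNN feature extractor (random projection + tanh)."""
    proj = np.random.default_rng(seed).standard_normal((D_FEAT, image.size))
    return np.tanh(proj @ image.ravel())

def classify(feat, seed):
    """Stand-in for a classifier head: maps a feature vector to a probability."""
    w = np.random.default_rng(seed).standard_normal(feat.size)
    return 1 / (1 + np.exp(-w @ feat))

rng = np.random.default_rng(0)
mri, ct, pet = (rng.standard_normal((4, 4)) for _ in range(3))
feats = [encode(x, s) for x, s in ((mri, 1), (ct, 2), (pet, 3))]

# 1) Feature-level fusion: concatenate per-modality features, one shared classifier.
p_feature = classify(np.concatenate(feats), seed=10)

# 2) Classifier-level fusion: per-modality sub-networks are merged inside the
#    classifier (here crudely: their features are pooled before the shared head).
p_classifier = classify(np.mean(feats, axis=0), seed=10)

# 3) Decision-level fusion: fully independent per-modality decisions, then averaged.
p_decision = np.mean([classify(f, seed=10 + i) for i, f in enumerate(feats)])

print(all(0.0 < p < 1.0 for p in (p_feature, p_classifier, p_decision)))
```

The sketch shows where the merge happens in each scheme: before the classifier, inside it, or after the per-modality decisions, which is what makes feature-level fusion expressive but sensitive to a corrupted modality.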