
    A Novel Technique for Selecting EMG-Contaminated EEG Channels in Self-Paced Brain-Computer Interface Task Onset

    Electromyography (EMG) artefacts are a well-known problem in electroencephalography (EEG) studies (BCIs, brain mapping, and clinical applications). Blind source separation (BSS) techniques are commonly used to handle artefacts; however, they may remove not only EMG artefacts but also some useful EEG sources. To reduce this loss of useful information, we propose a new technique for statistically selecting the EEG channels that are contaminated with class-dependent EMG (henceforth called EMG-CCh). Methods: The EMG-CCh are selected based on the correlation between EEG and facial EMG channels, and a Wilcoxon test is used to determine whether the artefacts play a significant role in class separation. To ensure that promising results are not simply due to weak EMG removal, reliability tests were performed. Results: In our data set, comparing BSS artefact removal applied in two ways, to all channels and only to the EMG-CCh, showed that ICA, PCA and BSS-CCA can yield significantly better (p < 0.05) class separation with the proposed method (in 79% of cases for ICA, 53% for PCA and 11% for BSS-CCA). With BCI competition data, we saw improvement in 60% of cases for ICA and BSS-CCA. Conclusion: The simple method proposed in this paper improved class separation on both our data and the BCI competition data. Significance: To our knowledge, there are no existing methods for removing EMG artefacts based on the correlation between EEG and EMG channels. The EMG-CCh selection can also be used on its own or combined with pre-existing artefact handling methods. For these reasons, we believe this method can be useful for other EEG studies.
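
    The abstract describes the selection criterion only at a high level. As a hedged illustration, the sketch below flags channels whose trial-wise EEG-EMG correlation differs between the two task classes; the array layout, the function name, the rank-sum variant of the Wilcoxon test (the abstract does not specify which variant was used) and the alpha threshold are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, ranksums

    def select_emg_cch(eeg_trials, emg_trials, labels, alpha=0.05):
        """eeg_trials: (n_trials, n_channels, n_samples); emg_trials: (n_trials,
        n_samples) facial EMG reference; labels: (n_trials,) with two class values."""
        labels = np.asarray(labels)
        n_trials, n_channels, _ = eeg_trials.shape
        # Trial-wise correlation of every EEG channel with the facial EMG channel.
        corr = np.empty((n_trials, n_channels))
        for t in range(n_trials):
            for ch in range(n_channels):
                corr[t, ch] = pearsonr(eeg_trials[t, ch], emg_trials[t])[0]
        cls_a, cls_b = np.unique(labels)
        contaminated = []
        for ch in range(n_channels):
            # If the EEG-EMG correlation differs between classes, the channel
            # carries class-dependent muscle activity (an EMG-CCh candidate).
            p = ranksums(corr[labels == cls_a, ch], corr[labels == cls_b, ch]).pvalue
            if p < alpha:
                contaminated.append(ch)
        return contaminated
    ```

    Channels returned this way would then be the only ones passed to BSS artefact removal, mirroring the EMG-CCh-only condition compared in the paper.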

    Improved physiological noise regression in fNIRS: a multimodal extension of the General Linear Model using temporally embedded Canonical Correlation Analysis

    For the robust estimation of evoked brain activity from functional near-infrared spectroscopy (fNIRS) signals, it is crucial to reduce nuisance signals from systemic physiology and motion. The current best practice incorporates short-separation (SS) fNIRS measurements as regressors in a General Linear Model (GLM). However, several challenging signal characteristics, such as non-instantaneous and non-constant coupling, are not yet addressed by this approach, and additional auxiliary signals are not optimally exploited. We recently introduced a new methodological framework for the unsupervised multivariate analysis of fNIRS signals using Blind Source Separation (BSS) methods. Building on that framework, in this manuscript we show how to incorporate the advantages of regularized temporally embedded Canonical Correlation Analysis (tCCA) into the supervised GLM. This approach allows flexible integration of any number of auxiliary modalities and signals. We provide guidance for the selection of optimal parameters and auxiliary signals for the proposed GLM extension. Its performance in the recovery of evoked HRFs is then evaluated using both simulated ground-truth data and real experimental data, and compared with the GLM with short-separation regression. Our results show that the GLM with tCCA significantly improves upon the current best practice, yielding better results across all applied metrics: correlation (HbO max. +45%), root mean squared error (HbO max. -55%), F-score (HbO up to 3.25-fold) and p-value, as well as the power spectral density of the noise floor. The proposed method can be incorporated into the GLM in an easily applicable way that flexibly combines any available auxiliary signals into optimal nuisance regressors. This work has potential significance both for conventional neuroscientific fNIRS experiments and for emerging applications of fNIRS in everyday environments, medicine and BCI, where a high contrast-to-noise ratio is important for single-trial analysis.
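
    To make the temporal-embedding idea concrete, here is a minimal sketch: auxiliary signals are augmented with time-lagged copies, CCA extracts their components most correlated with the fNIRS data, and those components enter the GLM as nuisance regressors. It uses scikit-learn's unregularized CCA (the paper's regularized tCCA and parameter choices are not reproduced here); the lag count, component count and function names are illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def temporal_embed(aux, n_lags):
        """aux: (n_samples, n_aux) auxiliary signals -> (n_samples, n_aux * n_lags)
        matrix whose column blocks are progressively delayed copies (zero-padded)."""
        n, k = aux.shape
        out = np.zeros((n, k * n_lags))
        for lag in range(n_lags):
            out[lag:, lag * k:(lag + 1) * k] = aux[:n - lag]
        return out

    def tcca_nuisance(fnirs, aux, n_lags=10, n_components=2):
        """fnirs: (n_samples, n_channels). Returns nuisance regressors for the GLM."""
        emb = temporal_embed(aux, n_lags)
        cca = CCA(n_components=n_components)
        cca.fit(emb, fnirs)
        # Canonical variates of the lagged auxiliary signals: the physiology
        # components most correlated with the fNIRS data.
        return cca.transform(emb)

    # Illustrative use: append to the task design and solve ordinary least squares.
    # design = np.column_stack([task_regressors, tcca_nuisance(fnirs, aux),
    #                           np.ones(len(fnirs))])
    # betas, *_ = np.linalg.lstsq(design, fnirs, rcond=None)
    ```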

    Artifact Removal Methods in EEG Recordings: A Review

    To obtain a correct analysis of electroencephalogram (EEG) signals, non-physiological and physiological artifacts should be removed. This study gives an overview of existing methods for removing physiological artifacts, e.g., ocular, cardiac, and muscle artifacts. The datasets, simulation platforms, and performance measures used in previous related research are summarized. The advantages and disadvantages of each technique are discussed, including the regression method, filtering methods, blind source separation (BSS), wavelet transform (WT), empirical mode decomposition (EMD), singular spectrum analysis (SSA), and independent vector analysis (IVA). The applications of hybrid approaches are also presented, including the discrete wavelet transform with adaptive filtering (DWT-AFM), DWT-BSS, EMD-BSS, singular spectrum analysis with adaptive noise canceller (SSA-ANC), SSA-BSS, and EMD-IVA. Finally, a comparative analysis of these existing methods is provided based on their performance and merits. The results show that hybrid methods remove artifacts more effectively than individual methods.
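
    As a toy illustration of one reviewed hybrid family (DWT-BSS), the sketch below wavelet-decomposes each channel, runs ICA on the finest detail band, suppresses heavy-tailed (artifact-like) sources and reconstructs. The wavelet, decomposition level and kurtosis threshold are arbitrary assumptions for demonstration, not a prescription from the review.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def dwt_bss_clean(eeg, wavelet="db4", level=4, kurt_thresh=5.0):
        """eeg: (n_channels, n_samples). Returns EEG with artifact-like ICA
        sources removed from the finest wavelet detail band."""
        coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in eeg]
        band = np.array([c[-1] for c in coeffs])  # finest detail band per channel
        ica = FastICA(n_components=band.shape[0], random_state=0)
        sources = ica.fit_transform(band.T).T
        # Heuristic: heavy-tailed (high-kurtosis) sources are flagged as artifact.
        sources[kurtosis(sources, axis=1) > kurt_thresh] = 0.0
        band_clean = ica.inverse_transform(sources.T).T
        cleaned = []
        for ch, c in enumerate(coeffs):
            c[-1] = band_clean[ch]
            cleaned.append(pywt.waverec(c, wavelet)[: eeg.shape[1]])
        return np.array(cleaned)
    ```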

    A fast approach to removing muscle artifacts for EEG with signal serialization based Ensemble Empirical Mode Decomposition

    An electroencephalogram (EEG) is an electrophysiological signal reflecting the functional state of the brain. As the control signal of a brain-computer interface (BCI), EEG may build a bridge between humans and computers and improve the quality of life of patients with movement disorders. Collected EEG signals are extremely susceptible to contamination by electromyography (EMG) artifacts, which affect their original characteristics. Therefore, EEG denoising is an essential preprocessing step in any BCI system. Previous studies have confirmed that the combination of ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA) can effectively suppress EMG artifacts. However, the time-consuming iterative process of EEMD limits the application of the EEMD-CCA method to real-time monitoring in BCI. Compared with the existing EEMD, the recently proposed signal serialization based EEMD (sEEMD) provides effective signal analysis and fast mode decomposition. In this study, an EMG denoising method based on sEEMD and CCA is discussed. All analyses are carried out on semi-simulated data. The results show that, in terms of frequency and amplitude, the intrinsic mode functions (IMFs) decomposed by sEEMD are consistent with the IMFs obtained by EEMD. There is no significant difference in the ability to separate EMG artifacts from EEG signals between the sEEMD-CCA and EEMD-CCA methods (p > 0.05). Even under heavy contamination (signal-to-noise ratio below 2 dB), the relative root mean squared error is about 0.3 and the average correlation coefficient remains above 0.9. The running speed of the sEEMD-CCA method for removing EMG artifacts is significantly improved compared with that of the EEMD-CCA method (p < 0.05): the running time for three lengths of semi-simulated data is shortened by more than 50%. This indicates that sEEMD-CCA is a promising tool for EMG artifact removal in real-time BCI systems.
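
    A hedged sketch of the underlying EEMD-CCA pipeline is given below, using the PyEMD package for standard EEMD (the serialized sEEMD variant is not, to our knowledge, publicly packaged) and CCA between the IMFs and their one-sample-delayed copies; components with weak autocorrelation are treated as EMG-like and removed. The correlation threshold and function name are assumptions.

    ```python
    import numpy as np
    from PyEMD import EEMD
    from sklearn.cross_decomposition import CCA

    def eemd_cca_denoise(channel, corr_thresh=0.9):
        """channel: 1-D EEG array. Decompose into IMFs, remove canonical
        components with low autocorrelation (EMG-like), and reconstruct."""
        imfs = EEMD().eemd(channel)                    # (n_imfs, n_samples)
        delayed = np.roll(imfs, 1, axis=1)             # one-sample delayed copy
        cca = CCA(n_components=imfs.shape[0])
        U, V = cca.fit_transform(imfs.T, delayed.T)
        # Canonical correlation of each component with its delayed self; EMG is
        # broadband, so its components show weak autocorrelation.
        r = np.array([np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(U.shape[1])])
        U[:, r < corr_thresh] = 0.0
        imfs_clean = cca.inverse_transform(U).T        # back to the IMF domain
        return imfs_clean.sum(axis=0)                  # sum of IMFs = signal
    ```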

    Defining the Plasticity of Transcription Factor Binding Sites by Deconstructing DNA Consensus Sequences: The PhoP-Binding Sites among Gamma/Enterobacteria

    Transcriptional regulators recognize specific DNA sequences. Because these sequences are embedded in the background of genomic DNA, it is hard to identify the key cis-regulatory elements that determine disparate patterns of gene expression. Detecting the intra- and inter-species differences among these sequences is crucial for understanding the molecular basis of both differential gene expression and evolution. Here, we address this problem by investigating the target promoters controlled by the DNA-binding PhoP protein, which governs virulence and Mg2+ homeostasis in several bacterial species. PhoP is particularly interesting: it is highly conserved in different gamma/enterobacteria, not only regulating ancestral genes but also governing the expression of dozens of horizontally acquired genes that differ from species to species. Our approach consists of decomposing the DNA binding site sequences for a given regulator into families of motifs (termed submotifs) using a machine learning method inspired by the “Divide & Conquer” strategy. Partitioning a motif into sub-patterns produced computational advantages for classification, resulting in the discovery of new members of a regulon and alleviating the problem of distinguishing functional sites in chromatin immunoprecipitation and DNA microarray genome-wide analyses. Moreover, we found that certain partitions were useful in revealing biological properties of binding site sequences, including modular gains and losses of PhoP binding sites through evolutionary turnover events, as well as conservation in distant species. The high conservation of PhoP submotifs within gamma/enterobacteria, as well as of the regulatory protein that recognizes them, suggests that the major cause of divergence between related species is not the binding sites, as was previously suggested for other regulators. Instead, the divergence may be attributed to the fast evolution of orthologous target genes and/or of the promoter architectures resulting from the interaction of those binding sites with the RNA polymerase.
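
    The partitioning algorithm itself is not described here in enough detail to reproduce. As a toy stand-in, the sketch below clusters one-hot-encoded, aligned binding-site sequences with k-means and summarizes each cluster as its own position frequency matrix, which conveys the submotif idea without claiming to be the paper's method; the encoding, clustering method and k are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    BASES = "ACGT"

    def one_hot(seqs):
        """seqs: equal-length, aligned DNA strings -> (n_seqs, 4 * length)."""
        idx = {b: i for i, b in enumerate(BASES)}
        X = np.zeros((len(seqs), 4 * len(seqs[0])))
        for s, seq in enumerate(seqs):
            for p, base in enumerate(seq):
                X[s, 4 * p + idx[base]] = 1.0
        return X

    def submotif_families(seqs, k=3):
        """Partition binding sites into k families; return a (4, length)
        position frequency matrix per family."""
        X = one_hot(seqs)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        # Column-wise base frequencies of each family form its submotif PFM.
        pfms = [X[labels == c].mean(axis=0).reshape(-1, 4).T for c in range(k)]
        return labels, pfms
    ```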

    A Scalable Approach to Independent Vector Analysis by Shared Subspace Separation for Multi-Subject fMRI Analysis

    Joint blind source separation (JBSS) has wide applications in modeling latent structures across multiple related datasets. However, JBSS is computationally prohibitive with high-dimensional data, limiting the number of datasets that can be included in a tractable analysis. Furthermore, JBSS may not be effective if the data’s true latent dimensionality is not adequately modeled: severe overparameterization may lead to poor separation and time performance. In this paper, we propose a scalable JBSS method by modeling and separating the “shared” subspace from the data. The shared subspace is defined as the subset of latent sources that exists across all datasets, represented by groups of sources that collectively form a low-rank structure. Our method first provides an efficient initialization of independent vector analysis (IVA) with a multivariate Gaussian source prior (IVA-G), specifically designed to estimate the shared sources. Estimated sources are then evaluated regarding whether they are shared, upon which further JBSS is applied separately to the shared and non-shared sources. This provides an effective means of reducing the dimensionality of the problem, improving analyses with larger numbers of datasets. We apply our method to resting-state fMRI datasets, demonstrating that it can achieve excellent estimation performance with significantly reduced computational costs.
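
    As a loose sketch of the shared/non-shared split (not the paper's IVA-G initialization), the code below unmixes each dataset independently and labels a component "shared" when a strongly correlated counterpart exists in every other dataset. The plain PCA + FastICA initialization, the greedy matching and the 0.5 threshold are all stand-in assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    def split_shared_sources(datasets, n_sources, corr_thresh=0.5):
        """datasets: list of (n_features, n_samples) arrays with a common
        n_samples. Returns indices of shared / non-shared reference sources."""
        sources = []
        for X in datasets:
            # Cheap per-dataset unmixing as a stand-in initialization.
            Y = PCA(n_components=n_sources).fit_transform(X.T)
            sources.append(FastICA(n_components=n_sources,
                                   random_state=0).fit_transform(Y).T)
        ref = sources[0]
        min_corr = np.ones(n_sources)
        for S in sources[1:]:
            C = np.abs(np.corrcoef(ref, S)[:n_sources, n_sources:])
            # Best available partner per reference source (greedy, not one-to-one).
            min_corr = np.minimum(min_corr, C.max(axis=1))
        shared = np.flatnonzero(min_corr >= corr_thresh)
        return shared, np.flatnonzero(min_corr < corr_thresh)
    ```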

    Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review

    Electroencephalography (EEG) has been widely applied to brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, owing to its portability, high temporal resolution, ease of use and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored in the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research on SSVEP-based BCIs, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations), and classification techniques, are described. Research challenges and opportunities concerning spontaneous brain activities, mental fatigue, transfer learning and hybrid BCIs are also discussed.
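
    The standard CCA detector that this literature builds on can be stated compactly: correlate the multichannel EEG segment with sine/cosine references at each candidate stimulus frequency (and its harmonics) and pick the frequency with the largest canonical correlation. The sketch below follows that textbook formulation; the harmonic count is a typical but assumed choice.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def ssvep_cca_detect(eeg, fs, freqs, n_harmonics=3):
        """eeg: (n_channels, n_samples); freqs: candidate stimulus frequencies in Hz.
        Returns the detected frequency and per-frequency canonical correlations."""
        t = np.arange(eeg.shape[1]) / fs
        scores = []
        for f in freqs:
            # Sine/cosine references at the fundamental and its harmonics.
            ref = np.vstack([fn(2 * np.pi * h * f * t)
                             for h in range(1, n_harmonics + 1)
                             for fn in (np.sin, np.cos)])
            U, V = CCA(n_components=1).fit_transform(eeg.T, ref.T)
            scores.append(np.corrcoef(U[:, 0], V[:, 0])[0, 1])
        return freqs[int(np.argmax(scores))], scores
    ```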

    Simulating model uncertainty of subgrid-scale processes by sampling model errors at convective scales

    Ideally, perturbation schemes in ensemble forecasts should be based on the statistical properties of the model errors. Often, however, the statistical properties of these model errors are unknown. In practice, the perturbations are pragmatically modelled and tuned to maximize the skill of the ensemble forecast. In this paper, a general methodology is developed to diagnose the model error linked to a specific physical process, based on a comparison between a target and a reference model. Here, the reference model is a configuration of the ALADIN (Aire Limitée Adaptation Dynamique Développement International) model with a parameterization of deep convection. This configuration is also run with the deep-convection parameterization scheme switched off, degrading the forecast skill. The model error is then defined as the difference of the energy and mass fluxes between the reference model with scale-aware deep-convection parameterization and the target model without deep-convection parameterization. In the second part of the paper, the diagnosed model-error characteristics are used to stochastically perturb the fluxes of the target model by sampling the model errors from a training period in such a way that the distribution and the vertical and multivariate correlation within a grid column are preserved. By perturbing the fluxes it is guaranteed that the total mass, heat and momentum are conserved. The tests, performed over the period 11–20 April 2009, show that the ensemble system with the stochastic flux perturbations combined with the initial-condition perturbations not only outperforms the target ensemble, where deep convection is not parameterized, but for many variables even performs better than the reference ensemble (with the scale-aware deep-convection scheme). The introduction of the stochastic flux perturbations reduces the small-scale erroneous spread while increasing the overall spread, leading to a more skillful ensemble. The impact is largest in the upper troposphere, with substantial improvements compared to other state-of-the-art stochastic perturbation schemes. At lower levels the improvements are smaller or neutral, except for temperature, where the forecast skill is degraded.
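
    The resampling step lends itself to a very small sketch: because entire grid-column error profiles are drawn jointly from the training archive, the vertical and multivariate correlations within a column are preserved by construction. The array layout and names below are illustrative assumptions, not the ALADIN diagnostics.

    ```python
    import numpy as np

    def sample_flux_perturbations(error_archive, n_columns, rng=None):
        """error_archive: (n_cases, n_levels, n_variables) flux errors diagnosed
        as reference minus target. Returns (n_columns, n_levels, n_variables)."""
        rng = np.random.default_rng(rng)
        # Drawing complete (levels x variables) columns, never mixing cases,
        # keeps the within-column correlation structure intact.
        idx = rng.integers(0, error_archive.shape[0], size=n_columns)
        return error_archive[idx]

    # Illustrative use:
    # perturbed = target_fluxes + sample_flux_perturbations(archive, n_cols)
    ```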