
    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity owing to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, call for a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A further long-term goal is to enable an alternative EEG recording strategy for portable, long-term monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered for decomposing the nonlinear and nonstationary EEG signal into a set of components that are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced, and its operation is illustrated on two-channel neuronal spike streams. Common spatial pattern (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, the appropriate complex CSP algorithm can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for the more natural and intuitive design of advanced BCI systems.
Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to one sound source among a mixture of sound stimuli, with the aim of improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure evoked by a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded in a custom-made hearing aid earplug. The new platform promises both short- and long-term continuous use for standard brain monitoring and interfacing applications.
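The amplitude and phase comparison the thesis describes rests on the analytic-signal representation of each oscillatory component. A minimal sketch, using synthetic two-channel data and SciPy's Hilbert transform rather than the thesis's multivariate EMD, recovers a known phase lag between channels:

```python
import numpy as np
from scipy.signal import hilbert

fs = 256
t = np.arange(0, 2, 1 / fs)
# Two hypothetical EEG channels sharing a common 10 Hz oscillation,
# with a fixed pi/4 phase lag between them.
ch1 = np.sin(2 * np.pi * 10 * t)
ch2 = np.sin(2 * np.pi * 10 * t - np.pi / 4)

# The analytic signal gives instantaneous amplitude and phase per channel.
a1, a2 = hilbert(ch1), hilbert(ch2)
amp1 = np.abs(a1)
phase_diff = np.angle(a1 * np.conj(a2))  # instantaneous phase difference

# Away from the edges (where the Hilbert transform is distorted),
# the estimated lag should be close to pi/4 and the amplitude close to 1.
mid = phase_diff[fs // 2: -fs // 2]
print(float(np.mean(mid)))
```

The same amplitude/phase comparison generalises to components extracted by any decomposition, which is why it pairs naturally with EMD-style methods.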

    Exploring EEG Features in Cross-Subject Emotion Recognition

    Recognizing cross-subject emotions based on brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based only on one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, including 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the "leave-one-subject-out" verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manually operated feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy compared to the other features. Through a pilot correlation analysis, we further examined the highly correlated features, for a better understanding of the implications hidden in those features that allow for differentiating cross-subject emotions. Various remarkable observations have been made. The results of this paper validate the possibility of exploring robust EEG features in cross-subject emotion recognition.
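The "leave-one-subject-out" strategy with an SVM can be sketched as follows. The data here are synthetic stand-ins, and the Hjorth mobility feature highlighted in the abstract is computed as the ratio of the standard deviation of the first derivative to that of the signal:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def hjorth_mobility(x):
    # Mobility: std of the first difference over std of the signal.
    return np.std(np.diff(x)) / np.std(x)

rng = np.random.default_rng(0)
# Synthetic stand-in data: 6 subjects x 20 trials, 4 channels x 512 samples.
# Odd-numbered trials carry extra high-frequency content, raising mobility.
X_feat, y, groups = [], [], []
for subj in range(6):
    for trial in range(20):
        label = trial % 2
        sig = rng.standard_normal((4, 512))
        if label:
            sig += 0.8 * np.diff(rng.standard_normal((4, 513)), axis=1)
        X_feat.append([hjorth_mobility(c) for c in sig])
        y.append(label)
        groups.append(subj)

# Leave-one-subject-out: each fold holds out one subject entirely,
# so the classifier is always tested on an unseen subject.
scores = cross_val_score(SVC(kernel="rbf"), np.array(X_feat), np.array(y),
                         cv=LeaveOneGroupOut(), groups=groups)
print(scores.mean())
```

On real EEG the cross-subject gap is far larger than in this toy example, which is precisely the difficulty the paper studies.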

    Translation of EEG spatial filters from resting to motor imagery using independent component analysis.

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) often use spatial filters to improve the signal-to-noise ratio of task-related EEG activities. To obtain robust spatial filters, large amounts of labeled data, which are often expensive and labor-intensive to obtain, need to be collected in a training procedure before online BCI control. Several studies have recently developed zero-training methods using a session-to-session scenario in order to alleviate this problem. To our knowledge, a state-to-state translation, which applies spatial filters derived from one state to another, has never been reported. This study proposes a state-to-state, zero-training method to construct spatial filters for extracting EEG changes induced by motor imagery. Independent component analysis (ICA) was separately applied to the multi-channel EEG in the resting and the motor imagery states to obtain motor-related spatial filters. The resultant spatial filters were then applied to single-trial EEG to differentiate imagined left- and right-hand movements. On a motor imagery dataset collected from nine subjects, comparable classification accuracies were obtained using ICA-based spatial filters derived from the two states (motor imagery: 87.0%, resting: 85.9%), both significantly higher than the accuracy achieved using monopolar scalp EEG data (80.4%). The proposed method considerably increases the practicality of BCI systems in real-world environments because it is less sensitive to electrode misalignment across different sessions or days and does not require annotated pilot data to derive spatial filters.
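The core idea, deriving an unmixing matrix from one state and applying it to data from another, can be sketched with scikit-learn's FastICA on synthetic mixed sources; the paper's actual channel setup, component selection, and classification stage are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 2000

# Hypothetical resting-state EEG: non-Gaussian latent sources mixed linearly
# into channels by an unknown mixing matrix.
sources = rng.laplace(size=(n_samples, n_channels))
mixing = rng.standard_normal((n_channels, n_channels))
resting = sources @ mixing.T

# Fit ICA on the resting state; components_ holds the unmixing matrix,
# whose rows act as data-driven spatial filters.
ica = FastICA(n_components=n_channels, random_state=0)
ica.fit(resting)
W = ica.components_                      # shape: (components, channels)

# Apply the resting-state spatial filters to a new "motor imagery" epoch
# generated from the same mixing (the state-to-state translation idea).
mi_epoch = rng.laplace(size=(500, n_channels)) @ mixing.T
mi_sources = (mi_epoch - ica.mean_) @ W.T
print(mi_sources.shape)
```

Because the mixing is a property of head geometry and electrode placement rather than of the mental state, filters learned in one state remain meaningful in the other, which is what makes the zero-training scheme plausible.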

    An Investigation of How the Wavelet Transform Can Affect the Correlation Performance of Biomedical Signals: The Correlation of EEG and HRV Frequency Bands in the Frontal Lobe of the Brain

    © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. Recently, the correlation between biomedical signals, such as electroencephalogram (EEG) and electrocardiogram (ECG) time series, has been analysed using the Pearson correlation method. Although wavelet transforms (WT) have been applied to time series data, including EEG and ECG signals, so far the correlation between wavelet-transformed signals has not been analysed. This research shows the correlation between EEG and heart rate variability (HRV), with and without wavelet transformation. Our results suggest that electrical activity in the frontal lobe of the brain is best correlated with the HRV. We assume this is because the frontal lobe is related to the higher mental functions of the cerebral cortex and is responsible for muscle movements of the body. Our results indicate a positive correlation between the Delta, Alpha and Beta frequencies of EEG and both the low-frequency (LF) and high-frequency (HF) components of HRV. This finding is independent of both participants and brain hemisphere.
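The effect of correlating wavelet-transformed series can be sketched with a hand-rolled Haar decomposition; the paper does not specify its wavelet family, so Haar is used here purely for self-containedness. Coarse approximation coefficients emphasise a shared slow component, so their Pearson correlation exceeds that of the raw series:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
fs = 128
t = np.arange(0, 8, 1 / fs)

# Hypothetical series: an EEG channel and an HRV series driven by a shared
# slow (0.3 Hz) component, each with independent additive noise.
shared = np.sin(2 * np.pi * 0.3 * t)
eeg = shared + 0.5 * rng.standard_normal(t.size)
hrv = shared + 0.5 * rng.standard_normal(t.size)

def haar_approx(x, levels):
    # Repeated Haar analysis steps, keeping only approximation coefficients:
    # each step pairwise-sums neighbours and rescales by 1/sqrt(2).
    for _ in range(levels):
        x = x[: 2 * (len(x) // 2)]
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
    return x

r_raw, _ = pearsonr(eeg, hrv)
r_wav, _ = pearsonr(haar_approx(eeg, 4), haar_approx(hrv, 4))
print(r_raw, r_wav)
```

Each Haar level roughly doubles the variance of the slow shared component per coefficient while leaving the white-noise variance unchanged, which is why the wavelet-domain correlation is higher.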

    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and for mining inter-channel and inter-frequency correlation, while a Recurrent Neural Network (RNN) is concatenated for integrating contextual information from the frame cube sequence. Experiments are carried out on a trial-level emotion recognition task, on the DEAP benchmarking dataset. Experimental results demonstrate that the proposed framework outperforms the classical methods on both of the emotional dimensions of Valence and Arousal.
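The CNN-then-RNN arrangement over a frame-cube sequence can be sketched in plain NumPy; the layer sizes, kernel, and weights below are arbitrary illustrations, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d_valid(frame, kernel):
    # Naive 'valid' 2-D cross-correlation, standing in for a CNN layer.
    kh, kw = kernel.shape
    h = frame.shape[0] - kh + 1
    w = frame.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

# A sequence of 10 "frame cubes": channel x frequency-band maps over time.
frames = rng.standard_normal((10, 8, 8))
kernel = rng.standard_normal((3, 3)) * 0.1   # CNN stage: mixes channels/bands
W_x = rng.standard_normal((16, 36)) * 0.1    # RNN input weights (36 = 6*6 feats)
W_h = rng.standard_normal((16, 16)) * 0.1    # RNN recurrent weights
h = np.zeros(16)

for frame in frames:
    # CNN stage extracts per-frame features; ReLU, then flatten.
    feat = np.maximum(conv2d_valid(frame, kernel), 0).flatten()
    # Elman-style RNN step integrates context across the sequence.
    h = np.tanh(W_x @ feat + W_h @ h)

print(h.shape)   # final hidden state would feed a classifier head
```

In a real implementation both stages would be trained jointly end-to-end (e.g. in PyTorch or TensorFlow); this sketch only shows how the convolutional features of each frame feed the recurrent state.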

    Overcoming Inter-Subject Variability in BCI Using EEG-Based Identification

    The high dependency of Brain Computer Interface (BCI) system performance on the BCI user is a well-known issue of many BCI devices. This contribution presents a new way to overcome this problem using a synergy between a BCI device and an EEG-based biometric algorithm. Using the biometric algorithm, the BCI device automatically identifies its current user and adapts the parameters of the classification process and of the BCI protocol to maximize BCI performance. In addition, we present an algorithm for EEG-based identification designed to be resistant to variations in EEG recordings between sessions, as demonstrated by an experiment with an EEG database containing two sessions recorded one year apart. Further, our algorithm is designed to be compatible with our movement-related BCI device, and the evaluation of the algorithm's performance took place under the conditions of a standard BCI experiment. Estimation of the mu rhythm fundamental frequency using Frequency Zooming AR modeling is used for EEG feature extraction, followed by a classifier based on the regularized Mahalanobis distance. An average subject identification score of 96% is achieved.
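A regularized Mahalanobis-distance identifier of the kind described can be sketched as follows; the three-dimensional features stand in for the paper's Frequency Zooming AR estimates of the mu rhythm, and the shrinkage scheme used for regularization is an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def reg_cov(X, lam=0.1):
    # Shrinkage-regularised covariance: blend the sample covariance
    # toward a scaled identity to keep it well-conditioned.
    S = np.cov(X, rowvar=False)
    return (1 - lam) * S + lam * np.trace(S) / S.shape[0] * np.eye(S.shape[0])

def mahalanobis_sq(x, mean, cov_inv):
    d = x - mean
    return float(d @ cov_inv @ d)

# Hypothetical per-subject feature distributions (e.g., mu-rhythm AR features):
# each subject has a distinct mean with small within-subject scatter.
centers = {s: rng.standard_normal(3) * 2 for s in range(4)}
train = {s: m + 0.3 * rng.standard_normal((30, 3)) for s, m in centers.items()}
models = {s: (X.mean(axis=0), np.linalg.inv(reg_cov(X))) for s, X in train.items()}

# Identify a new recording by minimum regularised Mahalanobis distance.
probe = centers[2] + 0.3 * rng.standard_normal(3)
pred = min(models, key=lambda s: mahalanobis_sq(probe, *models[s]))
print(pred)
```

The regularization matters most when the feature dimension is large relative to the number of enrollment trials, where the raw sample covariance would be singular or unstable.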

    Electroencephalographic Signal Processing and Classification Techniques for Noninvasive Motor Imagery Based Brain Computer Interface

    In motor imagery (MI) based brain-computer interfaces (BCI), success depends on reliable processing of the noisy, non-linear, and non-stationary brain activity signals for extraction of features, effective classification of MI activity, and translation to the corresponding intended actions. In this study, signal processing and classification techniques are presented for electroencephalogram (EEG) signals in a motor imagery based brain-computer interface. EEG signals were acquired with electrodes placed following the international 10-20 system. The acquired signals were pre-processed to remove artifacts using empirical mode decomposition (EMD) and two extended versions of EMD, ensemble empirical mode decomposition (EEMD) and multivariate empirical mode decomposition (MEMD), leading to a better signal-to-noise ratio (SNR) and reduced mean square error (MSE) compared to independent component analysis (ICA). The EEG signals were decomposed into intrinsic mode functions (IMFs), which were further processed to extract features such as sample entropy (SampEn) and band power (BP). The extracted features were used in support vector machines to characterize and identify MI activities. EMD and its variants, EEMD and MEMD, were compared with common spatial pattern (CSP) for different MI activities. SNR values from EMD, EEMD and MEMD (4.3, 7.64, 10.62) are much better than from ICA (2.1), but the accuracy of MI activity identification is slightly better for ICA than for EMD using BP and SampEn. Further work is outlined to include more features with a larger database for better classification accuracy.
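Sample entropy, one of the two features used here, can be computed directly from its definition; the parameter choices below (m = 2, r = 0.2 times the signal's standard deviation) are common defaults, not necessarily those of the study:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn = -log(A/B): B counts pairs of length-m templates within
    # Chebyshev distance r, A counts length-(m+1) matches; self-matches
    # are excluded by only comparing each template with later ones.
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(5)
t = np.linspace(0, 4 * np.pi, 400)
regular = np.sin(t)                  # predictable oscillation: low SampEn
noise = rng.standard_normal(400)     # white noise: high SampEn
print(sample_entropy(regular), sample_entropy(noise))
```

Lower SampEn indicates more self-similar, predictable dynamics, which is why it is useful for separating MI-related EEG rhythms from background activity.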

    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December, 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain signal processing. Popular brain-computer interface packages depend on commercial software products such as MATLAB. Moreover, almost every brain-computer interface software package is designed for a specific neuro-biological signal; there is no single Python-based package that supports motor imagery, sleep, and stimulated brain signal analysis. The need for a brain-computer interface package that can serve as a free alternative to commercial software motivated me to develop a toolbox on the Python platform. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented. The features of the toolbox are demonstrated with publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.
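As an illustration of the kind of pipeline step such a package wraps, the sketch below computes per-band powers from a Welch spectrum with SciPy; this is generic code, not the MEDUSA toolbox's actual API:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
fs = 250
t = np.arange(0, 4, 1 / fs)
# Hypothetical single-channel EEG: alpha-band (10 Hz) activity plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    # Welch PSD with 1 s segments, then integrate over the requested band.
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f < hi)
    return float(np.sum(pxx[mask]) * (f[1] - f[0]))

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in bands.items()}
print(max(powers, key=powers.get))
```

Band powers of this kind are a common starting feature set across motor imagery, sleep staging, and evoked-response analysis, which is the breadth the thesis aims to cover in one package.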