
    Effective EEG analysis for advanced AI-driven motor imagery BCI systems

    Developing effective signal processing for brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) involves balancing three aspects of functionality: classification performance, execution time, and the number of data channels used. The contributions in this thesis are centered on these three issues and focus on the classification of motor imagery (MI) data, which is generated during imagined movements. EEG time-series data is typically segmented, either for data augmentation or to mimic the buffering that happens in an online BCI. A multi-segment decision fusion approach is presented, which takes consecutive temporal segments of EEG data and uses decision fusion to boost classification performance. It was computationally lightweight and improved the performance of four conventional classifiers. An analysis of the contributions of electrodes from different scalp regions is also presented, and a subset of channels is recommended. Sparse learning (SL) classifiers have exhibited strong classification performance in the literature, but they are computationally expensive. To reduce test-set execution times, a novel EEG classification pipeline called GABSLEEG is presented, consisting of a genetic algorithm (GA) for channel selection and a dictionary-based SL module for classification. Subject-specific channel selection was carried out, in which the channels are selected based on training data from the subject. Using the GA-recommended subset of EEG channels reduced the execution time by 60% whilst preserving classification performance. Although subject-specific channel selection is widely used in the literature, effective subject-independent channel selection, in which channels are identified using data from other subjects, is the ideal because it lowers training latency and reduces the number of electrodes needed. A novel convolutional neural network (CNN)-based subject-independent channel selection method, called the integrated channel selection (ICS) layer, is presented. It performed on a par with or better than subject-specific channel selection, and it was computationally efficient, operating 12-17 times faster than the GA channel selection module. The ICS layer method was versatile, performing well with two different CNN architectures and datasets.
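
    A minimal sketch of the multi-segment decision-fusion idea described above, assuming trials shaped (channels x samples), simple log-variance features, and an LDA classifier standing in for the thesis's feature and classifier choices: each consecutive temporal segment is classified on its own, and the per-segment decisions are fused by majority vote.

        # Illustrative sketch of multi-segment decision fusion for MI-EEG trials.
        # The feature (log-variance) and classifier (LDA) are assumptions, not the thesis pipeline.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def segment_trial(trial, n_segments):
            """Split one trial (channels x samples) into consecutive temporal segments."""
            return np.array_split(trial, n_segments, axis=-1)

        def logvar_features(segment):
            """Per-channel log-variance feature vector for one segment."""
            return np.log(np.var(segment, axis=-1) + 1e-12)

        def fit_segment_classifiers(trials, labels, n_segments=4):
            """Train one classifier per temporal segment position."""
            clfs = []
            for s in range(n_segments):
                X = np.array([logvar_features(segment_trial(t, n_segments)[s]) for t in trials])
                clfs.append(LinearDiscriminantAnalysis().fit(X, labels))
            return clfs

        def predict_with_fusion(clfs, trial):
            """Classify each segment, then fuse the decisions by majority vote."""
            segments = segment_trial(trial, len(clfs))
            votes = [clf.predict(logvar_features(seg)[None, :])[0] for clf, seg in zip(clfs, segments)]
            values, counts = np.unique(votes, return_counts=True)
            return values[np.argmax(counts)]

    In an online BCI, the same fusion can be run on buffered segments as they arrive, which is one reason this kind of approach stays computationally lightweight.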

    A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on Convolutional Neural Network

    The key to an electroencephalography (EEG)-based brain-computer interface (BCI) lies in neural decoding, and decoding accuracy can be improved by using hybrid BCI paradigms, that is, by fusing multiple paradigms. However, hybrid BCIs usually require a separate processing pipeline for the EEG signals of each paradigm, which greatly reduces the efficiency of EEG feature extraction and the generalizability of the model. Here, we propose a two-stream convolutional neural network (TSCNN)-based hybrid brain-computer interface that combines the steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms. TSCNN automatically learns to extract EEG features for the two paradigms during training, and it improves the decoding accuracy on the test data by 25.4% compared with the MI mode and by 2.6% compared with the SSVEP mode. Moreover, the versatility of TSCNN is verified by its considerable performance in both single-mode (70.2% for MI, 93.0% for SSVEP) and hybrid-mode scenarios (95.6% for MI-SSVEP hybrid). Our work will facilitate real-world applications of EEG-based BCI systems.
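
    A rough sketch of a two-stream layout of this kind, assuming PyTorch and placeholder layer sizes, kernel shapes, channel count, and class count (this is not the published TSCNN architecture): one branch is intended to pick up MI-related features, the other SSVEP-related features, and their outputs are concatenated before a shared classifier.

        # Hypothetical two-stream CNN for hybrid MI + SSVEP decoding; all sizes are placeholders.
        import torch
        import torch.nn as nn

        class TwoStreamEEGNet(nn.Module):
            def __init__(self, n_channels=32, n_classes=4):
                super().__init__()
                def branch():
                    # Temporal convolution followed by a spatial convolution across electrodes.
                    return nn.Sequential(
                        nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),
                        nn.BatchNorm2d(8),
                        nn.ELU(),
                        nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),
                        nn.BatchNorm2d(16),
                        nn.ELU(),
                        nn.AdaptiveAvgPool2d((1, 16)),
                        nn.Flatten(),
                    )
                self.mi_stream = branch()     # branch intended for MI-related features
                self.ssvep_stream = branch()  # branch intended for SSVEP-related features
                self.classifier = nn.Linear(2 * 16 * 16, n_classes)

            def forward(self, x):             # x: (batch, 1, n_channels, n_samples)
                feats = torch.cat([self.mi_stream(x), self.ssvep_stream(x)], dim=1)
                return self.classifier(feats)

        model = TwoStreamEEGNet()
        logits = model(torch.randn(8, 1, 32, 1000))   # example forward pass -> (8, 4)

    The appeal of this shape is that a single model, trained once on hybrid-paradigm data, can then be evaluated in MI-only, SSVEP-only, and hybrid scenarios, as the abstract reports for TSCNN.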

    A Novel Convolutional Neural Network Classification Approach of Motor-Imagery EEG Recording Based on Deep Learning

    Abstract: Recently, electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even drive independently. Classifying the motor imagery tasks in these signals is therefore important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult because of the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on the sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an artificial neural network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left hand, right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for classifying MI-EEG signals, and it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
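
    The networks above are driven by spatial and frequency characteristics extracted from the raw EEG. The sketch below shows one common way to obtain frequency-domain features (log band power in the mu and beta bands) for trials in the BCI Competition IV-2a layout; the band choices, sampling rate, and function names are assumptions for illustration, not the authors' exact pipeline.

        # Illustrative log band-power feature extraction for 4-class MI trials
        # (BCI Competition IV-2a layout: 22 EEG channels at 250 Hz is assumed here).
        import numpy as np
        from scipy.signal import welch

        BANDS = {"mu": (8, 12), "beta": (13, 30)}   # frequency bands commonly used for MI

        def bandpower_features(trials, fs=250):
            """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
            freqs, psd = welch(trials, fs=fs, nperseg=fs, axis=-1)   # PSD per trial and channel
            feats = []
            for lo, hi in BANDS.values():
                mask = (freqs >= lo) & (freqs <= hi)
                feats.append(psd[..., mask].mean(axis=-1))           # mean power in the band
            return np.log(np.concatenate(feats, axis=-1))            # log scaling stabilizes variance

        # Example: 100 synthetic trials, 22 channels, 4-second epochs at 250 Hz.
        X = bandpower_features(np.random.randn(100, 22, 1000))       # -> (100, 44)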

    EEG Analysis Method to Detect Unspoken Answers to Questions Using MSNNs

    Brain–computer interfaces (BCIs) facilitate communication between the human brain and computational systems, additionally offering mechanisms for environmental control to enhance human life. The current study focused on the application of BCIs for communication support, especially in detecting unspoken answers to questions. Utilizing a multistage neural network (MSNN) replete with convolutional and pooling layers, the proposed method comprises a threefold approach: electroencephalogram (EEG) measurement, EEG feature extraction, and answer classification. The EEG signals of the participants are captured as they mentally respond with “yes” or “no” to the posed questions. Feature extraction is achieved through an MSNN composed of three distinct convolutional neural network models. The first model discriminates between EEG signals with and without discernible noise artifacts, whereas the subsequent two models are designated for feature extraction from EEG signals with or without such noise artifacts. Furthermore, a support vector machine is employed to classify the answers to the questions. The proposed method was validated in experiments using authentic EEG data. The mean and standard deviation values for the sensitivity and precision of the proposed method were 99.6% and 0.2%, respectively. These findings demonstrate the viability of attaining high accuracy in a BCI by preliminarily segregating the EEG signals based on the presence or absence of artifact noise, and they underscore the stability of such classification. Thus, the proposed method demonstrates the prospective advantage of separating EEG signals characterized by noise artifacts for enhanced BCI performance.
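
    A compressed sketch of the overall pipeline shape described above, assuming PyTorch and scikit-learn: a neural feature extractor followed by a support vector machine on the extracted features. The tiny one-stage CNN here is only a stand-in for the three-model MSNN, and all names and sizes are illustrative.

        # Hypothetical CNN-feature-extractor + SVM pipeline for yes/no answer classification.
        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.svm import SVC

        class TinyFeatureCNN(nn.Module):
            """Stand-in for the MSNN feature-extraction stage (one small conv/pool stack)."""
            def __init__(self, n_channels=16, n_features=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(8),
                    nn.Flatten(),
                    nn.Linear(32 * 8, n_features),
                )

            def forward(self, x):          # x: (batch, n_channels, n_samples)
                return self.net(x)

        extractor = TinyFeatureCNN()       # in practice this stage would be trained first
        svm = SVC(kernel="rbf")

        epochs = torch.randn(60, 16, 500)              # 60 synthetic epochs, 16 channels, 500 samples
        labels = np.random.randint(0, 2, size=60)      # 0 = "no", 1 = "yes" (synthetic)
        with torch.no_grad():
            feats = extractor(epochs).numpy()          # CNN features feed the SVM
        svm.fit(feats, labels)
        print(svm.predict(feats[:5]))                  # classify the first few epochs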

    GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals

    Towards developing effective and efficient brain-computer interface (BCI) systems, precise decoding of brain activity measured by electroencephalography (EEG) is highly demanded. Traditional works classify EEG signals without considering the topological relationship among electrodes; however, neuroscience research has increasingly emphasized network patterns of brain dynamics, so the Euclidean structure of the electrodes might not adequately reflect the interactions between signals. To fill this gap, a novel deep learning framework based on graph convolutional neural networks (GCNs) was presented to enhance the decoding performance of raw EEG signals during different types of motor imagery (MI) tasks while incorporating the functional topological relationship of the electrodes. The graph Laplacian of the EEG electrodes was built from the absolute Pearson correlation matrix of the overall signals. The GCNs-Net, constructed from graph convolutional layers, learns generalized features; the subsequent pooling layers reduce dimensionality, and the fully connected softmax layer derives the final prediction. The introduced approach has been shown to converge for both personalized and group-wise predictions. Compared with existing studies, it achieved the highest averaged accuracy, 93.056% and 88.57% (PhysioNet Dataset) and 96.24% and 80.89% (High Gamma Dataset) at the subject and group level, respectively, which suggests adaptability and robustness to individual variability. Moreover, the performance was stably reproducible across repeated cross-validation experiments. To conclude, the GCNs-Net filters EEG signals based on the functional topological relationship of the electrodes, which enables it to decode features relevant to motor imagery.
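
    The graph construction step described here is straightforward to reproduce in outline: take the absolute Pearson correlation between electrode signals as the adjacency matrix and form the normalized graph Laplacian that the graph convolutions then operate on. The sketch below, assuming NumPy and a 64-channel montage for the example, covers only that step, not the full GCNs-Net.

        # Build a normalized graph Laplacian for EEG electrodes from |Pearson correlation|.
        import numpy as np

        def eeg_graph_laplacian(eeg):
            """eeg: (n_channels, n_samples) -> L = I - D^{-1/2} A D^{-1/2}."""
            adjacency = np.abs(np.corrcoef(eeg))          # absolute Pearson correlation matrix
            np.fill_diagonal(adjacency, 0.0)              # drop self-connections
            degree = adjacency.sum(axis=1)
            d_inv_sqrt = np.diag(1.0 / np.sqrt(degree + 1e-12))
            return np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt

        # Example: a 64-channel EEG segment (PhysioNet montage size), 640 samples.
        L = eeg_graph_laplacian(np.random.randn(64, 640))
        print(L.shape)                                    # (64, 64)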