
    GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals

    Towards developing effective and efficient brain-computer interface (BCI) systems, precise decoding of brain activity measured by electroencephalography (EEG) is in high demand. Traditional works classify EEG signals without considering the topological relationships among electrodes. However, neuroscience research has increasingly emphasized network patterns of brain dynamics, so the Euclidean arrangement of electrodes might not adequately reflect the interactions between signals. To fill this gap, a novel deep learning framework based on graph convolutional neural networks (GCNs) was presented to enhance the decoding performance of raw EEG signals during different types of motor imagery (MI) tasks while incorporating the functional topological relationships of the electrodes. The graph Laplacian of the EEG electrodes was built from the absolute Pearson correlation matrix of the overall signals. The GCNs-Net, constructed from graph convolutional layers, learns generalized features; the subsequent pooling layers reduce dimensionality, and a fully connected softmax layer derives the final prediction. The introduced approach has been shown to converge for both personalized and group-wise predictions. It has achieved the highest averaged accuracies, 93.056% and 88.57% (PhysioNet Dataset) and 96.24% and 80.89% (High Gamma Dataset), at the subject and group level, respectively, compared with existing studies, which suggests adaptability and robustness to individual variability. Moreover, the performance was stably reproducible across repeated cross-validation experiments. To conclude, the GCNs-Net filters EEG signals based on the functional topological relationship and manages to decode features relevant to brain motor imagery.
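
    The pipeline described above (an electrode graph weighted by absolute Pearson correlations, graph convolutional layers, pooling, and a softmax classifier) can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed layer sizes, with random data standing in for MI trials; it is not the authors' GCNs-Net implementation.

```python
# Minimal GCN-style EEG decoder sketch (assumptions: layer sizes, 2 graph
# convolutions, mean pooling over electrodes; not the published GCNs-Net code).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def electrode_adjacency(eeg):
    """Absolute Pearson correlation between channels over all trials and time."""
    chans = eeg.transpose(1, 0, 2).reshape(eeg.shape[1], -1)  # (channels, trials*time)
    adj = np.abs(np.corrcoef(chans))
    np.fill_diagonal(adj, 0.0)
    return adj

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return torch.tensor(d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :], dtype=torch.float32)

class GraphConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x, a_hat):
        # x: (batch, channels, features); propagate features along the electrode graph
        return F.relu(self.lin(torch.einsum("ij,bjf->bif", a_hat, x)))

class GCNDecoder(nn.Module):
    def __init__(self, a_hat, time_len, n_classes, hidden=64):
        super().__init__()
        self.register_buffer("a_hat", a_hat)
        self.gc1 = GraphConv(time_len, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.gc2(self.gc1(x, self.a_hat), self.a_hat)
        h = h.mean(dim=1)      # pool over electrodes
        return self.fc(h)      # logits; softmax is applied by the loss

# Toy usage: random data in place of MI trials (32 trials, 64 channels, 160 samples).
eeg = np.random.randn(32, 64, 160).astype(np.float32)
a_hat = normalized_adjacency(electrode_adjacency(eeg))
model = GCNDecoder(a_hat, time_len=160, n_classes=4)
print(model(torch.tensor(eeg)).shape)  # torch.Size([32, 4])
```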

    Deep Learning Model With Adaptive Regularization for EEG-Based Emotion Recognition Using Temporal and Frequency Features

    Since EEG signal acquisition is non-invasive and portable, it is convenient for a wide range of applications. Emotion recognition based on a Brain-Computer Interface (BCI) is an important active BCI paradigm for inferring a person's inner state. There are extensive studies on emotion recognition, most of which rely heavily on multi-stage, complex, handcrafted EEG feature extraction and classifier design. In this paper, we propose a hybrid multi-input deep model combining convolutional neural networks (CNNs) and bidirectional Long Short-Term Memory (Bi-LSTM): the CNNs extract time-invariant features from raw EEG data, and the Bi-LSTM allows long-range lateral interactions between features. First, we propose a novel hybrid multi-input deep learning approach for emotion recognition from raw EEG signals. Second, in the first layers, we use two CNNs with small and large filter sizes to extract temporal and frequency features from each 62-channel, 2-s raw EEG epoch and merge them with the differential entropy of the EEG frequency bands. Third, we apply an adaptive regularization method over each parallel CNN layer to account for the spatial information of the EEG acquisition electrodes. The proposed method is evaluated on two public datasets, SEED and DEAP. Our results show that our technique significantly improves accuracy compared with a baseline that uses no adaptive regularization.
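
    A minimal PyTorch sketch of such a hybrid multi-input model is given below: two parallel 1-D CNN branches with small and large kernels over a 62-channel, 2-s epoch, a Bi-LSTM over the merged feature sequence, and late fusion with differential entropy (DE) features. Kernel sizes, layer widths, the DE dimensionality, and the omission of the adaptive regularization term are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of a hybrid multi-input CNN + Bi-LSTM emotion classifier.
# Assumptions: 62 channels at 200 Hz (2-s epoch = 400 samples), DE over 5 bands
# per channel, 3 emotion classes; the paper's adaptive regularization is omitted.
import torch
import torch.nn as nn

class HybridCNNBiLSTM(nn.Module):
    def __init__(self, n_channels=62, n_de_feats=62 * 5, n_classes=3, hidden=64):
        super().__init__()
        # Small-kernel branch: fine temporal detail.
        self.cnn_small = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(), nn.MaxPool1d(4),
        )
        # Large-kernel branch: slower, frequency-like structure.
        self.cnn_large = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=65, padding=32),
            nn.ReLU(), nn.MaxPool1d(4),
        )
        self.bilstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.de_proj = nn.Linear(n_de_feats, 2 * hidden)
        self.fc = nn.Linear(4 * hidden, n_classes)

    def forward(self, raw, de):
        # raw: (batch, channels, time); de: (batch, n_de_feats)
        feats = torch.cat([self.cnn_small(raw), self.cnn_large(raw)], dim=1)
        out, _ = self.bilstm(feats.transpose(1, 2))    # (batch, steps, 2*hidden)
        merged = torch.cat([out[:, -1, :], self.de_proj(de)], dim=1)
        return self.fc(merged)

# Toy usage with random tensors in place of SEED/DEAP epochs.
model = HybridCNNBiLSTM()
raw = torch.randn(8, 62, 400)     # 2 s at an assumed 200 Hz
de = torch.randn(8, 62 * 5)       # DE over 5 bands per channel
print(model(raw, de).shape)       # torch.Size([8, 3])
```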

    Graph Attention Based Spatial Temporal Network for EEG Signal Representation

    Graph attention network (GAT) based architectures have proved powerful at implicitly learning relationships between adjacent nodes in a graph. For electroencephalogram (EEG) signals, however, it is also essential to highlight the electrode locations, or underlying brain regions, that are active when a particular event-related potential (ERP) is evoked. Moreover, it is often important to identify the corresponding EEG time segments within which the ERP is activated. We introduce a GAT Inspired Spatial Temporal (GIST) network that uses a multilayer GAT as the base for three attention blocks: edge attention, followed by node attention and temporal attention layers, which focus on the relevant brain regions and time windows for better EEG signal classification performance and interpretability. We assess the capability of the architecture on the publicly available Transcranial Electrical Stimulation (TES), neonatal pain (NP) and DREAMER EEG datasets, on which the model achieves competitive performance. Most importantly, the paper presents attention visualisations and suggests ways of interpreting them for EEG signal understanding.
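
    The three-stage attention scheme (edge attention inside a GAT layer over electrodes, node attention pooling over electrodes, and temporal attention pooling over windows) can be sketched as follows. The single-head GAT layer, block sizes, window count, and the fully connected electrode graph are assumptions for illustration and do not reproduce the published GIST architecture.

```python
# Rough sketch of edge / node / temporal attention blocks for EEG classification.
# Assumptions: per-window feature vectors per electrode, fully connected graph,
# single-head attention; not the authors' released GIST code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Single-head graph attention: edge attention coefficients over electrodes."""
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.w = nn.Linear(in_feats, out_feats, bias=False)
        self.attn = nn.Linear(2 * out_feats, 1, bias=False)

    def forward(self, x):
        # x: (batch, nodes, in_feats); a fully connected electrode graph is assumed
        h = self.w(x)
        n = h.size(1)
        pairs = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                           h.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        alpha = torch.softmax(F.leaky_relu(self.attn(pairs)).squeeze(-1), dim=-1)
        return F.elu(torch.bmm(alpha, h)), alpha     # alpha holds the edge attentions

class GISTSketch(nn.Module):
    def __init__(self, n_nodes=14, feat_dim=128, n_classes=2, hidden=32):
        super().__init__()
        self.gat = SimpleGATLayer(feat_dim, hidden)
        self.node_score = nn.Linear(hidden, 1)   # node attention: which electrodes
        self.time_score = nn.Linear(hidden, 1)   # temporal attention: which windows
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, windows, nodes, feat_dim) -- per-window EEG features
        b, t, n, f = x.shape
        h, _ = self.gat(x.reshape(b * t, n, f))
        node_w = torch.softmax(self.node_score(h), dim=1)
        h = (node_w * h).sum(dim=1).reshape(b, t, -1)
        time_w = torch.softmax(self.time_score(h), dim=1)
        return self.fc((time_w * h).sum(dim=1))

# Toy usage: 4 trials, 10 time windows, 14 electrodes (DREAMER-like montage).
model = GISTSketch()
x = torch.randn(4, 10, 14, 128)
print(model(x).shape)    # torch.Size([4, 2])
```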