
    EEG Classification based on Image Configuration in Social Anxiety Disorder

    The problem of detecting the presence of Social Anxiety Disorder (SAD) using Electroencephalography (EEG) for classification has seen limited study and is addressed with a new approach that seeks to exploit knowledge of the EEG sensor spatial configuration. Two classification models, one which ignores the configuration (model 1) and one that exploits it with different interpolation methods (model 2), are studied. Performance of these two models is examined on 34 EEG data channels, each consisting of five frequency bands and further decomposed with a filter bank. The data are collected from 64 subjects consisting of healthy controls and patients with SAD. Our hypothesis that model 2 would significantly outperform model 1 is borne out by the results, with accuracy 6-7% higher for model 2 for each machine learning algorithm we investigated. Convolutional Neural Networks (CNN) were found to provide much better performance than SVM and kNN.
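    The spatial-configuration idea in model 2 amounts to interpolating per-channel values onto a 2D image using the electrode layout, so that an image classifier such as a CNN can exploit sensor geometry. Below is a minimal, hypothetical sketch of that step; the electrode coordinates, grid size, and band-power values are placeholder assumptions, not the authors' setup.

```python
# Sketch: project per-channel band-power values onto a 2D image using the
# electrode layout, so a CNN can exploit the sensor spatial configuration.
# Illustrative only; electrode coordinates and band values are made up.
import numpy as np
from scipy.interpolate import griddata

def channels_to_image(values, xy_positions, grid_size=32, method="cubic"):
    """Interpolate one value per channel onto a grid_size x grid_size image.

    values       : (n_channels,) e.g. alpha-band power per electrode
    xy_positions : (n_channels, 2) 2D-projected electrode coordinates
    """
    xs = np.linspace(xy_positions[:, 0].min(), xy_positions[:, 0].max(), grid_size)
    ys = np.linspace(xy_positions[:, 1].min(), xy_positions[:, 1].max(), grid_size)
    grid_x, grid_y = np.meshgrid(xs, ys)
    image = griddata(xy_positions, values, (grid_x, grid_y), method=method)
    return np.nan_to_num(image)  # fill points outside the convex hull with 0

# Example with 34 channels and random placeholder data
rng = np.random.default_rng(0)
positions = rng.uniform(-1, 1, size=(34, 2))
band_power = rng.random(34)
img = channels_to_image(band_power, positions)  # shape (32, 32), ready for a CNN
```

    Cubic interpolation is used here; comparing interpolation methods, as the abstract describes, would amount to varying the method argument.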

    Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

    Diverse Feature Blend Based on Filter-Bank Common Spatial Pattern and Brain Functional Connectivity for Multiple Motor Imagery Detection

    Motor imagery (MI) based brain-computer interface (BCI) is a research hotspot and has attracted a lot of attention. Within this research topic, multiple-MI classification is a challenge due to the difficulties caused by time-varying spatial features across different individuals. To deal with this challenge, we fused brain functional connectivity (BFC) and one-versus-the-rest filter-bank common spatial pattern (OVR-FBCSP) features to improve the robustness of classification. The BFC features were extracted by the phase locking value (PLV), representing the brain inter-regional interactions relevant to MI, whilst OVR-FBCSP was used to extract the spatial-frequency features related to MI. These diverse features were then fed into a multi-kernel relevance vector machine (MK-RVM). A dataset with three motor imagery tasks (left hand MI, right hand MI, and feet MI) was used to assess the proposed method. Experimental results not only showed that the cascade structure of diverse feature fusion and MK-RVM achieved satisfactory classification performance (average accuracy: 83.81%, average kappa: 0.76), but also demonstrated that BFC plays a supplementary role in MI classification. Moreover, the proposed method has the potential to be integrated into online multiple-MI detection owing to the strong time-efficiency of the RVM.
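    The connectivity side of this feature blend, the phase locking value, has a compact definition: band-pass filter two channels, extract instantaneous phases with the Hilbert transform, and take the magnitude of the mean phase-difference phasor. The sketch below illustrates that computation; the sampling rate, band edges, and test signals are placeholder assumptions.

```python
# Sketch: phase locking value (PLV) between two EEG channels, the kind of
# brain-functional-connectivity feature described above (band edges and
# sampling rate are placeholder assumptions).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs=250.0, band=(8.0, 13.0)):
    """PLV = |mean(exp(i * (phase_x - phase_y)))| in a given frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x_f, y_f = filtfilt(b, a, x), filtfilt(b, a, y)
    phase_diff = np.angle(hilbert(x_f)) - np.angle(hilbert(y_f))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Example: PLV close to 1 for phase-locked signals, lower for unrelated noise
t = np.arange(0, 2.0, 1 / 250.0)
sig_a = np.sin(2 * np.pi * 10 * t)
sig_b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * np.random.randn(t.size)
print(plv(sig_a, sig_b))
```

    In a pipeline like the one described above, PLV values for all channel pairs would be concatenated with the OVR-FBCSP features before classification.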

    EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features

    Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and has demonstrated superior performance. In this paper, for the first time, it is applied to BCI regression problems, an important category of BCI applications. More specifically, we propose a new feature extraction approach for Electroencephalogram (EEG) based BCI regression problems: a spatial filter is first used to increase the signal quality of the EEG trials and to reduce the dimensionality of the covariance matrices, and then Riemannian tangent space features are extracted. We validate the performance of the proposed approach on reaction time estimation from EEG signals measured in a large-scale sustained-attention psychomotor vigilance task, and show that, compared with traditional power-band features, the tangent space features can reduce the root mean square estimation error by 4.30-8.30% and increase the estimation correlation coefficient by 6.59-11.13%.
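    The core of the tangent-space feature extraction can be stated briefly: estimate a spatial covariance matrix per trial, map each matrix to the tangent space at a reference point via the matrix logarithm, and vectorize the upper triangle. The sketch below follows that recipe; for brevity it uses the arithmetic mean of the covariances as the reference point (the usual Riemannian treatment uses the geometric mean), and the data are random placeholders.

```python
# Sketch: tangent-space features from EEG spatial covariance matrices,
# following the general Riemannian-geometry recipe described above.
import numpy as np
from scipy.linalg import logm, inv, sqrtm

def tangent_space_features(covs):
    """covs: (n_trials, n_ch, n_ch) SPD matrices -> (n_trials, n_ch*(n_ch+1)/2)."""
    c_ref = covs.mean(axis=0)                      # reference covariance (simplification)
    w = inv(sqrtm(c_ref))                          # whitening: C_ref^{-1/2}
    iu = np.triu_indices(covs.shape[1])
    feats = []
    for c in covs:
        s = logm(w @ c @ w)                        # log-map to the tangent space
        s = s * (np.sqrt(2) - (np.sqrt(2) - 1) * np.eye(len(s)))  # weight off-diagonals by sqrt(2)
        feats.append(np.real(s[iu]))
    return np.array(feats)

# Example: covariances of random EEG-like trials (placeholder data)
rng = np.random.default_rng(1)
trials = rng.standard_normal((20, 8, 128))         # 20 trials, 8 channels, 128 samples
covs = np.einsum("tcs,tds->tcd", trials, trials) / 128
X = tangent_space_features(covs)                   # feature matrix for a regressor
```

    The resulting feature matrix X would then feed an ordinary regressor (e.g. ridge regression) to predict reaction time.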

    Fully portable and wireless universal brain-machine interfaces enabled by flexible scalp electronics and deep-learning algorithm

    Variation in human brains creates difficulty in implementing electroencephalography (EEG) into universal brain-machine interfaces (BMI). Conventional EEG systems typically suffer from motion artifacts, extensive preparation time, and bulky equipment, while existing EEG classification methods require training on a per-subject or per-session basis. Here, we introduce a fully portable, wireless, flexible scalp electronic system incorporating a set of dry electrodes and a flexible membrane circuit. Time-domain analysis using convolutional neural networks allows for accurate, real-time classification of steady-state visually evoked potentials on the occipital lobe. Simultaneous comparison of EEG signals with two commercial systems captures the improved performance of the flexible electronics, with a significant reduction of noise and electromagnetic interference. The two-channel scalp electronic system achieves a high information transfer rate (122.1 ± 3.53 bits per minute) with six human subjects, allowing for wireless, real-time, universal EEG classification for an electronic wheelchair, motorized vehicle, and keyboard-less presentation.
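    Classifying steady-state visually evoked potentials directly in the time domain typically means feeding short raw EEG windows into a compact convolutional network. The sketch below shows one plausible shape for such a model; the layer sizes, window length, and class count are placeholder assumptions, not the architecture reported in the paper.

```python
# Sketch: a small 1-D CNN that classifies raw two-channel occipital EEG
# windows into SSVEP target classes (time-domain approach; all sizes are
# placeholder assumptions).
import torch
import torch.nn as nn

class SSVEPNet(nn.Module):
    def __init__(self, n_channels=2, n_samples=250, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=32, padding=16),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, padding=8),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example forward pass on a random 1-second window at 250 Hz
model = SSVEPNet()
logits = model(torch.randn(8, 2, 250))    # -> (8, 4) class scores
```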

    Five-Class SSVEP Response Detection using Common-Spatial Pattern (CSP)-SVM Approach

    Brain-computer interface (BCI) technologies significantly facilitate the interaction between physically impaired people and their surroundings. In electroencephalography (EEG) based BCIs, a variety of physiological responses including P300, motor imagery, movement-related potential, steady-state visual evoked potential (SSVEP) and slow cortical potential have been utilized. Because of their superior signal-to-noise ratio (SNR) together with a quicker information transfer rate (ITR), interest in SSVEP-based BCIs is growing significantly. This paper presents feature extraction and classification frameworks to detect five-class EEG-SSVEP responses. The common spatial pattern (CSP) has been employed to extract features from the SSVEP responses, and these features have been classified with a support vector machine (SVM). The proposed architecture achieved a highest classification accuracy of 88.3%. The experimental results show that the proposed architecture could be utilized for the detection of SSVEP responses to develop BCI applications.
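    The CSP-SVM recipe is straightforward: learn spatial filters that maximize the variance ratio between classes, take the log-variance of the filtered trials as features, and classify with an SVM. A minimal hand-rolled sketch is below; it shows the binary case (a five-class problem would apply it one-vs-rest and concatenate the features), and all signals are random placeholders rather than SSVEP recordings.

```python
# Sketch: common spatial pattern (CSP) filters with log-variance features
# feeding an SVM, the general CSP-SVM recipe named above (binary case,
# placeholder data).
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples) -> (2*n_pairs, n_channels)."""
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)                   # generalized eigenvalue problem
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def log_var_features(trials, w):
    z = np.einsum("fc,tcs->tfs", w, trials)          # spatially filter each trial
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(2)
class_a = rng.standard_normal((30, 8, 256))
class_b = 1.5 * rng.standard_normal((30, 8, 256))
w = csp_filters(class_a, class_b)
X = np.vstack([log_var_features(class_a, w), log_var_features(class_b, w)])
y = np.array([0] * 30 + [1] * 30)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```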

    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering based approaches require extracting various domain-knowledge related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and mining inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is concatenated to integrate contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms the classical methods with regard to both emotional dimensions, Valence and Arousal.
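    The hybrid design pairs a per-frame CNN with an RNN that runs over the sequence of frames. The sketch below mirrors that structure on a band-by-electrode-grid "frame cube"; the grid size, number of bands, and layer widths are placeholder assumptions, not the configuration used on DEAP.

```python
# Sketch: a CNN front-end applied to each frame of a channel-by-frequency
# "frame cube", followed by an RNN over the frame sequence, mirroring the
# hybrid design described above (all dimensions are placeholder assumptions).
import torch
import torch.nn as nn

class CnnRnnEmotion(nn.Module):
    def __init__(self, n_bands=4, grid=9, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame feature extractor
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(3),
            nn.Flatten(),                             # -> 32 * 3 * 3 features per frame
        )
        self.rnn = nn.GRU(32 * 9, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)      # e.g. low/high valence

    def forward(self, x):                             # x: (batch, frames, bands, H, W)
        b, f = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, f, -1)
        _, h = self.rnn(feats)                        # last hidden state summarizes the trial
        return self.head(h[-1])

# Example: batch of 8 trials, 20 frames of a 4-band 9x9 electrode grid
model = CnnRnnEmotion()
scores = model(torch.randn(8, 20, 4, 9, 9))           # -> (8, 2)
```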