
    A Convolutional Spiking Network for Gesture Recognition in Brain-Computer Interfaces

    Brain-computer interfaces are being explored for a wide variety of therapeutic applications. Typically, this involves measuring and analyzing continuous-time electrical brain activity via techniques such as electrocorticography (ECoG) or electroencephalography (EEG) to drive external devices. However, due to the inherent noise and variability in the measurements, the analysis of these signals is challenging and requires offline processing with significant computational resources. In this paper, we propose a simple yet efficient machine learning-based approach for the exemplary problem of hand gesture classification based on brain signals. Our hybrid approach uses a convolutional spiking neural network employing a bio-inspired, event-driven synaptic plasticity rule for unsupervised feature learning of the measured analog signals, which are encoded in the spike domain. We demonstrate that this approach generalizes to different subjects with both EEG and ECoG data and achieves superior accuracy, in the range of 92.74-97.07%, in identifying different hand gesture classes and motor imagery tasks.
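
    The abstract does not give implementation details; as a rough illustration of the general idea it describes (analog signals encoded as spike events, with an event-driven plasticity rule updating synaptic weights in an unsupervised fashion), here is a minimal NumPy sketch. The encoding threshold, time constants, and learning rates are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def delta_encode(signal, threshold=0.05):
    """Encode an analog 1-D signal as +1/-1 spike events whenever the change
    since the last event exceeds a threshold (simple delta modulation;
    the threshold value is illustrative)."""
    spikes = np.zeros_like(signal, dtype=np.int8)
    ref = signal[0]
    for t, x in enumerate(signal):
        if x - ref >= threshold:
            spikes[t], ref = 1, x
        elif ref - x >= threshold:
            spikes[t], ref = -1, x
    return spikes

def stdp_update(w, pre_spike_t, post_spike_t,
                a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes the
    postsynaptic spike, depress otherwise. Constants are illustrative."""
    dt = post_spike_t - pre_spike_t
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return np.clip(w, 0.0, 1.0)

# Example: encode a noisy 10 Hz "EEG-like" trace and apply one STDP update.
t = np.linspace(0, 1, 500)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
events = delta_encode(eeg_like)
w = stdp_update(0.5, pre_spike_t=12.0, post_spike_t=15.0)
print(events[:20], w)
```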

    An Approach of One-vs-Rest Filter Bank Common Spatial Pattern and Spiking Neural Networks for Multiple Motor Imagery Decoding

    Motor imagery (MI) is a typical BCI paradigm and has been widely applied in many areas (e.g., brain-driven wheelchairs and motor-function rehabilitation training). Although significant progress has been made, decoding multiple motor imagery classes remains unsatisfactory. To address this challenging issue, a segment of the electroencephalogram was first extracted and preprocessed. Second, filter bank common spatial patterns (FBCSP) with a one-vs-rest (OVR) strategy were applied to extract spatio-temporal-frequency features of multiple MI classes. Third, the F-score was employed to rank and select these features. Finally, the selected features were fed to a spiking neural network (SNN) for classification. Evaluation was conducted on two public multiple-MI datasets (Dataset IIIa of BCI Competition III and Dataset IIa of BCI Competition IV). Experimental results showed that the average accuracy of the proposed framework reached 90.09% (kappa: 0.868) and 81.33% (kappa: 0.751) on the two public datasets, respectively. The achieved performance (accuracy and kappa) was comparable to the best of the compared methods. This study demonstrates that the proposed method can be used as an alternative approach for multiple MI decoding and provides a potential solution for online multiple MI detection.
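
    The FBCSP stage described above (a band-pass filter bank, CSP per band in a one-vs-rest arrangement, then log-variance features) is a standard pipeline; below is a minimal NumPy/SciPy sketch of the filtering and CSP steps only. The band edges, number of CSP components, and array shapes are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(trials, low, high, fs=250, order=4):
    """Band-pass filter trials shaped (n_trials, n_channels, n_samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(class_trials, rest_trials, n_components=4):
    """One-vs-rest CSP: solve the generalized eigenproblem C1 w = lambda (C1 + C2) w
    and keep the filters at both ends of the eigenvalue spectrum."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    c1, c2 = mean_cov(class_trials), mean_cov(rest_trials)
    eigvals, eigvecs = eigh(c1, c1 + c2)
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_components // 2], order[-n_components // 2:]]
    return eigvecs[:, picks].T                      # (n_components, n_channels)

def log_var_features(trials, W):
    """Project trials through CSP filters and take normalized log-variance."""
    projected = np.einsum("fc,ncs->nfs", W, trials)
    var = projected.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Illustrative use with random data: 4-class OVR over a small filter bank.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 22, 500))              # trials x channels x samples
y = rng.integers(0, 4, size=40)
bands = [(4, 8), (8, 12), (12, 16)]                 # assumed band edges (Hz)
features = []
for low, high in bands:
    Xf = bandpass(X, low, high)
    for cls in range(4):
        W = csp_filters(Xf[y == cls], Xf[y != cls])
        features.append(log_var_features(Xf, W))
features = np.concatenate(features, axis=1)         # would then be ranked and fed to the SNN
print(features.shape)
```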

    Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces

    With the advent of advanced machine learning methods, the performance of brain-computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, is hampered by a tedious experimental setup, frequent data loss due to artifacts, and the time required to record the large numbers of trials needed to take advantage of deep learning classifiers. Some studies have tried to address this issue by generating artificial EEG signals. However, some of these methods struggle to retain the prominent features or biomarkers of the signal, while other deep-learning-based generative methods require a huge number of training samples and can typically only augment one category or class of data per training session. There is therefore a need for a generative model that can produce multi-class synthetic EEG samples from as few available trials as possible while retaining the signal's biomarkers. Since the EEG signal reflects the accumulated action potentials of neuronal populations beneath the scalp, and since a spiking neural network (SNN), a more biologically plausible artificial neural network, also communicates via spikes, we propose an SNN-based approach using surrogate-gradient learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed to augment motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. The artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhanced MI classification performance.
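
    Surrogate-gradient learning replaces the non-differentiable spike threshold with a smooth surrogate in the backward pass so the SNN can be trained with backpropagation. The sketch below shows the general pattern in PyTorch (a custom autograd function with a fast-sigmoid surrogate driving a leaky integrate-and-fire step); the surrogate shape, slope, and neuron parameters are illustrative assumptions, not the paper's architecture.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate gradient
    in the backward pass (the slope is an illustrative hyperparameter)."""
    slope = 10.0

    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.slope * membrane.abs() + 1.0) ** 2
        return grad_output * surrogate

def lif_step(x, mem, decay=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: decay, integrate input,
    spike via the surrogate function, then reset by subtraction."""
    mem = decay * mem + x
    spk = SurrogateSpike.apply(mem - threshold)
    mem = mem - spk * threshold
    return spk, mem

# Illustrative forward pass over a short spike-encoded sequence.
T, batch, features = 50, 8, 64
inputs = torch.rand(T, batch, features)
mem = torch.zeros(batch, features)
spikes = []
for t in range(T):
    spk, mem = lif_step(inputs[t], mem)
    spikes.append(spk)
print(torch.stack(spikes).mean())   # average firing rate
```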

    Review of medical data analysis based on spiking neural networks

    Medical data mainly comprise various types of biomedical signals and medical images, which physicians use to assess patients' health. However, interpreting medical data requires substantial human effort and is prone to misjudgment, so many researchers apply neural networks and deep learning to classify and analyze medical data, which can improve clinicians' efficiency and accuracy and support early disease detection and diagnosis, giving the approach broad application prospects. Traditional neural networks, however, suffer from drawbacks such as high energy consumption and high latency (slow computation). This paper presents recent research on signal classification and disease diagnosis based on a third-generation neural network, the spiking neural network, applied to medical data including EEG, ECG, and EMG signals as well as MRI images. The advantages and disadvantages of spiking neural networks compared with traditional networks are summarized, and future research directions are discussed.

    Exploring Two Novel Features for EEG-based Brain-Computer Interfaces: Multifractal Cumulants and Predictive Complexity

    In this paper, we introduce two new features for the design of electroencephalography (EEG) based Brain-Computer Interfaces (BCI): one feature based on multifractal cumulants, and one feature based on the predictive complexity of the EEG time series. The multifractal cumulants feature measures the signal regularity, while the predictive complexity measures the difficulty of predicting the future of the signal from its past, hence a measure of how complex it is. We have conducted an evaluation of the performance of these two novel features on EEG data corresponding to motor imagery. We also compared them to the most successful features used in the BCI field, namely the band-power features. We evaluated these three kinds of features and their combinations on EEG signals from 13 subjects. The results show that our novel features can lead to BCI designs with improved classification performance, notably when combining all three kinds of features (band-power, multifractal cumulants, predictive complexity) together.
    Comment: Updated with more subjects. Separated out the band-power comparisons into a companion article after reviewer feedback. Source code and the companion article are available at http://nicolas.brodu.numerimoire.net/en/recherche/publication
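
    Band-power features, the baseline the paper compares against, are typically computed as the (log) power of each channel within a frequency band. A minimal sketch using SciPy's Welch PSD estimate is below; the band edges, sampling rate, and epoch shapes are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def band_power_features(epochs, fs=250, bands=((8, 12), (12, 30))):
    """Compute log band-power per channel and band from epochs shaped
    (n_epochs, n_channels, n_samples) using Welch's PSD estimate."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs <= high)
        # integrate the PSD over the band, then take the log
        power = np.trapz(psd[..., mask], freqs[mask], axis=-1)
        feats.append(np.log(power))
    return np.concatenate(feats, axis=-1)   # (n_epochs, n_channels * n_bands)

# Illustrative use on random "EEG" epochs.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((20, 8, 1000))   # 20 epochs, 8 channels, 4 s at 250 Hz
X = band_power_features(epochs)
print(X.shape)   # (20, 16)
```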

    Online multiclass EEG feature extraction and recognition using modified convolutional neural network method

    Many techniques have been introduced to improve both brain-computer interface (BCI) steps: feature extraction and classification. One of the emerging trends in this field is the implementation of deep learning algorithms, yet only a limited number of studies have investigated deep learning techniques for electroencephalography (EEG) feature extraction and classification. This work applies deep learning to both stages. The paper proposes a modified convolutional neural network (CNN) feature extractor-classifier algorithm to recognize four different EEG motor imagery (MI) classes. In addition, a four-class linear discriminant analysis (LDA) classifier model was built and compared to the proposed CNN model. The proposed approach achieved 92.8% accuracy on one four-class EEG MI set and 85.7% on another. The results showed that the proposed CNN model outperforms multi-class linear discriminant analysis with accuracy increases of 28.6% and 17.9% for the two MI sets, respectively. Moreover, majority voting over five repetitions introduced an accuracy advantage of 15% and 17.2% for the two EEG sets, compared with single trials. This confirms that increasing the number of trials for the same MI gesture improves recognition accuracy.
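
    The majority-voting step reported above (combining the classifier's predictions over five repetitions of the same imagined movement into one decision) can be sketched as follows; the classifier itself is abstracted away, and the class count and prediction layout are assumptions for illustration.

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-repetition class predictions (shape: n_groups x n_repeats)
    into one label per group by taking the most frequent prediction."""
    n_classes = predictions.max() + 1
    counts = np.apply_along_axis(np.bincount, 1, predictions, minlength=n_classes)
    return counts.argmax(axis=1)

# Illustrative example: 3 MI gestures, 5 repetitions each, noisy single-trial
# predictions from some classifier (e.g. the CNN described above).
single_trial_preds = np.array([
    [2, 2, 1, 2, 2],   # true class 2 -> vote gives 2
    [0, 3, 0, 0, 1],   # true class 0 -> vote gives 0
    [1, 1, 2, 1, 1],   # true class 1 -> vote gives 1
])
print(majority_vote(single_trial_preds))   # [2 0 1]
```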

    Unified Framework for Identity and Imagined Action Recognition from EEG patterns

    We present a unified deep learning framework for the recognition of user identity and of imagined actions, based on electroencephalography (EEG) signals, for application as a brain-computer interface. Our solution exploits a novel shifted-subsampling preprocessing step as a form of data augmentation, and a matrix representation to encode the inherent local spatial relationships of multi-electrode EEG signals. The resulting image-like data are then fed to a convolutional neural network to process the local spatial dependencies, and subsequently analyzed by a bidirectional long short-term memory (LSTM) module to capture temporal relationships. Our solution is compared against several state-of-the-art methods, showing comparable or superior performance on different tasks. Specifically, we achieve accuracy levels above 90% for both action and user classification tasks. In terms of user identification, we reach a 0.39% equal error rate in the case of known users and gestures, and 6.16% in the more challenging case of unknown users and gestures. Preliminary experiments are also conducted to direct future work towards everyday applications relying on a reduced set of EEG electrodes.
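
    The CNN-plus-bidirectional-LSTM arrangement described above (spatial features extracted from an image-like electrode layout, followed by temporal modeling) follows a common pattern; a minimal PyTorch sketch of that pattern is below. The layer sizes, electrode-grid dimensions, and number of classes are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    """Per-timestep 2-D CNN over an electrode grid, followed by a
    bidirectional LSTM over time and a linear classification head."""
    def __init__(self, hidden=64, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, grid_h, grid_w) image-like EEG frames
        b, t, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, 32)
        out, _ = self.lstm(feats)                       # (b, t, 2*hidden)
        return self.head(out[:, -1])                    # classify from last step

# Illustrative forward pass on random data: 8 trials, 100 timesteps, 9x9 grid.
model = CnnBiLstm()
frames = torch.randn(8, 100, 9, 9)
logits = model(frames)
print(logits.shape)                                     # torch.Size([8, 4])
```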