
    EEG-based emotion classification using spiking neural networks

    A novel method that combines spiking neural networks (SNNs) with electroencephalogram (EEG) processing techniques to recognize emotional states is proposed in this paper. Three algorithms, discrete wavelet transform (DWT), variance and fast Fourier transform (FFT), are employed to extract features from the EEG signals, which are then fed to the SNN for emotion classification. Two datasets, DEAP and SEED, are used to validate the proposed method. For the former, the emotional states comprise arousal, valence, dominance and liking, each labelled as either high or low. For the latter, the emotional states are divided into three categories (negative, positive and neutral). Experimental results show that, using variance-based feature extraction with the SNN, arousal, valence, dominance and liking can be classified with accuracies of 74%, 78%, 80% and 86.27% on the DEAP dataset, and an overall accuracy of 96.67% is reached on the SEED dataset, outperforming the FFT and DWT processing methods. Moreover, this work achieves better emotion classification performance than the benchmark approaches and demonstrates the advantages of using SNNs for emotional state classification.
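
    The abstract describes a two-stage pipeline: per-channel feature extraction (variance, FFT, DWT) followed by SNN classification. The short Python sketch below illustrates only the feature-extraction step under stated assumptions; the wavelet family ('db4'), decomposition level and sampling rate are illustrative choices rather than values reported in the paper, and the SNN classifier itself is not shown.

    import numpy as np
    import pywt

    def extract_features(trial, wavelet="db4", level=4):
        """trial: (n_channels, n_samples) EEG array -> dict of per-channel features."""
        feats = {"variance": [], "fft": [], "dwt": []}
        for ch in trial:
            feats["variance"].append(np.var(ch))                  # variance feature
            power = np.abs(np.fft.rfft(ch)) ** 2                  # FFT power spectrum
            feats["fft"].append(power.mean())                     # mean spectral power
            coeffs = pywt.wavedec(ch, wavelet, level=level)       # DWT sub-bands
            feats["dwt"].append([np.mean(np.abs(c)) for c in coeffs])
        return {k: np.asarray(v) for k, v in feats.items()}

    # Example: one simulated 32-channel, 3 s trial sampled at 128 Hz (DEAP-like).
    features = extract_features(np.random.randn(32, 3 * 128))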

    Review of medical data analysis based on spiking neural networks

    Medical data mainly comprises various types of biomedical signals and medical images, which professional doctors use to judge a patient's health condition. However, interpreting medical data is labour-intensive and prone to misjudgment, so many researchers apply neural networks and deep learning to classify and study medical data, which can improve doctors' efficiency and accuracy and enable early detection and diagnosis of disease; such methods therefore have broad application prospects. Traditional neural networks, however, suffer from drawbacks such as high energy consumption and high latency (slow computation). This paper reviews recent research on signal classification and disease diagnosis based on a third-generation neural network, the spiking neural network (SNN), applied to medical data including EEG, ECG and EMG signals and MRI images. The advantages and disadvantages of SNNs compared with traditional networks are summarized, and future development directions are discussed.

    FusionSense: Emotion Classification using Feature Fusion of Multimodal Data and Deep Learning in a Brain-inspired Spiking Neural Network

    Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG) and skin temperature, along with facial expressions, voice and posture, among others, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under Leave-One-Subject-Out (LOSO) cross-validation. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, on which other deep learning methods have relied to reach this level of accuracy. In conclusion, we have demonstrated that the SNN can be successfully used to solve the emotion recognition problem with multimodal data, and we provide directions for future research utilizing SNNs for affective computing. In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one-pass training, which makes it suitable for practical and online applications. These features are not offered by other methods for this problem.
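
    As a hedged illustration of the feature-level fusion and Leave-One-Subject-Out protocol described above, the sketch below concatenates per-modality feature vectors and evaluates a classifier with scikit-learn's LeaveOneGroupOut. A generic SVM stands in for the NeuCube evolving SNN, and all feature arrays are simulated placeholders, not MAHNOB-HCI data.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    n_trials, n_subjects = 200, 20
    ecg_feats  = np.random.randn(n_trials, 10)    # e.g. heart-rate-variability features
    face_feats = np.random.randn(n_trials, 30)    # e.g. facial-expression features
    resp_feats = np.random.randn(n_trials, 5)     # e.g. respiration features

    X = np.hstack([ecg_feats, face_feats, resp_feats])     # feature-level fusion
    y = np.random.randint(0, 2, n_trials)                  # binary valence labels
    subjects = np.random.randint(0, n_subjects, n_trials)  # subject id per trial

    # LOSO: each fold holds out every trial belonging to one subject.
    scores = cross_val_score(SVC(), X, y, groups=subjects, cv=LeaveOneGroupOut())
    print("mean LOSO accuracy:", scores.mean())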

    Network perspectives on epilepsy using EEG/MEG source connectivity

    The evolution of EEG/MEG source connectivity is both a promising and a controversial advance in the characterization of epileptic brain activity. In this narrative review we elucidate the potential of this technology to provide an intuitive view of the epileptic network at its origin, i.e. the different brain regions involved in the epilepsy, without the limitation of electrodes at the scalp level. Several studies have confirmed the added value of using source connectivity to localize the seizure onset zone and irritative zone or to quantify the propagation of epileptic activity over time. Pilot studies have shown that source connectivity has the potential to yield prognostic correlates, to assist in the diagnosis of the epilepsy type even in the absence of visually noticeable epileptic activity in the EEG/MEG, and to predict treatment outcome. Nevertheless, prospective validation studies in large and heterogeneous patient cohorts are still lacking and are needed to bring these techniques into clinical use. Moreover, the methodological approach is challenging, with several poorly examined parameters that most likely affect the resulting network patterns. These fundamental challenges affect all potential applications of EEG/MEG source connectivity analysis, be it in a resting, spiking, or ictal state, as well as its application to cognitive activation of eloquent areas in presurgical evaluation. Nonetheless, such methods can allow unique insights into physiological and pathological brain function and have great potential in (clinical) neuroscience.
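
    To make the idea of source connectivity concrete, the toy sketch below computes a functional connectivity matrix from reconstructed source time courses. The review does not prescribe a particular metric; plain Pearson correlation and simulated time courses are used here purely for illustration.

    import numpy as np

    n_sources, n_samples = 68, 5000              # e.g. one time course per cortical parcel
    sources = np.random.randn(n_sources, n_samples)

    connectivity = np.corrcoef(sources)          # (n_sources, n_sources) matrix
    np.fill_diagonal(connectivity, 0.0)          # ignore self-connections

    node_strength = np.abs(connectivity).sum(axis=1)   # crude per-region network summary
    print("strongest node:", int(node_strength.argmax()))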

    Convolutional Spiking Neural Networks for Detecting Anticipatory Brain Potentials Using Electroencephalogram

    Spiking neural networks (SNNs) are receiving increased attention as a means to develop "biologically plausible" machine learning models. These networks mimic synaptic connections in the human brain and produce spike trains, which can be approximated by binary values, avoiding the high computational cost of floating-point arithmetic circuits. Recently, convolutional layers have been added to SNNs to combine the feature extraction power of convolutional networks with the computational efficiency of SNNs. In this paper, the feasibility of using a convolutional spiking neural network (CSNN) as a classifier to detect anticipatory slow cortical potentials related to braking intention in human participants, measured with an electroencephalogram (EEG), was studied. The EEG data were collected during an experiment in which participants operated a remote-controlled vehicle on a testbed designed to simulate an urban environment. Participants were alerted to an incoming braking event via an audio countdown to elicit anticipatory potentials that were then measured with the EEG. The CSNN's performance was compared to a standard convolutional neural network (CNN) and three graph neural networks (GNNs) via 10-fold cross-validation. The results showed that the CSNN outperformed the other neural networks.
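
    The following sketch shows one way a small convolutional spiking classifier for windowed EEG could be wired up, with a hand-rolled leaky integrate-and-fire (LIF) layer and a rate-based read-out. This is not the paper's architecture: layer sizes, time steps and the soft-reset LIF are assumptions, and training (e.g. with surrogate gradients) is omitted.

    import torch
    import torch.nn as nn

    class LIF(nn.Module):
        """Leaky integrate-and-fire: decay the membrane, spike at threshold, soft-reset."""
        def __init__(self, beta=0.9, threshold=1.0):
            super().__init__()
            self.beta, self.threshold = beta, threshold

        def forward(self, current, mem):
            mem = self.beta * mem + current                  # leaky integration
            spikes = (mem >= self.threshold).float()         # emit binary spikes
            return spikes, mem - spikes * self.threshold     # soft reset after firing

    class CSNN(nn.Module):
        def __init__(self, n_channels=8, n_classes=2):
            super().__init__()
            self.conv = nn.Conv1d(n_channels, 16, kernel_size=5, padding=2)
            self.lif = LIF()
            self.readout = nn.Linear(16, n_classes)

        def forward(self, x, n_steps=25):
            # x: (batch, channels, time); drive the LIF layer repeatedly and
            # classify from the average spike rate (a simple rate code).
            cur = self.conv(x).mean(dim=-1)
            mem, rate = torch.zeros_like(cur), torch.zeros_like(cur)
            for _ in range(n_steps):
                spk, mem = self.lif(cur, mem)
                rate = rate + spk / n_steps
            return self.readout(rate)

    logits = CSNN()(torch.randn(4, 8, 250))    # 4 trials, 8 channels, 1 s at 250 Hz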

    Design of MRI Structured Spiking Neural Networks and Learning Algorithms for Personalized Modelling, Analysis, and Prediction of EEG Signals

    This paper proposes a novel method and algorithms for the design of MRI-structured personalized 3D spiking neural network models (MRI-SNN) for better analysis, modeling, and prediction of EEG signals. It proposes a novel gradient-descent learning algorithm integrated with a spike-time-dependent plasticity (STDP) algorithm. The models capture informative personal patterns of interaction between EEG channels, in contrast to single-EEG-signal modeling methods or spike-based approaches that do not use personal MRI data to pre-structure a model. The proposed models can not only learn and accurately model measured EEG data, but also predict signals at 3D model locations that correspond to non-monitored brain areas, e.g. other EEG channels from which data have not been collected. This is the first study in this respect. As an illustration of the method, personalized MRI-SNN models are created and tested on EEG data from two subjects. The models achieve better prediction accuracy and a better understanding of the personalized EEG signals than traditional methods, owing to the integration of MRI and EEG information. The models are interpretable and facilitate a better understanding of related brain processes. This approach can be applied to personalized modeling, analysis, and prediction of EEG signals across brain studies such as the study and prediction of epilepsy, peri-perceptual brain activities, brain-computer interfaces, and others.
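
    As a minimal illustration of the plasticity component mentioned above, the sketch below implements a pair-based spike-time-dependent plasticity (STDP) update for a single synapse. The time constants, learning rates and weight bounds are illustrative assumptions, not the values used in the MRI-SNN models, and the gradient-descent part of the proposed learning algorithm is not shown.

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
        """Update one synaptic weight from the latest pre/post spike times (ms)."""
        dt = t_post - t_pre
        if dt > 0:   # pre fires before post: potentiate
            w += a_plus * np.exp(-dt / tau_plus)
        else:        # post fires before pre: depress
            w -= a_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, 0.0, 1.0))    # keep the weight in a bounded range

    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=14.0)   # causal pairing strengthens the synapse
    w = stdp_update(w, t_pre=30.0, t_post=22.0)   # anti-causal pairing weakens it
    print(round(w, 4))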

    Synch-Graph: multisensory emotion recognition through neural synchrony via graph convolutional networks

    Human emotions are essentially multisensory, where emotional states are conveyed through multiple modalities such as facial expression, body language, and non-verbal and verbal signals. Multimodal or multisensory learning is therefore crucial for recognising emotions and interpreting social signals. Existing multisensory emotion recognition approaches focus on extracting features from each modality, while ignoring the importance of constant interaction and co-learning between modalities. In this paper, we present a novel bio-inspired approach based on neural synchrony in audio-visual multisensory integration in the brain, named Synch-Graph. We model multisensory interaction using spiking neural networks (SNNs) and explore the use of graph convolutional networks (GCNs) to represent and learn neural synchrony patterns. We hypothesise that modelling interactions between modalities will improve the accuracy of emotion recognition. We have evaluated Synch-Graph on two state-of-the-art datasets and achieved overall accuracies of 98.3% and 96.82%, which are significantly higher than those of existing techniques.
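
    The sketch below shows one plausible way to quantify neural synchrony between feature streams, using the phase-locking value (PLV) to build an adjacency matrix that a graph convolutional network could consume. The paper does not specify this metric; it and the simulated signals are assumptions for illustration only.

    import numpy as np
    from scipy.signal import hilbert

    def plv_matrix(signals):
        """signals: (n_nodes, n_samples) -> symmetric (n_nodes, n_nodes) PLV matrix."""
        phases = np.angle(hilbert(signals, axis=-1))     # instantaneous phase per node
        n = signals.shape[0]
        plv = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                diff = phases[i] - phases[j]
                plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * diff)))
        return plv

    adjacency = plv_matrix(np.random.randn(12, 2000))    # e.g. 12 audio-visual feature streams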

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user’s emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to open their minds to imagine potential further developments.
