10 research outputs found
Automatic Identification of Epileptic Seizures from EEG Signals using Sparse Representation-based Classification
Identifying seizure activity in non-stationary electroencephalography (EEG) is a challenging task: it is time-consuming, burdensome, dependent on expensive human expertise, and subject to error and bias. A computerized seizure identification scheme can eliminate these problems, assist clinicians, and benefit epilepsy research. Several attempts have been made so far to develop automatic systems that help neurophysiologists accurately identify epileptic seizures. In this research, a fully automated system is presented to detect the various states of epileptic seizures. The proposed method is based on sparse representation-based classification (SRC) theory and a proposed dictionary-learning scheme applied to EEG signals. Furthermore, the method does not require the additional preprocessing and feature extraction that are common in existing methods. It achieved 100% sensitivity, specificity, and accuracy in 8 out of 9 scenarios, and it is robust to measurement noise down to an SNR of 0 dB. The proposed method outperformed state-of-the-art algorithms and other common methods in terms of sensitivity, specificity, and accuracy. Moreover, it covers the most comprehensive set of scenarios for epileptic seizure detection, including different combinations of 2- to 5-class scenarios. The proposed automatic seizure identification method can reduce the burden on medical professionals analyzing large volumes of data through visual inspection, and can also benefit under-resourced communities suffering from a shortage of functional magnetic resonance imaging (fMRI) equipment and specialist physicians.
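The core idea of SRC can be sketched compactly: a test signal is represented in terms of class-wise training dictionaries, and the predicted class is the one whose sub-dictionary reconstructs the signal with the smallest residual. The toy sketch below (not the paper's method) uses per-class least squares as a stand-in for the sparse (l1) coding step, with hypothetical sinusoidal "atoms" in place of learned EEG dictionaries.

```python
import numpy as np

def src_classify(test_sig, dictionaries):
    """Assign test_sig to the class whose sub-dictionary reconstructs it
    with the smallest residual (least-squares stand-in for l1 sparse coding)."""
    residuals = []
    for D in dictionaries:  # D: (n_samples, n_atoms) for one class
        coef, *_ = np.linalg.lstsq(D, test_sig, rcond=None)
        residuals.append(np.linalg.norm(test_sig - D @ coef))
    return int(np.argmin(residuals))

# toy example: two classes with distinct frequency content
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128, endpoint=False)
D0 = np.column_stack([np.sin(2*np.pi*3*t + p) for p in rng.uniform(0, np.pi, 5)])
D1 = np.column_stack([np.sin(2*np.pi*20*t + p) for p in rng.uniform(0, np.pi, 5)])
x = np.sin(2*np.pi*20*t + 0.3)          # resembles class 1
print(src_classify(x, [D0, D1]))        # → 1
```

In the paper itself the dictionaries are learned from EEG data and the coding is sparse, but the residual-based decision rule is the same.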
Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals
Data Availability Statement: The data related to this article are publicly available on the GitHub platform under the title Baradaran emotion dataset. Copyright © 2023 by the authors.
Automatic emotion recognition from electroencephalogram (EEG) signals can be considered the main component of brain–computer interface (BCI) systems. In recent years, many researchers have presented various algorithms for the automatic classification of emotions from EEG signals and have achieved promising results; however, lack of stability, high error, and low accuracy remain the central gaps in this research. Consequently, a model offering stability, high accuracy, and low error is essential for the automatic classification of emotions. In this research, a model based on deep convolutional neural networks (DCNNs) is presented that can classify three emotions (positive, negative, and neutral) from EEG signals evoked by musical stimuli with high reliability. For this purpose, a comprehensive database of EEG signals was collected while volunteers listened to positive and negative music to stimulate the corresponding emotional states. The architecture of the proposed model consists of six convolutional layers followed by two fully connected layers. Different feature-learning and hand-crafted feature selection/extraction algorithms were investigated and compared for emotion classification. The proposed model achieved 98% accuracy for two-class (positive and negative) and 96% for three-class (positive, neutral, and negative) emotion classification, which is very promising compared with previous research. For a fuller evaluation, the proposed model was also tested in noisy environments; across a wide range of SNRs, the classification accuracy remained above 90%. Due to its high performance, the proposed model can be used in brain–computer user environments.
This research received no external funding.
Deep Learning for Detecting Multi-Level Driver Fatigue Using Physiological Signals: A Comprehensive Approach
Data Availability Statement: The data were collected under Tabriz University's ethics committee (Tabriz, Iran); access is private, and the data are not publicly available. Copyright © 2023 by the authors.
A large share of traffic accidents is related to driver fatigue. In recent years, many studies have been conducted to detect fatigue and warn drivers. In this research, a new approach is presented to detect multi-level driver fatigue. For this aim, a multi-level driver-fatigue diagnostic database was developed based on physiological signals including ECG, EEG, EMG, and respiratory effort. The EEG signal was used for processing, and the other recorded signals were used to confirm the driver's fatigue, so that confirmation did not rely on self-report questionnaires alone. A customized end-to-end architecture based on generative adversarial networks and convolutional neural networks was utilized to select/extract features and classify different levels of fatigue. To reduce uncertainty, type-2 fuzzy sets were used in this architecture instead of activation functions such as ReLU and Leaky ReLU, and the performance of each was investigated. The final accuracies obtained in the three scenarios considered (two-level, three-level, and five-level) were 96.8%, 95.1%, and 89.1%, respectively. Given the model's strong performance in identifying five different levels of driver fatigue with high accuracy, it can be employed in practical driver-fatigue applications to warn drivers.
This research received no external funding.
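The abstract's idea of replacing ReLU-style activations with type-2 fuzzy sets can be illustrated with a small sketch. The function below is my own simplification, not the paper's formulation: an interval type-2 fuzzy activation that gates the input by the average of a lower and an upper sigmoid membership whose slopes differ by a `spread` parameter, so the gap between the two memberships models the uncertainty a crisp activation ignores.

```python
import numpy as np

def it2_fuzzy_activation(x, spread=0.5):
    """Illustrative interval type-2 fuzzy activation (not the paper's exact
    formulation): gate x by the mean of a lower and an upper sigmoid
    membership whose slopes differ by `spread` (membership uncertainty)."""
    lower = 1.0 / (1.0 + np.exp(-(1.0 - spread) * x))
    upper = 1.0 / (1.0 + np.exp(-(1.0 + spread) * x))
    return x * 0.5 * (lower + upper)   # Swish-like gating of the input

x = np.array([-2.0, 0.0, 2.0])
print(it2_fuzzy_activation(x))
```

Like ReLU, this passes positive inputs nearly unchanged and suppresses negative ones, but smoothly and with an adjustable uncertainty band.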
Salient Arithmetic Data Extraction from Brain Activity via an Improved Deep Network
Data Availability Statement: The EEG dataset is available online at https://mindbigdata.com/opendb/ (accessed on 12 February 2020).
Interpreting neural activity in response to stimuli received from the surrounding environment is necessary to realize automatic brain decoding. Analyzing brain recordings corresponding to visual stimulation helps to infer the effects of visual perception on brain activity. In this paper, the impact of arithmetic concepts on vision-related brain recordings is considered, and an efficient convolutional neural network-based generative adversarial network (CNN-GAN) is proposed to map the electroencephalogram (EEG) to salient parts of the image stimuli. The first part of the proposed network consists of depth-wise one-dimensional convolution layers that classify the brain signals into 10 categories according to Modified National Institute of Standards and Technology (MNIST) image digits. The output of the CNN part is fed forward to a fine-tuned GAN in the proposed model. The performance of the CNN part is evaluated on the visually evoked 14-channel MindBigData dataset recorded by David Vivancos, corresponding to images of the 10 digits, achieving an average classification accuracy of 95.4%. The performance of the proposed CNN-GAN is evaluated with the saliency metrics SSIM and CC, equal to 92.9% and 97.28%, respectively. Furthermore, EEG-based reconstruction of MNIST digits is accomplished by transferring and tuning the improved CNN-GAN's trained weights.
This research received no external funding.
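Of the two saliency metrics the abstract reports, the correlation coefficient (CC) is simple enough to sketch directly: it is the Pearson correlation between a predicted and a ground-truth saliency map, computed over pixels. Below is a minimal sketch with a hypothetical toy map (SSIM is omitted for brevity).

```python
import numpy as np

def saliency_cc(pred, gt):
    """Pearson correlation coefficient between two saliency maps,
    computed over flattened, mean-centered pixel values."""
    p = pred.ravel() - pred.mean()
    g = gt.ravel() - gt.mean()
    return float((p @ g) / (np.linalg.norm(p) * np.linalg.norm(g)))

gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0   # toy ground-truth saliency blob
print(saliency_cc(gt, gt))                   # → 1.0 (identical maps)
print(saliency_cc(gt, 1.0 - gt))             # → -1.0 (inverted map)
```

CC near 1 means the predicted map rises and falls with the ground truth; the 97.28% figure in the abstract corresponds to a CC of about 0.97.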
Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network
Data Availability Statement: The EEG-ImageNet dataset used in this study is publicly available at https://tinyurl.com/eeg-visual-classification (accessed on 10 October 2022). Copyright © 2022 by the authors.
Understanding how the brain perceives input data from the outside world is one of the great targets of neuroscience. Neural decoding helps us model the connection between brain activity and visual stimulation, and the reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map the EEG signals to the visual saliency map corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers; the input of the GDN part is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed to the GAN part of the network to reconstruct the image saliency. The proposed GDN-GAN is trained on the Google Colaboratory Pro platform, and the saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, so the proposed network realizes image reconstruction from EEG signals.
This research received no external funding.
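The GDN part of the network above propagates features over a graph whose nodes are EEG channels. A minimal sketch of a single graph-convolution layer (the first-order, K=1 special case of the Chebyshev convolution the paper uses) is shown below; the 14-channel adjacency, features, and weights are hypothetical toy values, not the paper's.

```python
import numpy as np

def graph_conv(H, A, W):
    """One first-order graph-convolution layer (the K=1 Chebyshev case):
    propagate channel features H over the symmetrically normalized
    adjacency of the connectivity graph, then apply weights and ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # D^-1/2 (A + I) D^-1/2
    return np.maximum(0.0, A_norm @ H @ W)      # ReLU activation

rng = np.random.default_rng(1)
A = (rng.random((14, 14)) > 0.7).astype(float)  # toy 14-channel connectivity
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)  # symmetric, no self-loops
H = rng.standard_normal((14, 8))                # 8 features per channel
W = rng.standard_normal((8, 4))
print(graph_conv(H, A, W).shape)                # → (14, 4)
```

Higher-order Chebyshev layers extend this by mixing powers of the (scaled) graph Laplacian, letting information flow over multi-hop channel neighborhoods.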
A Novel Approach for Automatic Detection of Driver Fatigue Using EEG Signals Based on Graph Convolutional Networks
Data Availability Statement: In this research, experimental data were not recorded. Copyright © 2024 by the authors.
Nowadays, automatic detection of driver fatigue has become one of the important measures for preventing traffic accidents, and a great deal of research has been conducted in this field in recent years. However, fatigue diagnosis in recent research is binary and has limited operational capability. This research presents a multi-class driver-fatigue detection system based on electroencephalography (EEG) signals using deep learning networks. In the proposed system, a standard driving simulator was designed, and a database was collected from EEG recordings of 20 participants across five different classes of fatigue. In addition to self-report questionnaires, changes in physiological patterns are used to confirm the various stages of fatigue in the suggested model. To pre-process and process the signal, a combination of generative adversarial networks (GANs) and graph convolutional networks (GCNs) is used. The proposed deep model includes five graph convolutional layers, one dense layer, and one fully connected layer. The accuracies obtained for the proposed model are 99%, 97%, 96%, and 91%, respectively, for the four different practical cases considered. The proposed model is compared with models from recent methods and research and shows promising performance.
This research received no external funding.
EEG-based functional connectivity analysis of brain abnormalities: A systematic review study
Several imaging modalities and many signal-recording techniques have been used to study brain activity. Significant advancements in medical device technologies such as electroencephalographs have made it possible to record neural information with high temporal resolution. These recordings can be used to calculate the connections between different brain areas. It has been shown that brain abnormalities affect activity in different brain regions and, as a result, alter the connectivity patterns between them. This paper studies electroencephalogram (EEG) functional connectivity methods and investigates the impacts of brain abnormalities on functional connectivity. The effects of different brain abnormalities, including stroke, depression, emotional disorders, epilepsy, attention deficit hyperactivity disorder (ADHD), autism, and Alzheimer's disease, on the functional connectivity of EEG recordings are explored. The EEG-based metrics and network properties associated with each abnormality are discussed to compare the connectivity changes each one causes. The effects of therapy and medication on the EEG functional connectivity network of each abnormality are also reviewed.
This research received no external funding.
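The simplest functional connectivity metric covered by reviews of this kind is the channel-by-channel Pearson correlation of the EEG time series; each entry of the resulting matrix is one edge weight of the connectivity network. The sketch below uses hypothetical toy data (two channels sharing a 10 Hz source), not data from any study in the review.

```python
import numpy as np

# toy EEG: 4 channels x 500 samples; channels 0 and 1 share a source
rng = np.random.default_rng(2)
t = np.arange(500) / 250.0                 # assumed 250 Hz sampling rate
source = np.sin(2 * np.pi * 10 * t)        # shared 10 Hz alpha-like rhythm
eeg = rng.standard_normal((4, 500)) * 0.3  # channel noise
eeg[0] += source
eeg[1] += source

# functional connectivity as the channel-by-channel Pearson correlation
fc = np.corrcoef(eeg)                      # shape (4, 4), values in [-1, 1]
print(fc.shape)
print(fc[0, 1] > 0.8)                      # the coupled pair is strongly connected
```

Other metrics discussed in such reviews (coherence, phase-lag index, mutual information) replace the correlation but produce the same kind of channel-by-channel matrix, to which graph measures are then applied.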
Automatic Emotion Recognition from EEG Signals Using a Combination of Type-2 Fuzzy and Deep Convolutional Networks
.....
Automatically Identified EEG Signals of Movement Intention Based on CNN Network (End-To-End)
Data Availability Statement: The data are private due to lack of permission from the ethics committee. Copyright © 2022 by the authors.
Movement-based brain–computer interfaces (BCIs) rely significantly on the automatic identification of movement intent, and they allow patients with motor disorders to communicate with external devices. The extraction and selection of discriminative characteristics, which often increases computational complexity, is one of the issues with automatic movement-intention detection. This research introduces a novel method for automatically categorizing two-class and three-class movement-intention situations using EEG data. In the suggested technique, the raw EEG input is applied directly to a convolutional neural network (CNN) without the feature extraction or selection that, according to previous research, makes such approaches complex. The suggested network design includes ten convolutional layers followed by two fully connected layers. Due to its high accuracy, the suggested approach could be employed in BCI applications.
This research received no external funding.
Automatic Detection of Driver Fatigue Based on EEG Signals Using a Developed Deep Neural Network
In recent years, detecting driver fatigue has been a significant practical necessity. Even though several investigations have examined driver fatigue, there are relatively few standard datasets for identifying it. Earlier investigations relied on conventional methods based on manual characteristics to assess driver fatigue; such approaches need prior information for feature extraction, which can raise computational complexity. The current work proposes a driver-fatigue detection system, a fundamental necessity for minimizing road accidents. Data from 11 people were gathered for this purpose, resulting in a comprehensive dataset prepared in accordance with previously published criteria. A deep convolutional neural network–long short-term memory (CNN–LSTM) network is designed and developed to extract characteristics from raw EEG data corresponding to the six active areas A, B, C, D, E (based on a single channel), and F. The study's findings reveal that the suggested deep CNN–LSTM network can learn features hierarchically from raw EEG data and attain a higher accuracy than previous comparable approaches for two-stage driver-fatigue categorization. The suggested approach may be utilized to construct automatic fatigue detection systems because of its precision and high speed.