An Automated System for Epilepsy Detection using EEG Brain Signals based on Deep Learning Approach
Epilepsy is a neurological disorder, and electroencephalography (EEG) is a commonly used clinical approach for its detection. Manual inspection of EEG brain signals is a time-consuming and laborious process that places a heavy burden on neurologists and affects their performance. Several automatic techniques based on traditional approaches have been proposed to assist neurologists in detecting binary epilepsy scenarios, e.g. seizure vs. non-seizure or normal vs. ictal. These methods do not perform well on the ternary case, e.g. ictal vs. normal vs. inter-ictal; the maximum accuracy reported for this case by state-of-the-art methods is 97±1%. To overcome this problem, we propose a deep learning system that is an ensemble of pyramidal one-dimensional convolutional neural network (P-1D-CNN) models. In a CNN model, the bottleneck is the large number of learnable parameters. P-1D-CNN is based on a refinement approach and uses 60% fewer parameters than traditional CNN models. Further, to overcome the limitation of small amounts of data, we propose augmentation schemes for learning the P-1D-CNN model. In almost all cases concerning epilepsy detection, the proposed system achieves an accuracy of 99.1±0.9% on the University of Bonn dataset.
Epileptic multi-seizure type classification using electroencephalogram signals from the Temple University Hospital Seizure Corpus: A review
Epilepsy is one of the most prevalent neurological diseases, affecting about 1% of the world's population. Seizure detection and classification are difficult tasks and are ongoing challenges in biomedical signal processing to enhance medical diagnosis. This paper presents and highlights the unique frequency and amplitude information found within multiple seizure types, including their morphologies, to aid the development of future seizure classification algorithms. Whilst many published works in the literature have reported on seizure detection using electroencephalogram (EEG), there has yet to be an exhaustive review detailing multi-seizure type classification using EEG. Therefore, this paper also includes a detailed review of classification performance on the Temple University Hospital Seizure Corpus (TUSZ) dataset, covering focal vs. generalised classification and multi-seizure type classification. Deep learning techniques have a higher overall average performance for focal and generalised classification than machine learning techniques, whereas hybrid deep learning approaches have the highest overall average performance for multi-seizure type classification. Finally, this paper highlights the limitations of the TUSZ dataset and suggests future work, including the curation of a standardised training and testing dataset from the TUSZ that would allow a proper comparison of classification methods and spur advancement in the field.
Epileptic Seizure Detection Based on EEG Signals and CNN
Epilepsy is a neurological disorder that affects approximately fifty million people according to the World Health Organization. While electroencephalography (EEG) plays an important role in monitoring the brain activity of patients with epilepsy and in diagnosing epilepsy, an expert is needed to analyze all EEG recordings to detect epileptic activity. This process is time-consuming and tedious, and a timely and accurate diagnosis of epilepsy is essential to initiate antiepileptic drug therapy and subsequently reduce the risk of future seizures and seizure-related complications. In this study, a convolutional neural network (CNN) operating on raw EEG signals, instead of manually extracted features, was used to distinguish ictal, preictal, and interictal segments for epileptic seizure detection. We compared the performance of time-domain and frequency-domain signals in detecting epileptic activity on the intracranial Freiburg and scalp CHB-MIT databases to explore the potential of these representations. Three types of experiments, involving two binary classification problems (interictal vs. preictal and interictal vs. ictal) and one three-class problem (interictal vs. preictal vs. ictal), were conducted to explore the feasibility of this method. Using frequency-domain signals, average accuracies of 96.7, 95.4, and 92.3% were obtained for the three experiments on the Freiburg database, and 95.6, 97.5, and 93% on the CHB-MIT database. Using time-domain signals, the average accuracies were 91.1, 83.8, and 85.1% on the Freiburg database, but only 59.5, 62.3, and 47.9% on the CHB-MIT database. Based on these results, the three cases are effectively detected using frequency-domain signals, whereas effective identification of the three cases from time-domain inputs is achieved for only some patients.
Overall, the classification accuracies of frequency-domain signals are significantly higher than those of time-domain signals. In addition, frequency-domain signals have greater potential than time-domain signals for CNN applications.
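The frequency-domain representation compared above starts by converting each EEG segment to its spectrum. A minimal pure-Python sketch of that step is shown below, using a naive DFT on a synthetic 10 Hz tone standing in for an EEG channel; this is an illustrative transform, not the preprocessing pipeline used in the study.

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT; returns the magnitude spectrum of a real signal
    (first half of the bins, since the spectrum is symmetric)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

fs = 64  # hypothetical sampling rate, 1-second window
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # 10 Hz tone
spectrum = dft_magnitudes(signal)
peak_hz = spectrum.index(max(spectrum))  # bin index equals Hz for a 1 s window
print(peak_hz)  # 10
```

Real pipelines would use a fast FFT and band-wise power features, but the principle is the same: a dominant oscillation that is spread across many time samples collapses into a single prominent spectral bin.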
Brainwave-Based Human Emotion Estimation using Deep Neural Network Models for Biofeedback
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Emotion is a state that comprehensively represents human feeling, thought, and behavior, and thus plays an important role in interpersonal communication. Emotion estimation aims to automatically discriminate different emotional states using physiological and non-physiological signals acquired from humans, to achieve effective communication and interaction between humans and machines. Brainwave-based emotion estimation is one of the most commonly used and efficient approaches in emotion estimation research. The technology shows great promise for treating emotional disorders, brain-computer interfaces for disabilities, entertainment, and many other research areas. In this thesis, various methods, schemes, and frameworks are presented for electroencephalogram (EEG) based human emotion estimation. Firstly, a hybrid feature-dimension reduction scheme is presented using a total of 14 different features extracted from EEG recordings. The scheme combines these distinct features in the feature space using both supervised and unsupervised feature selection processes. Maximum Relevance Minimum Redundancy (mRMR) is applied to re-order the combined features for maximum relevance to the emotion labels and minimum redundancy among features. The resulting features are further reduced with Principal Component Analysis (PCA) to extract the principal components. Experimental results show that the proposed work outperforms state-of-the-art methods under the same settings on the publicly available Database for Emotional Analysis using Physiological Signals (DEAP) dataset. Secondly, a disentangled adaptive-noise-learning β-variational autoencoder (β-VAE) combined with a long short-term memory (LSTM) model is proposed for emotion recognition from EEG recordings, also evaluated on the public DEAP dataset.
First, the EEG time-series data are transformed into video-like EEG image data by applying the Azimuthal Equidistant Projection (AEP) to the original 3-D EEG-sensor coordinates, yielding 2-D projected electrode locations. The Clough-Tocher scheme is then applied to interpolate the scattered power measurements over the scalp and to estimate values between the electrodes on a 32x32 mesh. After that, the β-VAE-LSTM algorithm is used to estimate the accuracy of the quadrant (arousal-valence) classification. A comparison between the β-VAE-LSTM model and other classic methods, conducted under the same experimental settings, shows that the proposed model is effective. Finally, a novel real-time emotion detection system based on EEG signals from a portable headband is presented and integrated into the interactive film 'RIOT'. First, the requirements of the interactive film were collected and a protocol for data collection using a portable EEG sensor (Emotiv Epoc) was designed. Then, a portable EEG emotion database (PEED) was built from 10 participants, with emotion labels obtained using both self-reporting and video annotation tools. After that, various feature extraction, feature selection, validation, and classification methods were explored to build a practical system for real-time detection. In the end, the emotion detection system was trained, integrated into the interactive film for real-time use, and fully evaluated. The experimental results demonstrate satisfactory emotion detection accuracy and real-time performance.
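The AEP step described above flattens electrode positions on the scalp sphere into a 2-D plane. A minimal sketch of that projection is given below; the formula and the example coordinates are standard illustrations, not the thesis's exact implementation.

```python
import math

def azimuthal_equidistant(x, y, z):
    """Project a 3-D point on the scalp sphere to 2-D, preserving angular
    distance from the top of the head (the z-axis pole). Hypothetical sketch."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)   # angle from the pole
    phi = math.atan2(y, x)     # azimuth around the pole
    return theta * math.cos(phi), theta * math.sin(phi)

# The vertex electrode (roughly at the pole) maps to the image centre:
print(azimuthal_equidistant(0.0, 0.0, 1.0))  # (0.0, 0.0)
```

Points further from the vertex land proportionally further from the origin, so the 2-D image preserves the relative spread of electrodes before the Clough-Tocher interpolation fills in the mesh between them.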
Intelligent Biosignal Analysis Methods
This book describes recent efforts in improving intelligent systems for automatic biosignal analysis. It focuses on machine learning and deep learning methods used for classification of different organism states and disorders based on biomedical signals such as EEG, ECG, HRV, and others
The 8th International Conference on Time Series and Forecasting
The aim of ITISE 2022 is to create a friendly environment that can lead to the establishment or strengthening of scientific collaborations and exchanges among attendees. ITISE 2022 is therefore soliciting high-quality original research papers (including significant works in progress) on any aspect of time series analysis and forecasting, in order to motivate the generation and use of new knowledge, computational techniques, and forecasting methods in a wide range of fields.
OptiWindSeaPower: Optimal Integrated Management of Offshore Wind Farms Using New Mathematical Models (Part 2)
The article "OptiWindSeaPower: Gestión Integral Óptima de Parques Eólicos Offshore Mediante Nuevos Modelos Matemáticos" [1], published in AEND magazine issue 86, presented the monitoring system developed in the laboratory to analyse the condition of the main structural elements of a wind turbine. That system was based on Macro Fiber Composite (MFC) sensors and actuators.
The inspection was carried out by generating and propagating ultrasonic Lamb waves; the acquired signals carry complex information due to the superposition of the different propagation modes characteristic of this type of wave, together with the reflections produced by material discontinuities and defects.
MFC transducers have proven effective in control applications, in vibration and noise applications, as well as in structural health monitoring and energy harvesting, and they are well suited to curved surfaces. MFCs are composites of unidirectionally aligned piezoceramic fibres; the electrodes are interdigitated on a polyamide film, and the whole is embedded in an adhesive polymer-matrix compound.
A wide variety of signal-processing methods have been applied, as detailed in Annex I of the previous article [1]. The present article processes those signals by first filtering them with wavelet transforms. Next, feature-extraction algorithms are applied to the signals, and the extracted features are used to classify the scenarios considered in the experiments.
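The wavelet filtering step mentioned above can be illustrated with a minimal Haar-wavelet sketch: decompose the signal, zero the small detail coefficients, and reconstruct. This is a one-level toy example; the article's actual wavelet family, decomposition depth, and thresholds are not specified here.

```python
def haar_step(x):
    """One level of the Haar wavelet transform: pairwise averages and details."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def denoise(x, threshold):
    """Filter a signal by zeroing small Haar detail coefficients,
    then reconstructing (hard thresholding, one level)."""
    avg, det = haar_step(x)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]  # inverse Haar step
    return out

noisy = [1.0, 1.1, 4.0, 4.2, 1.0, 0.9, 4.1, 3.9]
print(denoise(noisy, 0.2))  # small pairwise jitter is smoothed away
```

Multi-level decompositions work the same way, recursing on the averages; thresholding the detail coefficients suppresses low-amplitude noise while keeping the large-scale structure of the Lamb-wave signal intact.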