7 research outputs found

    SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification

    Automatic classification of epileptic seizure types in electroencephalogram (EEG) data can enable more precise diagnosis and more efficient management of the disease. This task is challenging due to factors such as low signal-to-noise ratios, signal artefacts, high variance in seizure semiology among epileptic patients, and limited availability of clinical data. To overcome these challenges, in this paper we present SeizureNet, a deep learning framework that learns multi-spectral feature embeddings using an ensemble architecture for cross-patient seizure type classification. We used the recently released TUH EEG Seizure Corpus (V1.4.0 and V1.5.2) to evaluate the performance of SeizureNet. Experiments show that SeizureNet can reach a weighted F1 score of up to 0.94 for seizure-wise cross-validation and 0.59 for patient-wise cross-validation in scalp-EEG-based multi-class seizure type classification. We also show that the high-level feature embeddings learnt by SeizureNet considerably improve the accuracy of smaller networks through knowledge distillation for applications with low-memory constraints.
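The knowledge distillation mentioned above can be sketched as a soft-target loss: the student network is trained to match the teacher's temperature-softened output distribution. A minimal NumPy illustration of the standard formulation (the abstract does not specify SeizureNet's exact loss, so the temperature value and loss form are assumptions):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T**2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the large ensemble
    q = softmax(student_logits, T)  # predictions of the small network
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

In practice this term is combined with the usual cross-entropy on hard labels; the loss is zero exactly when the student reproduces the teacher's distribution.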

    Deep learning pipeline for quality filtering of MRSI spectra.

    With the rise of novel 3D magnetic resonance spectroscopy imaging (MRSI) acquisition protocols in clinical practice, which are capable of capturing a large number of spectra from a subject's brain, there is a need for an automated preprocessing pipeline that filters out bad-quality spectra and identifies contaminated but salvageable spectra prior to the metabolite quantification step. This work introduces such a pipeline based on an ensemble of deep-learning classifiers. The dataset consists of 36,338 spectra from one healthy subject and five brain tumor patients, acquired with an EPSI variant, which implemented a novel type of spectral editing named SLOtboom-Weng (SLOW) editing on a 7T MR scanner. The spectra were labeled manually by an expert into four classes of spectral quality as follows: (i) noise, (ii) spectra greatly influenced by lipid-related artifacts (deemed not to contain clinical information), (iii) spectra containing metabolic information slightly contaminated by lipid signals, and (iv) good-quality spectra. The AI model consists of three pairs of networks, each comprising a convolutional autoencoder and a multilayer perceptron network. In the classification step, the encoding half of the autoencoder is kept as a dimensionality reduction tool, while the fully connected layers are added to its output. Each of the three pairs of networks is trained on different representations of spectra (real, imaginary, or both), aiming at robust decision-making. The final class is assigned via a majority voting scheme. The F1 scores obtained on the test dataset for the four previously defined classes are 0.96, 0.93, 0.82, and 0.90, respectively. The lowest score, 0.82, was obtained for the least-represented class, spectra mildly influenced by lipids.
Not only does the proposed model minimise the required user interaction, but it also greatly reduces the computation time at the metabolite quantification step (by selecting a subset of spectra worth quantifying) and ensures that only clinically relevant information is displayed.
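The final-classification step described above (three autoencoder/MLP pairs, one vote each) can be sketched as follows. The tie-breaking rule, falling back to the first-listed classifier, is an assumption; the abstract does not state one:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class predicted by the majority of classifiers.

    `predictions` is one label per classifier, e.g. from the models
    trained on real, imaginary, and combined spectral representations.
    Tie-breaking (first-listed classifier wins) is an assumption."""
    counts = Counter(predictions)
    top = counts.most_common()
    best, best_n = top[0]
    if len(top) > 1 and top[1][1] == best_n:
        return predictions[0]  # tie: fall back to the first classifier
    return best
```

With three voters and four classes, a two-vote majority exists unless all three disagree, so the fallback only matters in that one case.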

    Deep learning as a tool for neural data analysis: Speech classification and cross-frequency coupling in human sensorimotor cortex.

    A fundamental challenge in neuroscience is to understand what structure in the world is represented in spatially distributed patterns of neural activity from multiple single-trial measurements. This is often accomplished by learning simple, linear transformations between neural features and features of the sensory stimuli or motor task. While successful in some early sensory processing areas, linear mappings are unlikely to be ideal tools for elucidating nonlinear, hierarchical representations of higher-order brain areas during complex tasks, such as the production of speech by humans. Here, we apply deep networks to predict produced speech syllables from a dataset of high-gamma cortical surface electric potentials recorded from human sensorimotor cortex. We find that deep networks had higher decoding prediction accuracy compared to baseline models. Having established that deep networks extract more task-relevant information from neural datasets than linear models (i.e., higher predictive accuracy), we next sought to demonstrate their utility as a data analysis tool for neuroscience. We first show that the deep networks' confusions revealed hierarchical latent structure in the neural data, which recapitulated the underlying articulatory nature of speech motor control. We next broadened the frequency features beyond high gamma and identified a novel high-gamma-to-beta coupling during speech production. Finally, we used deep networks to compare task-relevant information in different neural frequency bands, and found that the high-gamma band contains the vast majority of information relevant for the speech prediction task, with little-to-no additional contribution from lower-frequency amplitudes. Together, these results demonstrate the utility of deep networks as a data analysis tool for basic and applied neuroscience.
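The confusion-based analysis described above can be illustrated with a small sketch: build a confusion matrix from the network's syllable predictions, then symmetrise it into a confusability matrix whose large entries mark syllable pairs the network treats as similar, suitable input for hierarchical clustering. Shapes and labels here are illustrative, not taken from the paper:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix C[i, j] = number of times class i was predicted as class j."""
    C = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C

def confusability(C):
    """Row-normalise to per-class confusion rates, then symmetrise the
    off-diagonal confusions. High values mark class pairs the decoder
    mixes up, which can be fed to hierarchical clustering."""
    P = C / np.maximum(C.sum(axis=1, keepdims=True), 1e-12)
    S = 0.5 * (P + P.T)
    np.fill_diagonal(S, 0.0)  # ignore correct classifications
    return S
```

Clustering the resulting matrix groups syllables by how often the decoder confuses them, which is how latent articulatory structure can surface from a trained network.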

    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Understanding the cognitive and functional behaviour of the brain through its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records the electrical activity of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, diagnosing various neurological and psychiatric disorders, and other applications. Since EEG signals are sensitive to activities other than those of the brain, such as eye blinking, eye movement and head movement, it is not possible to record EEG signals without noise. Thus, it is very important to use an efficient noise reduction technique to obtain more accurate recordings. Numerous traditional techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations and machine learning techniques, have been proposed for reducing the noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To achieve this, Harrell-Davis decile values for the reconstructed signals' signal-to-noise ratio distributions were compared, and it was found that the GRU-AE outperformed PCA for noise reduction of EEG signals.
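The Harrell-Davis decile comparison mentioned above rests on a standard estimator: each quantile is a Beta-weighted average of all order statistics, which is more robust than a single order statistic. A minimal NumPy/SciPy sketch of the estimator itself (the paper's exact comparison procedure is not reproduced here):

```python
import numpy as np
from scipy import stats

def harrell_davis(x, q):
    """Harrell-Davis quantile estimator: a weighted mean of all order
    statistics, with Beta-distribution weights centred on quantile q."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    i = np.arange(1, n + 1)
    # weight_i = P((i-1)/n < Beta(a, b) <= i/n)
    w = stats.beta.cdf(i / n, a, b) - stats.beta.cdf((i - 1) / n, a, b)
    return float(np.sum(w * x))

def hd_deciles(x):
    """All nine deciles of a distribution, e.g. of per-trial SNR values."""
    return [harrell_davis(x, q / 10) for q in range(1, 10)]
```

Comparing the nine deciles of two SNR distributions (denoised vs. raw) characterises where in the distribution a method helps, rather than reducing the comparison to a single mean.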

    Noise Reduction in EEG Signals using Convolutional Autoencoding Techniques

    The presence of noise in electroencephalography (EEG) signals can significantly reduce the accuracy of signal analysis. This study assesses to what extent stacked autoencoders designed using one-dimensional convolutional neural network layers can reduce noise in EEG signals. The EEG signals, obtained from 81 people who performed three independent button-pressing tasks, were processed by a two-layer one-dimensional convolutional autoencoder (CAE). The signal-to-noise ratios (SNRs) of the signals before and after processing were calculated and the distributions of the SNRs were compared. The performance of the model was compared to the noise reduction performance of Principal Component Analysis (PCA), with 95% explained variance, by comparing the Harrell-Davis decile differences between the SNR distributions of both methods and the raw signal SNR distribution for each task. It was found that the CAE outperformed PCA for the full dataset across all three tasks; however, the CAE did not outperform PCA for the person-specific datasets in any of the three tasks. The results indicate that CAEs can perform better than PCA for noise reduction in EEG signals, but the performance of the model may depend on training set size.
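The PCA baseline with 95% explained variance can be sketched as follows on synthetic data: project onto the fewest principal components reaching the variance threshold, reconstruct, and compare SNRs. The rank-3 toy signal and noise level are assumptions for illustration, not the study's data:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR in dB of `noisy` relative to a known clean reference signal."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def pca_denoise(X, explained=0.95):
    """Keep the fewest principal components explaining `explained` of the
    variance, then reconstruct (the PCA baseline used in the study)."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), explained)) + 1
    return mu + (U[:, :k] * s[:k]) @ Vt[:k]

# Toy data: a strong rank-3 structure buried in broadband noise.
rng = np.random.default_rng(0)
basis = np.linalg.qr(rng.standard_normal((64, 3)))[0].T  # 3 orthonormal patterns
clean = rng.standard_normal((200, 3)) @ (10 * basis)     # rank-3 "EEG" structure
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy)
```

On this toy example `snr_db(clean, denoised)` exceeds `snr_db(clean, noisy)`, because the discarded components carry mostly noise; with real EEG the clean reference is unknown and SNR must be estimated differently.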

    Leveraging Artificial Intelligence to Improve EEG-fNIRS Data Analysis

    Functional near-infrared spectroscopy (fNIRS) has emerged as a neuroimaging technique that allows for non-invasive and long-term monitoring of cortical hemodynamics.
Multimodal neuroimaging technologies in clinical settings allow for the investigation of acute and chronic neurological diseases. In this work, we focus on epilepsy, a chronic disorder of the central nervous system affecting almost 50 million people worldwide and predisposing affected individuals to recurrent seizures. Seizures are transient aberrations in the brain's electrical activity that lead to disruptive physical symptoms such as acute or chronic changes in cognitive skills, sensory hallucinations, or whole-body convulsions. Approximately a third of epileptic patients are recalcitrant to pharmacological treatment, and these intractable seizures pose a serious risk of injury and decrease overall quality of life. In this work, we study 1) the utility of hemodynamic information derived from fNIRS signals in a seizure detection task and the benefit it provides in a multimodal setting as compared to electroencephalographic (EEG) signals alone, and 2) the ability of neural signals, derived from EEG, to predict hemodynamics in the brain in an effort to better understand the epileptic brain. Based on retrospective EEG-fNIRS data collected from 40 epileptic patients and utilizing novel deep learning models, the first study in this thesis suggests that fNIRS signals offer increased sensitivity and specificity for seizure detection when compared to EEG alone. Model validation was performed using the well-documented and widely referenced open-source CHB-MIT dataset before using our in-house multimodal EEG-fNIRS dataset. The results from this study demonstrated that fNIRS improves seizure detection as compared to EEG alone and motivated the subsequent experiments, which determined the predictive capacity of an in-house deep learning model to decode hemodynamic resting state signals from full-spectrum and specific frequency-band-encoded neural resting state signals (seizure-free signals).
These results suggest that a multimodal autoencoder can learn multimodal relations to predict resting state signals. Findings further suggested that higher EEG frequency ranges predict hemodynamics with lower reconstruction error in comparison to lower EEG frequency ranges. Furthermore, functional connections show similar spatial patterns between the experimental resting state and the model's fNIRS predictions. This demonstrates for the first time that intermodal autoencoding from neural signals can predict cerebral hemodynamics to a certain extent. The results of this thesis advance the potential of using EEG-fNIRS for practical clinical tasks (seizure detection, hemodynamic prediction) as well as for examining fundamental relationships present in the brain using deep learning models. If the number of available datasets increases in the future, these models may be able to generalize their predictions, which could eventually lead to EEG-fNIRS technology being routinely used as a viable clinical tool in a wide variety of neuropathological disorders.
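The frequency-band-specific encoding of EEG discussed above presupposes splitting the signal into conventional frequency bands before feeding each band to the model. A minimal NumPy sketch using a brick-wall FFT filter; the band edges follow common EEG conventions and are not quoted from the thesis:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero out FFT coefficients outside [lo, hi] Hz and invert.
    A crude brick-wall filter; real pipelines would use FIR/IIR filters."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

# Conventional EEG band edges in Hz (assumed, not taken from the thesis).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}
```

Each band-limited signal can then be encoded separately, which is what makes a per-band comparison of hemodynamic reconstruction error possible.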

    Feature Extraction with Stacked Autoencoders for Epileptic Seizure Detection

    Scalp electroencephalogram (EEG), a recording of the brain's electrical activity, has long been used to diagnose and detect epileptic seizures. However, most researchers have implemented seizure detectors using manually hand-engineered features from observed EEG data, which might not scale well to new patterns of seizures. In this paper, we investigate the possibility of utilising unsupervised feature learning, a recent development in deep learning, to automatically learn features from raw, unlabelled EEG data that are representative enough to be used in seizure detection. We develop patient-specific seizure detectors by using stacked autoencoders and logistic classifiers. A two-step training procedure, consisting of greedy layer-wise training followed by global fine-tuning, was used to train our detectors. The evaluation was performed using a labelled dataset from the CHB-MIT database, and the results showed that all of the test seizures were detected with a mean latency of 3.36 seconds and a low false detection rate.
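The greedy layer-wise step of the two-step training described above can be sketched in miniature: train one autoencoder layer on the raw data, then train the next layer on the first layer's codes. Tied-weight linear autoencoders on toy data are a simplification of the paper's networks, and the fine-tuning step and logistic output layer are omitted:

```python
import numpy as np

def train_autoencoder(X, hidden, lr=0.005, epochs=2000, seed=0):
    """One tied-weight linear autoencoder layer trained by gradient
    descent on the reconstruction MSE; returns the encoder weights."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, hidden))
    for _ in range(epochs):
        E = X @ W @ W.T - X  # reconstruction error with tied decoder W.T
        grad = (X.T @ (E @ W) + (E.T @ X) @ W) / n
        W -= lr * grad
    return W

# Greedy layer-wise pretraining: layer 1 on the raw windows, then
# layer 2 on layer 1's codes.
rng = np.random.default_rng(1)
X = rng.standard_normal((256, 3)) @ rng.standard_normal((3, 16))  # toy "EEG windows"
X /= X.std()
W1 = train_autoencoder(X, hidden=8)
W2 = train_autoencoder(X @ W1, hidden=4)
codes = (X @ W1) @ W2  # features a logistic classifier would consume
```

After pretraining, the paper's procedure stacks the encoders with a logistic classifier on top and fine-tunes the whole network globally with the labelled seizure data.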