
    On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps

    Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts introduced during data acquisition contaminate these signals, complicating their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that still allows meaningful input reconstruction. Person-specific convolutional autoencoders are designed by manipulating the size of their latent space. An overlapping sliding-window technique is employed to segment the signals into windows of varying sizes. Five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed using input reconstruction capacity and classification utility. Findings indicate that the minimal latent space dimension is 25% of the size of the topographic maps for achieving maximum reconstruction capacity and maximizing classification accuracy, which is achieved with a window length of at least 1 s and a shift of 125 ms at a 128 Hz sampling rate. This research contributes to the body of knowledge with an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders.
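
    As a rough illustration of the windowing step above, the sketch below segments a signal into overlapping windows using the reported parameters (1 s windows, 125 ms shift, 128 Hz sampling). The function name and signature are illustrative, not taken from the paper.

```python
import numpy as np

def sliding_windows(signal, fs=128, win_s=1.0, shift_s=0.125):
    """Segment a 1-D signal into overlapping windows: 1 s windows
    shifted by 125 ms at 128 Hz, per the abstract's parameters."""
    win = int(round(win_s * fs))       # 128 samples per window
    shift = int(round(shift_s * fs))   # 16-sample hop (87.5% overlap)
    n = (len(signal) - win) // shift + 1
    return np.stack([signal[i * shift : i * shift + win] for i in range(n)])

# 10 s of synthetic EEG-like noise
sig = np.random.default_rng(0).standard_normal(10 * 128)
wins = sliding_windows(sig)
print(wins.shape)  # (73, 128)
```

    Each window would then be converted into the five frequency-domain topographic head-maps before being fed to the autoencoder.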

    BCI applications based on artificial intelligence oriented to deep learning techniques

    A Brain-Computer Interface (BCI) can decode the brain signals corresponding to the intentions of individuals who have lost neuromuscular connection, to reestablish communication and control external devices. To this aim, a BCI acquires brain signals such as Electroencephalography (EEG) or Electrocorticography (ECoG), applies signal processing techniques, and extracts features to train classifiers that provide proper control instructions. BCI development has increased in the last decades, improving performance through different signal processing techniques for feature extraction and artificial intelligence approaches for classification, such as deep learning-oriented classifiers. These can yield more accurate assistive systems and also enable an analysis of how signal characteristics are learned for the classification task. Initially, this work proposes the use of a priori knowledge and a correlation measure to select the most discriminative ECoG signal electrodes. Then, signals are processed using spatial filtering and three different types of temporal filtering, followed by a classifier made of stacked autoencoders and a softmax layer to discriminate between ECoG signals from two types of visual stimuli. Results show that the average accuracy obtained is 97% (+/- 0.02%), which is similar to state-of-the-art techniques; nevertheless, this method uses minimal prior physiological knowledge and an automated statistical technique to select the electrodes used to train the classifier. This work also presents a classifier analysis that identifies the signal features most relevant for visual stimuli classification. These features are compared with physiological information such as the brain areas involved. Finally, this research uses Convolutional Neural Networks (CNNs, or ConvNets) to classify five categories of motor-task EEG signals.
Movement-related cortical potentials (MRCPs) are used as a priori information to improve the processing of time-frequency representations of EEG signals. Results show an increase of more than 25% in average accuracy compared to a state-of-the-art method that uses the same database. In addition, an analysis of the CNN filters and feature maps is done to find the most relevant signal characteristics for classifying the five types of motor tasks.
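
    The automated electrode-selection step can be sketched with a simple correlation measure. This is a generic illustration of correlation-based electrode ranking, not the exact statistic used in the work; all names and parameters here are hypothetical.

```python
import numpy as np

def select_electrodes(features, labels, k=8):
    """Rank electrodes by |Pearson correlation| between a per-electrode
    feature (e.g. band power per trial) and the binary class label,
    and keep the k most discriminative electrodes.
    features: (trials, electrodes); labels: (trials,) of 0/1."""
    y = labels - labels.mean()
    x = features - features.mean(axis=0)
    corr = (x * y[:, None]).sum(axis=0) / (
        np.sqrt((x**2).sum(axis=0) * (y**2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
feats = rng.standard_normal((200, 32))
feats[:, 5] += 2.0 * labels            # electrode 5 carries class information
top = select_electrodes(feats, labels, k=4)
print(top[0])  # 5
```

    The retained electrodes' signals would then pass through the spatial and temporal filtering stages before reaching the stacked-autoencoder classifier.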

    Towards a Deeper Understanding of Sleep Stages through their Representation in the Latent Space of Variational Autoencoders

    Artificial neural networks show great success in sleep stage classification, with an accuracy comparable to human scoring. While their ability to learn from labelled electroencephalography (EEG) signals is widely researched, the underlying learning processes remain unexplored. Variational autoencoders can capture the underlying meaning of data by encoding it into a low-dimensional space. Regularizing this space furthermore enables the generation of realistic representations of data from latent space samples. We aimed to show that this model is able to generate realistic sleep EEG. In addition, the generated sequences from different areas of the latent space are shown to have inherent meaning. The current results show the potential of variational autoencoders in understanding sleep EEG data from the perspective of unsupervised machine learning.
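
    Two standard building blocks underpin the latent-space regularization described above: the reparameterization trick and the KL-divergence penalty toward a standard normal prior. The sketch below (plain NumPy, names illustrative) shows both; it is generic VAE machinery, not the paper's specific architecture.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    so latent samples stay differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)): the term that regularizes the latent
    space so samples drawn from the prior decode to realistic data."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(8), np.zeros(8)     # encoder already matches the prior
print(kl_to_standard_normal(mu, log_var))  # 0.0
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (8,)
```

    Because the KL term pulls the posterior toward N(0, I), decoding points sampled from different regions of this prior is what lets the model generate the sleep EEG sequences discussed above.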

    Supervised and unsupervised training of deep autoencoder

    Fall 2017. Includes bibliographical references. Deep learning has proven to be a very useful approach to learning complex data. Recent research in the fields of speech recognition, visual object recognition, and natural language processing shows that deep generative models, which contain many layers of latent features, can learn complex data very efficiently. An autoencoder neural network with multiple layers can be used as a deep network to learn complex patterns in data. As training a multi-layer neural network is time consuming, a pre-training step is employed to initialize the weights of the deep network and speed up training. In the pre-training step, each layer is trained individually and the output of each layer is wired to the input of the successive layer. After pre-training, all the layers are stacked together to form the deep network, and then post-training, also known as fine-tuning, is done on the whole network to further improve the solution. This way of training a deep network is known as stacked autoencoding, and the resulting architecture is known as a stacked autoencoder. It is a very useful tool for classification as well as dimensionality reduction. In this research we propose two new approaches to pre-train a deep autoencoder. We also propose a new supervised learning algorithm, called Centroid-encoding, which shows promising results in low-dimensional embedding and classification. We use EEG data, gene expression data, and MNIST handwritten digit data to demonstrate the usefulness of our proposed methods.
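
    The layer-wise pre-training scheme described above can be sketched with tiny tied-weight linear autoencoders: each layer is trained on the previous layer's codes, and the layers would then be stacked and fine-tuned end-to-end. This is a toy stand-in under simplifying assumptions (linear units, full-batch gradient descent), not the thesis's actual procedure.

```python
import numpy as np

def pretrain_layer(X, hidden, steps=200, lr=0.01, seed=0):
    """Train one tied-weight linear autoencoder layer on X by
    full-batch gradient descent on the reconstruction error, and
    return (encoder weights, encoded data, final loss)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    W = 0.1 * rng.standard_normal((X.shape[1], hidden))
    for _ in range(steps):
        E = X @ W @ W.T - X                               # reconstruction error
        W -= lr * (2.0 / n) * (X.T @ E @ W + E.T @ X @ W)  # gradient step
    return W, X @ W, float((E**2).mean())

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 16))

# Greedy pre-training: layer 2 is trained on layer 1's codes.
W1, H1, loss1 = pretrain_layer(X, hidden=8)
W2, H2, loss2 = pretrain_layer(H1, hidden=4, seed=1)

# After pre-training, the layers would be stacked into one network and
# fine-tuned ("post-trained") end-to-end, which is not shown here.
print(H2.shape)  # (100, 4)
```

    The per-layer training initializes the stacked network's weights near a good solution, which is the speed-up the pre-training step is meant to provide.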

    Performance Analysis of Deep-Learning and Explainable AI Techniques for Detecting and Predicting Epileptic Seizures

    Epilepsy is one of the most common neurological diseases globally. Notably, people in low- to middle-income nations often cannot get proper epilepsy treatment due to the cost and availability of medical infrastructure. The risk of sudden unexpected death in epilepsy is considerably high: medical statistics reveal that people with epilepsy die more prematurely than those without the disease. Early and accurate diagnosis is challenging due to complex disease patterns and the need for time-sensitive medical responses. Even though numerous machine learning and advanced deep learning techniques have been employed for seizure stage classification and prediction, understanding the causes behind their decisions is difficult, which is termed the black-box problem. Hence, doctors and patients are confronted with black-box decision-making when initiating appropriate treatment and understanding disease patterns, respectively. Owing to the scarcity of epileptic Electroencephalography (EEG) data, training deep learning models with diversified epilepsy knowledge remains critical. Explainable Artificial Intelligence (AI) has become a potential solution for providing explanations and interpreting the results of learning models. By applying explainable AI, it becomes possible to examine which features influence the decision that a recorded EEG signal is epileptic or non-epileptic. This paper reviews the various deep learning and explainable AI techniques used for detecting and predicting epileptic seizures using EEG data, and provides a comparative analysis of the different techniques based on their performance.
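
    As a concrete example of the kind of feature attribution explainable AI provides, the sketch below computes permutation importance: the drop in accuracy when each input feature is shuffled. It is a generic, model-agnostic illustration on synthetic data, not one of the techniques surveyed in the review.

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Model-agnostic attribution: for each feature, shuffle its
    column and record the drop in accuracy. Large drops mark the
    features the model's decision actually depends on."""
    base = (model(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - (model(Xp) == y).mean())
    return np.array(drops)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = rng.standard_normal((500, 5))
X[:, 2] += 3.0 * y                              # feature 2 encodes the label
model = lambda X: (X[:, 2] > 1.5).astype(int)   # toy "classifier"
drops = permutation_importance(model, X, y, rng)
print(int(np.argmax(drops)))  # 2
```

    An attribution like this is what lets clinicians check whether a seizure detector is reacting to physiologically meaningful EEG features rather than artifacts.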