
    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Understanding the cognitive and functional behaviour of the brain through its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records the electrical activity of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, the diagnosis of various neurological and psychiatric disorders, and other applications. Since EEG signals are sensitive to activities other than the brain's own, such as eye blinks, eye movements and head movements, it is not possible to record them without noise. An efficient noise reduction technique is therefore essential for obtaining accurate recordings. Numerous traditional techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations and machine learning methods, have been proposed for reducing noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To this end, Harrell-Davis decile values of the reconstructed signals' signal-to-noise ratio distributions were compared, and the GRU-AE was found to outperform PCA for noise reduction of EEG signals.
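To make the comparison concrete, here is a minimal NumPy sketch of the PCA baseline and the SNR metric described above. This is an illustration only, not the paper's setup: the GRU-AE itself requires a deep learning framework and is omitted, and the sinusoidal "EEG" channels, noise level and retained component count are all assumptions.

```python
import numpy as np

def pca_denoise(x, n_components):
    """Reconstruct multichannel signals from their top principal components."""
    x_mean = x.mean(axis=1, keepdims=True)          # center each channel over time
    u, s, vt = np.linalg.svd(x - x_mean, full_matrices=False)
    s = s.copy()
    s[n_components:] = 0.0                          # drop noise-dominated components
    return (u * s) @ vt + x_mean

def snr_db(clean, estimate):
    """Signal-to-noise ratio of a reconstruction, in decibels."""
    noise = clean - estimate
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(noise**2))

# toy example: sinusoidal "EEG" channels plus white noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = np.stack([np.sin(2 * np.pi * f * t) for f in (8, 10, 12, 8, 10, 12)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy, n_components=3)
print(round(snr_db(clean, noisy), 1), round(snr_db(clean, denoised), 1))
```

The paper's evaluation replaces `pca_denoise` with the GRU-AE reconstruction and compares Harrell-Davis deciles of the resulting SNR distributions rather than single values.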

    A novel framework using deep auto-encoders based linear model for data classification

    This paper proposes a novel data classification framework that combines sparse auto-encoders (SAEs) with a post-processing stage consisting of a linear model whose parameters are estimated by the Particle Swarm Optimization (PSO) algorithm. Sensitive, high-level features are extracted by the first auto-encoder, which is wired to a second auto-encoder, followed by a Softmax layer that classifies the features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked and trained in a supervised manner with the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the output of the deep stacked sparse auto-encoder into a value close to the anticipated output. This simple transformation increases the overall classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm estimates the parameters of the linear model in a metaheuristic fashion. The proposed framework is validated on three public datasets and yields promising results compared with the current literature. Furthermore, the framework can be applied to any data classification problem with minor updates, such as altering parameters including the input features, hidden neurons and output classes. Keywords: deep sparse auto-encoders, medical diagnosis, linear model, data classification, PSO algorithm
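The PSO-fitted linear post-processing step can be sketched as follows. This is a bare-bones swarm with textbook inertia and acceleration constants, fitting a scalar model y ≈ w·z + b to a toy target; the constants, particle count and target are assumptions, not the paper's configuration.

```python
import numpy as np

def pso_fit_linear(z, y, n_particles=30, n_iters=200, seed=0):
    """Fit y ~ w*z + b by minimizing mean squared error with a particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, 2))   # each particle is (w, b)
    vel = np.zeros_like(pos)

    def loss(p):
        return np.mean((p[:, :1] * z + p[:, 1:] - y) ** 2, axis=1)

    pbest, pbest_loss = pos.copy(), loss(pos)
    gbest = pbest[np.argmin(pbest_loss)]
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # inertia + cognitive (personal best) + social (global best) terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cur = loss(pos)
        better = cur < pbest_loss
        pbest[better], pbest_loss[better] = pos[better], cur[better]
        gbest = pbest[np.argmin(pbest_loss)]
    return gbest

# toy "anticipated output": a scaled, shifted version of the network output z
z = np.linspace(0.0, 1.0, 50)
y = 2.0 * z + 0.5
w, b = pso_fit_linear(z, y)
```

In the paper, `z` would be the output of the deep stacked sparse auto-encoder and `y` the anticipated class output; the swarm recovers the transform without requiring gradients of the post-processing stage.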

    Adversarial Variational Embedding for Robust Semi-supervised Learning

    Semi-supervised learning is sought for leveraging unlabelled data when labelled data are difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their excellent discriminative representation ability. However, the latent code learned by a traditional VAE is not exclusive (repeatable) for a specific input sample, which limits its classification performance. In particular, the learned latent representation depends on a non-exclusive component that is stochastically sampled from the prior distribution. Moreover, semi-supervised GAN models generate data from a pre-defined distribution (e.g., Gaussian noise) that is independent of the input data distribution, which may obstruct convergence and makes the distribution of the generated data difficult to control. To address these issues, we propose a novel Adversarial Variational Embedding (AVAE) framework for robust and effective semi-supervised learning, leveraging both the advantage of the GAN as a high-quality generative model and of the VAE as a posterior distribution learner. The proposed approach first produces an exclusive latent code via a model we call VAE++ and, meanwhile, provides a meaningful prior distribution for the generator of the GAN. The approach is evaluated on four different real-world applications and outperforms state-of-the-art models, confirming that the combination of VAE++ and GAN provides significant improvements in semi-supervised classification. Comment: 9 pages, accepted by the Research Track in KDD 201
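The non-exclusivity the abstract criticizes comes from the VAE's reparameterization trick: the latent code is sampled as z = μ + σ·ε with ε drawn afresh each time, so two encodings of the same input differ even though μ is deterministic. A small sketch with a toy (hypothetical) encoder:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x):
    """Toy encoder: deterministic mean and log-variance of the latent Gaussian.
    The tanh/abs forms are arbitrary stand-ins for a learned network."""
    mu = np.tanh(0.5 * x)
    log_var = -np.abs(0.1 * x) - 1.0
    return mu, log_var

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = np.array([0.3, -1.2, 0.8])
mu, log_var = encode(x)
z1 = sample_latent(mu, log_var)
z2 = sample_latent(mu, log_var)
# z1 != z2 for the same input: the sampled code is not exclusive (repeatable),
# whereas mu is a deterministic function of x.
```

AVAE's VAE++ is designed so that the code used for classification is exclusive to the input, while still providing a meaningful prior for the GAN generator.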

    EEG Based Eye State Classification using Deep Belief Network and Stacked AutoEncoder

    A Brain-Computer Interface (BCI) provides an alternative communication interface between the human brain and a computer. Electroencephalogram (EEG) signals are acquired and processed, and machine learning algorithms are then applied to extract useful information. During EEG acquisition, artifacts are induced by involuntary eye movements or eye blinks, with adverse effects on system performance. The aim of this research is to predict eye states from EEG signals using deep learning architectures and to present improved classifier models. Recent studies reflect that deep neural networks are among the state-of-the-art machine learning approaches. The current work therefore presents implementations of a Deep Belief Network (DBN) and Stacked AutoEncoders (SAE) as classifiers with encouraging accuracy. One of the designed SAE models outperforms the DBN and the models presented in existing research, with an error rate of 1.1% on the test set (98.9% accuracy). The findings of this study may contribute towards state-of-the-art performance on the problem of EEG-based eye state classification.
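The inference path of such an SAE classifier is just stacked encoder layers feeding a softmax head. A minimal forward-pass sketch, with untrained random weights in place of the greedily pre-trained ones; the layer sizes (14 inputs, as in the UCI EEG Eye State data, and 2 classes) are assumptions, not the paper's reported architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Illustrative sizes only: 14 input features, two stacked encoder layers,
# 2 output classes (eyes open / eyes closed).
sizes = [14, 10, 6, 2]
params = [(0.1 * rng.standard_normal((n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def predict(x):
    h = x
    for w, b in params[:-1]:    # stacked (greedily pre-trained) encoder layers
        h = sigmoid(h @ w + b)
    w, b = params[-1]           # softmax classification head
    return softmax(h @ w + b)

probs = predict(rng.standard_normal((5, 14)))   # 5 example epochs -> class probabilities
```

In practice each encoder layer would first be trained as an autoencoder, then the whole stack fine-tuned end-to-end with backpropagation.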

    BCI applications based on artificial intelligence oriented to deep learning techniques

    A Brain-Computer Interface (BCI) can decode the brain signals corresponding to the intentions of individuals who have lost neuromuscular connection, re-establishing communication for the control of external devices. To this aim, a BCI acquires brain signals such as Electroencephalography (EEG) or Electrocorticography (ECoG), applies signal processing techniques, and extracts features to train classifiers that provide proper control instructions. BCI development has increased in recent decades, improving performance through different signal processing techniques for feature extraction and artificial intelligence approaches for classification, such as deep learning-oriented classifiers. These can provide more accurate assistive systems and also enable analysis of how signal characteristics are learned for the classification task. Initially, this work proposes the use of a priori knowledge and a correlation measure to select the most discriminative ECoG electrodes. Signals are then processed using spatial filtering and three different types of temporal filtering, followed by a classifier made of stacked autoencoders and a softmax layer to discriminate between ECoG signals from two types of visual stimuli. Results show an average accuracy of 97% (+/- 0.02%), similar to state-of-the-art techniques; nevertheless, this method uses minimal prior physiological knowledge and an automated statistical technique to select the electrodes used to train the classifier. This work also presents a classifier analysis that identifies the signal features most useful for visual stimuli classification, and compares these features with physiological information such as the brain areas involved. Finally, this research uses Convolutional Neural Networks (CNNs, or ConvNets) to classify EEG signals from 5 categories of motor tasks.
Movement-related cortical potentials (MRCPs) are used as a priori information to improve the processing of the time-frequency representation of the EEG signals. Results show an increase of more than 25% in average accuracy compared to a state-of-the-art method that uses the same database. In addition, an analysis of the CNN filters and feature maps is carried out to find the most relevant signal characteristics that can help classify the five types of motor tasks. Doctoral thesis: Doctor en Ingeniería Eléctrica y Electrónica.
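The filter/feature-map analysis mentioned above rests on the basic ConvNet operation: sliding a small kernel over the time-frequency map and recording where it responds. A hand-rolled sketch on a synthetic map; the map shape, the ramp placed at a fixed onset, and the edge-detector kernel are all illustrative assumptions:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D cross-correlation, as in one ConvNet filter."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# toy time-frequency map (frequency bins x time samples) with MRCP-like
# low-frequency energy appearing at time index 16
tf_map = np.zeros((8, 32))
tf_map[:3, 16:] = 1.0
kernel = np.array([[-1.0, 1.0]] * 3)   # 3x2 temporal-edge detector
fmap = conv2d_valid(tf_map, kernel)
onset_col = int(np.argmax(fmap[0]))    # strongest response just before the onset
```

Inspecting which input regions maximize a trained filter's feature map is one common way to relate learned CNN features back to physiological events such as MRCPs.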