690 research outputs found

    Multimodal Classification with Deep Convolutional-Recurrent Neural Networks for Electroencephalography

    Electroencephalography (EEG) has become the most significant input signal for brain-computer interface (BCI) based systems. However, it is difficult to obtain satisfactory classification accuracy because traditional methods cannot fully exploit multimodal information. Herein, we propose a novel approach to modeling cognitive events from EEG data by reducing the problem to video classification, which is designed to preserve the multimodal information of EEG. In addition, optical flow is introduced to represent the variation of the EEG over time. We train a deep neural network (DNN) combining a convolutional neural network (CNN) and a recurrent neural network (RNN) for the EEG classification task, using the EEG video and optical flow as inputs. The experiments demonstrate that our approach offers greater robustness and accuracy in EEG classification tasks. Based on this approach, we designed a mixed BCI-based rehabilitation support system to help stroke patients perform some basic operations. Comment: 10 pages, 6 figures.
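The EEG-to-video reduction can be sketched in a few lines. The grid mapping and frame differencing below are simplified stand-ins (the paper uses true optical flow and a learned CNN+RNN), and the 64-channel/8x8 layout and 250 Hz rate are assumptions, not taken from the paper:

```python
import numpy as np

def eeg_to_video(eeg, grid_shape=(8, 8)):
    """Map multichannel EEG (time, channels) onto a 2-D electrode grid,
    producing a frame sequence (time, rows, cols). Assumes the channels
    are already ordered to match the grid layout (a simplification)."""
    t, c = eeg.shape
    assert c == grid_shape[0] * grid_shape[1]
    return eeg.reshape(t, *grid_shape)

def frame_motion(video):
    """Crude stand-in for optical flow: temporal frame differences,
    capturing how the spatial activation pattern changes over time."""
    return np.diff(video, axis=0)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((250, 64))   # 1 s at 250 Hz, 64 channels (assumed)
video = eeg_to_video(eeg)              # (250, 8, 8) frame sequence
motion = frame_motion(video)           # (249, 8, 8) "flow" frames
```

The video frames would feed the CNN and the motion frames would supply the temporal stream, with the RNN consuming the per-frame CNN features.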

    Attention-based Transfer Learning for Brain-computer Interface

    Different functional areas of the human brain play different roles in brain activity, a fact that has not received sufficient research attention in the brain-computer interface (BCI) field. This paper presents a new approach to electroencephalography (EEG) classification that applies attention-based transfer learning. Our approach considers the importance of different brain functional areas to improve the accuracy of EEG classification, and provides an additional way to automatically identify the brain functional areas associated with new activities without the involvement of a medical professional. We demonstrate empirically that our approach outperforms state-of-the-art approaches on the task of EEG classification, and the visualization results indicate that our approach can detect the brain functional areas related to a given task. Comment: In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019, 12-17 May 2019, Brighton, UK.
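Area-level attention can be sketched as a softmax weighting over per-area feature vectors. The channel grouping, feature size, and scores below are hypothetical; in the paper both the attention scores and the features are learned end to end:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical grouping of EEG channels into brain functional areas.
areas = {"frontal": [0, 1, 2], "central": [3, 4], "occipital": [5, 6, 7]}

def attend_over_areas(features, scores):
    """Weight the per-area feature vectors by softmax attention scores
    and sum them into a single representation (a minimal sketch of
    area-level attention)."""
    w = softmax(scores)                                     # one weight per area
    pooled = np.array([features[a].mean(axis=0) for a in areas])
    return w @ pooled, w

rng = np.random.default_rng(1)
feats = {a: rng.standard_normal((len(ch), 16)) for a, ch in areas.items()}
rep, w = attend_over_areas(feats, np.array([2.0, 0.5, 0.1]))
```

Visualizing the learned weights `w` is what lets the method point at task-relevant functional areas.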

    A Survey on Deep Learning based Brain Computer Interface: Recent Advances and New Frontiers

    Brain-Computer Interface (BCI) bridges the human neural world and the outer physical world by decoding individuals' brain signals into commands recognizable by computer devices. Deep learning has lifted the performance of brain-computer interface systems significantly in recent years. In this article, we systematically investigate brain signal types for BCI and the related deep learning concepts for brain signal analysis. We then present a comprehensive survey of deep learning techniques used for BCI, summarizing over 230 contributions, most published in the past five years. Finally, we discuss the application areas, open challenges, and future directions for deep learning-based BCI. Comment: Summarizes more than 230 papers, most published in the last five years.

    Deep Learning in Bioinformatics

    In the era of big data, the transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, the application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by bioinformatics domain (i.e., omics, biomedical imaging, biomedical signal processing) and by deep learning architecture (i.e., deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. Comment: Accepted for Briefings in Bioinformatics (18 June 2016).

    Deep learning with convolutional neural networks for EEG decoding and visualization

    PLEASE READ AND CITE THE REVISED VERSION at Human Brain Mapping: http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full Code available here: https://github.com/robintibor/braindecode Comment: A revised manuscript (with the new title) has been accepted at Human Brain Mapping; see http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full

    Deep Learning of Human Perception in Audio Event Classification

    In this paper, we introduce our recent studies on human perception in audio event classification using different deep learning models. In particular, the pre-trained VGGish model is used as a feature extractor to process the audio data, and a DenseNet is trained on and used as a feature extractor for our electroencephalography (EEG) data. The correlation between audio stimuli and EEG is learned in a shared space. In the experiments, we record the brain activity (EEG signals) of several subjects while they listen to music events from 8 audio categories selected from Google AudioSet, using a 16-channel EEG headset with active electrodes. Our experimental results demonstrate that i) audio event classification can be improved by exploiting the power of human perception, and ii) the correlation between audio stimuli and EEG can be learned to complement audio event understanding.
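The shared-space idea can be illustrated with fixed random projections and a cosine score. The 128-D size matches standard VGGish embeddings, but the 64-D EEG feature size and 32-D shared space are assumptions, and the paper learns these projections rather than drawing them at random:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
# VGGish audio embeddings are 128-D; the EEG feature size (64-D) and
# shared-space size (32-D) are hypothetical here.
W_audio = rng.standard_normal((128, 32)) / np.sqrt(128)
W_eeg = rng.standard_normal((64, 32)) / np.sqrt(64)

audio_emb = rng.standard_normal(128)   # stand-in VGGish feature
eeg_emb = rng.standard_normal(64)      # stand-in DenseNet EEG feature

# Project both modalities into the shared space and compare.
score = cosine(audio_emb @ W_audio, eeg_emb @ W_eeg)
```

Training would adjust `W_audio` and `W_eeg` so that matched audio/EEG pairs score higher than mismatched ones.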

    Universal Joint Feature Extraction for P300 EEG Classification using Multi-task Autoencoder

    The process of recording electroencephalography (EEG) signals is onerous and requires massive storage to hold signals sampled at a usable frequency. In this work, we propose the Event-Related Potential Encoder Network (ERPENet), a multi-task autoencoder-based model that can be applied to any ERP-related task. The strength of ERPENet lies in its ability to handle various kinds of ERP datasets and its robustness across multiple recording setups, enabling joint training across datasets. ERPENet combines convolutional neural networks (CNNs) and long short-term memory (LSTM) in an autoencoder setup that simultaneously compresses the input EEG signal and extracts the related P300 features into a latent vector; the process generating the latent vector can thus be viewed as universal joint feature extraction. The network also includes a classification head for attended vs. unattended event classification as an auxiliary task. We experimented on six different P300 datasets. The results show that the latent vector exhibits better compression capability than the previous state-of-the-art semi-supervised autoencoder model. For attended vs. unattended event classification, the pre-trained weights are adopted as initial weights and tested on unseen P300 datasets to evaluate the adaptability of the model, which shortens training compared to random Xavier weight initialization. At a compression rate of 6.84, the classification accuracy outperforms conventional P300 classification models (XdawnLDA, DeepConvNet, and EEGNet), achieving 79.37% to 88.52% accuracy depending on the dataset.
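The reported compression rate of 6.84 can be made concrete with a little arithmetic; the epoch dimensions below are hypothetical, not taken from the paper:

```python
# Illustrating a compression rate of 6.84: the autoencoder squeezes each
# EEG epoch into a latent vector ~6.84x smaller than the raw input.
# The epoch shape here is an assumption for illustration only.
n_channels, n_samples = 8, 205          # one P300 epoch (assumed shape)
input_size = n_channels * n_samples     # 1640 values per epoch
latent_size = round(input_size / 6.84)  # ~240-D latent vector

compression_rate = input_size / latent_size
```

The classifier head then operates on the ~240-D latent vector instead of the full epoch.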

    Sleep Arousal Detection from Polysomnography using the Scattering Transform and Recurrent Neural Networks

    Sleep disorders are implicated in a growing number of health problems. In this paper, we present a signal-processing/machine-learning approach to detecting arousals in the multi-channel polysomnographic recordings of the PhysioNet/CinC Challenge 2018 dataset. Methods: Our network architecture consists of two components. Inputs were presented to a scattering transform (ST) representation layer, which fed a recurrent neural network for sequence learning using three layers of long short-term memory (LSTM). The STs were calculated for each signal with downsampling parameters chosen to give approximately 1 s time resolution, resulting in an eighteen-fold data reduction; the LSTM layers then operated at this downsampled rate. Results: The proposed approach detected arousal regions on the 10% random sample of the hidden test set with an AUROC of 88.0% and an AUPRC of 42.1%. Comment: Computing in Cardiology 2018, 4 pages and 5 figures.
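A first-order scattering coefficient with the eighteen-fold reduction described above can be sketched as band-pass filter, modulus, then low-pass averaging. The toy kernel, signal length, and sampling rate below are assumptions; a real scattering transform uses a wavelet filter bank:

```python
import numpy as np

def first_order_scatter(x, kernel, pool=18):
    """Minimal sketch of a first-order scattering coefficient: band-pass
    filter, take the modulus, then low-pass via block averaging with an
    18-fold downsampling (matching the paper's data reduction)."""
    band = np.convolve(x, kernel, mode="same")          # wavelet-like band-pass
    envelope = np.abs(band)                             # modulus nonlinearity
    n = (len(envelope) // pool) * pool
    return envelope[:n].reshape(-1, pool).mean(axis=1)  # low-pass + subsample

rng = np.random.default_rng(3)
signal = rng.standard_normal(3600)        # e.g. 200 s at 18 Hz (assumed)
kernel = np.array([1.0, 0.0, -1.0])       # crude band-pass stand-in
coeffs = first_order_scatter(signal, kernel)  # 200 coefficients
```

The LSTM layers would then consume these downsampled coefficient sequences rather than the raw waveform.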

    Convolutional Neural Network Approach for EEG-based Emotion Recognition using Brain Connectivity and its Spatial Information

    Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services. However, there is still much room for improvement, particularly in recognition accuracy. In this paper, we propose a novel deep learning approach using convolutional neural networks (CNNs) for EEG-based emotion recognition. In particular, we employ brain connectivity features, which have not been used with deep learning models in previous studies and which can account for synchronous activations of different brain regions. In addition, we develop a method to effectively capture the asymmetric brain activity patterns that are important for emotion recognition. Experimental results confirm the effectiveness of our approach. Comment: Accepted for the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018).
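One simple instance of such connectivity features is the channel-by-channel Pearson correlation matrix, whose rows and columns preserve spatial channel structure for a CNN. The measure and the 32-channel setup below are assumptions; the paper may use other connectivity measures:

```python
import numpy as np

def connectivity_matrix(eeg):
    """Channel-by-channel Pearson correlation as a connectivity feature.
    Input shape: (channels, time); output: (channels, channels),
    symmetric with ones on the diagonal."""
    return np.corrcoef(eeg)

rng = np.random.default_rng(4)
eeg = rng.standard_normal((32, 512))   # 32 channels, 512 samples (assumed)
conn = connectivity_matrix(eeg)        # (32, 32) CNN input "image"
```

The matrix can be fed to a 2-D CNN like an image, so convolutions see which region pairs co-activate.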

    Spectro-Temporal Based Quantification of Brain Functions in Neurological Disorders

    Human brain studies that quantify neural functions using neuroimaging techniques have many applications related to neurological disorders, including characterizing symptoms, identifying biomarkers, and enhancing existing brain-computer interface (BCI) systems. The first major goal of this dissertation is to quantify the neural functions associated with neurological impairments, specifically in amyotrophic lateral sclerosis (ALS), using two neuroimaging modalities, electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), which respectively characterize electrical and hemodynamic neural function. The next major goal is to integrate these modalities using state-of-the-art techniques, including time-frequency decompositions and functional and directional connectivity methods, and to use the quantified neural functions to classify different brain states with leading-edge techniques, including information-theoretic fused feature optimization and deep learning-based automatic feature extraction. In this dissertation, we explored the non-motor neural alterations in ALS patients reflected in simultaneously recorded EEG-fNIRS data, both during task performance and in the resting state. Our results revealed significant neural alterations in ALS patients compared to healthy controls. Moreover, these neural signatures were used to classify data as coming from ALS patients versus healthy controls. For this purpose, we used mutual information-based fused feature optimization for EEG-fNIRS to select the best features from all the extracted neural markers, which considerably improved ALS-versus-control classification performance based on mental workload.
These results support the idea of using complementary features from fused EEG-fNIRS in neuro-clinical studies for the optimized decoding of neural information, thereby improving the performance of relevant applications, including BCIs and neuropathological diagnosis. In addition, we examined our findings in motor imagery classification, another fundamental processing step in applying BCIs for people with neurological disorders, including ALS patients. To do this, we proposed a convolutional neural network-based classification architecture for automatic feature extraction from EEG-fNIRS data, which outperformed conventional classification methods using manually extracted features. These outcomes suggest promising improvements in BCI performance from multimodal EEG-fNIRS and deep learning classifiers with automatic feature extraction, which can be utilized in clinical applications for people with neurological disorders, including ALS patients. These findings can be further developed to automate the optimal quantification of neural functions in neurological disorders, with less dependence on prior knowledge, thereby facilitating BCIs and other clinical applications for patients with neurological disorders.
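The mutual information-based feature scoring can be sketched with a histogram plug-in estimator: score each candidate feature by its MI with the ALS-vs-control label and keep the top-ranked ones. The synthetic features and sample size below are assumptions, and the dissertation's fused optimization over EEG-fNIRS markers is considerably richer:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of the mutual information (in nats)
    between a continuous feature x and a binary label y."""
    cxy, _, _ = np.histogram2d(x, y, bins=(bins, 2))
    pxy = cxy / cxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # feature marginal
    py = pxy.sum(axis=0, keepdims=True)        # label marginal
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 400)                       # 0=control, 1=ALS
informative = labels + 0.3 * rng.standard_normal(400)  # tracks the label
noise = rng.standard_normal(400)                       # unrelated feature

# Rank candidate features by MI with the class label.
scores = [mutual_information(f, labels) for f in (informative, noise)]
```

The informative feature scores well above the noise feature, so an MI-ranked selection would retain it and discard the noise.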