
    Deep learning in classifying depth of anesthesia (DoA)

    This study is, to our knowledge, one of the first to apply deep learning to classify depth of anesthesia (DoA) levels based solely on the raw EEG signal from a single channel (electrode), recorded from many subjects under full anesthesia. The application of deep neural networks to detect anesthesia levels from the electroencephalogram (EEG) is a relatively new field and has not been addressed as extensively as other application areas. A distinctive aspect of the study is that no pre-processing is applied at all: rather than filtering or otherwise conditioning the EEG signal, as is usually done, the signal is accepted in its raw form. The study is also notable for its development tool, Deeplearning4j (DL4J), a Java-based platform tailored for deep neural network learning that has seldom been used in this area. Accuracies of up to 97% in detecting two levels of anesthesia are reported.
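
    The abstract names DL4J but gives no architecture. The following is a minimal sketch of the general idea, a small 1D CNN fed raw single-channel EEG windows and emitting two anesthesia-level classes; the window length (assumed 4 s at 128 Hz), layer sizes, and use of PyTorch are illustrative assumptions, not the authors' DL4J network.

```python
# Minimal sketch: a small 1D CNN mapping raw single-channel EEG windows
# to two anesthesia-level classes. Architecture and window length are
# illustrative assumptions, not the authors' DL4J network.
import torch
import torch.nn as nn

class RawEEGClassifier(nn.Module):
    def __init__(self, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # raw signal in, no pre-filtering
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):                    # x: (batch, 1, n_samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = RawEEGClassifier()
logits = model(torch.randn(8, 1, 512))       # 8 raw EEG windows
print(logits.shape)                          # torch.Size([8, 2])
```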

    Motor imagery task classification using transformation based features

    This paper proposes a feature extraction method named LP QR, based on the decomposition of the LPC filter impulse response matrix of the signal of interest. The method is inspired by LP SVD and is tested in the context of motor imagery electroencephalogram (EEG). The extracted features are classified and benchmarked against features extracted with the LP SVD method. The two methods are also compared in terms of required execution time, which further highlights their respective merits and demerits. The paper also closely examines the contribution of individual EEG channels to these two information extraction algorithms, presenting a detailed analysis of the role of EEG channels with respect to the nature of the extracted information. The study is conducted on the BCI Competition IIIa database of four motor imagery movements. The results indicate that the proposed method is the better choice when simplicity is demanded. The investigation into the role of EEG channels reveals that the level of contribution of each channel can be quite dissimilar across feature extraction algorithms.
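
    A hedged sketch of an LP QR-style pipeline follows: estimate LPC coefficients, build a truncated impulse-response matrix of the LPC synthesis filter, and take features from its QR decomposition. The matrix construction (shifted responses) and feature choice (|diag(R)|) are assumptions; the paper's exact formulation may differ.

```python
# Hedged sketch of an LP QR-style feature pipeline. The shifted-response
# matrix and |diag(R)| features are assumptions for illustration.
import numpy as np
from scipy.linalg import solve_toeplitz, qr
from scipy.signal import lfilter

def lpc(x, order):
    """LPC coefficients via the autocorrelation (Yule-Walker) method."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz(r[:-1], r[1:])         # predictor coefficients
    return np.concatenate(([1.0], -a))        # denominator [1, -a1, ..., -ap]

def lp_qr_features(x, order=10, n=64):
    a = lpc(x, order)
    imp = np.zeros(n)
    imp[0] = 1.0
    h = lfilter([1.0], a, imp)                # impulse response, length n
    # assumed construction: rows are zero-padded shifts of the response
    H = np.array([np.concatenate((np.zeros(k), h[:n - k])) for k in range(order)])
    _, R = qr(H.T, mode="economic")
    return np.abs(np.diag(R))                 # one feature per shift

feats = lp_qr_features(np.random.randn(1000))
print(feats.shape)                            # (10,)
```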

    On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps

    Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts introduced during data acquisition contaminate these signals and complicate their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research has been devoted to understanding the minimum dimension of such a latent space that still allows meaningful input reconstruction. Here, person-specific convolutional autoencoders are designed by manipulating the size of their latent space. A sliding-window technique with overlap is employed to segment the signal into windows of varying sizes, and five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed by its input reconstruction capacity and its classification utility. Findings indicate that a latent dimension as small as 25% of the size of the topographic maps achieves maximum reconstruction capacity and maximizes classification accuracy, provided the window length is at least 1 s with a shift of 125 ms at a 128 Hz sampling rate. This research contributes an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders.
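
    A sketch of such a person-specific convolutional autoencoder with a tunable latent fraction is given below; the 32x32 map resolution, five band maps as input channels, and layer sizes are assumptions for illustration, not the paper's exact design.

```python
# Illustrative convolutional autoencoder whose latent size is a tunable
# fraction of one topographic map (e.g. 25% of 32x32 = 256 units).
import torch
import torch.nn as nn

class HeadMapAE(nn.Module):
    def __init__(self, side=32, bands=5, latent_ratio=0.25):
        super().__init__()
        latent = int(latent_ratio * side * side)      # fraction of one map's pixels
        self.enc = nn.Sequential(
            nn.Conv2d(bands, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),     # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * (side // 4) ** 2, latent),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * (side // 4) ** 2),
            nn.Unflatten(1, (32, side // 4, side // 4)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),       # 8 -> 16
            nn.ConvTranspose2d(16, bands, 2, stride=2),               # 16 -> 32
        )

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z          # reconstruction and latent code

ae = HeadMapAE()
xhat, z = ae(torch.randn(4, 5, 32, 32))
print(z.shape, xhat.shape)             # torch.Size([4, 256]) torch.Size([4, 5, 32, 32])
```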

    RFNet: Riemannian Fusion Network for EEG-based Brain-Computer Interfaces

    This paper presents the Riemannian Fusion Network (RFNet), a novel deep neural architecture for learning spatial and temporal information from the electroencephalogram (EEG) for a range of EEG-based Brain Computer Interface (BCI) tasks and applications. The spatial information relies on Spatial Covariance Matrices (SCMs) of multi-channel EEG, which, being symmetric and positive definite, form a Riemannian manifold. We exploit a Riemannian approach to map this spatial information onto feature vectors in Euclidean space. The temporal information, characterized by features based on differential entropy and the logarithm of the power spectral density, is extracted from successive windows through time. Our network then learns the temporal information with a deep long short-term memory network equipped with a soft attention mechanism, whose output serves as the temporal feature vector. To fuse spatial and temporal information effectively, we use a fusion strategy that learns attention weights applied to embedding-specific features for decision making. We evaluate the proposed framework on four public datasets from three popular fields of BCI, namely emotion recognition, vigilance estimation, and motor imagery classification, covering binary classification, multi-class classification, and regression tasks. RFNet approaches the state of the art on one dataset (SEED) and outperforms other methods on the other three (SEED-VIG, BCI-IV 2A, and BCI-IV 2B), setting new state-of-the-art values and demonstrating the robustness of our framework in EEG representation learning.
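
    The Riemannian step can be illustrated as follows: SCMs are mapped to Euclidean feature vectors through the tangent space at a reference point. The log-Euclidean mean reference and upper-triangle vectorization below are assumptions for illustration; RFNet's exact mapping is not specified in the abstract.

```python
# Hedged sketch: SPD spatial covariance matrices -> Euclidean vectors via
# tangent-space mapping. Reference point choice is an assumption.
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def scm(trial):                        # trial: (channels, samples)
    X = trial - trial.mean(axis=1, keepdims=True)
    return X @ X.T / X.shape[1]

def tangent_features(covs):
    # log-Euclidean mean as the reference point (an assumption)
    ref = expm(np.mean([logm(C) for C in covs], axis=0))
    iref = inv(np.real(sqrtm(ref)))
    feats = []
    for C in covs:
        S = np.real(logm(iref @ C @ iref))   # whiten, then matrix log
        feats.append(S[np.triu_indices_from(S)])  # upper triangle as vector
    return np.array(feats)

trials = np.random.randn(10, 8, 256)                 # 10 trials, 8 channels
covs = [scm(t) + 1e-6 * np.eye(8) for t in trials]   # regularize for SPD
print(tangent_features(covs).shape)                  # (10, 36)
```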

    Portable Brain Computer Interface (BCI) in the Intensive Care Unit (ICU)

    Steady State Visual Evoked Potentials (SSVEPs) have been the most commonly utilized Brain Computer Interface (BCI) modality due to their relatively high signal-to-noise ratio, high information transfer rates, and minimal training prerequisites. To date, Canonical Correlation Analysis (CCA) and its extensions have been widely used for SSVEP target frequency identification. However, reliable and robust SSVEP identification remains a challenge, particularly for portable BCI systems operating in an Intensive Care Unit (ICU) filled with various sources of noise. I therefore propose a partition-based feature extraction method that partitions the score spaces of CCA and Power Spectral Density Analysis (PSDA) in three cases, extracts efficient descriptors from each partition, and concatenates the extracted measures to generate more discriminative fusion spaces. Moreover, I investigate transforming the fusion spaces to lower dimensions using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, to validate the proposed method, I compare the performance of the partition-based feature extraction and score space fusion method to a well-established SSVEP identification method based on Multivariate Linear Regression (MLR). The experimental results show that the proposed method enhances the identification performance of the CCA-based BCI system from 63% to 78%. The identification performance is further improved to 98% after the discriminative transformation with LDA, outperforming MLR, which achieved an average overall identification accuracy of 86%. The proposed method is therefore a promising approach for implementing and operating BCI systems in the ICU.
    Master of Science thesis, Computer and Information Science, College of Engineering & Computer Science, University of Michigan-Dearborn. https://deepblue.lib.umich.edu/bitstream/2027.42/143184/1/Portable Brain Computer Interface (BCI) in the Intensive Care Unit (ICU) (1).pdf
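
    For context, the standard CCA scoring step that the thesis builds on can be sketched as follows: the stimulus frequency whose sinusoidal reference set yields the highest canonical correlation with the EEG window is the predicted target. The stimulus frequencies, harmonic count, and window size below are assumptions, and the thesis's partition-based descriptors are not reproduced.

```python
# Standard CCA-based SSVEP target identification, shown for context only.
import numpy as np
from sklearn.cross_decomposition import CCA

def reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference set for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)])

def cca_score(eeg, ref):               # eeg: (samples, channels)
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # first canonical correlation

fs, n = 256, 512
eeg = np.random.randn(n, 8)            # one window of 8-channel EEG (placeholder)
stim_freqs = [8.0, 10.0, 12.0, 15.0]   # assumed stimulus set
scores = [cca_score(eeg, reference(f, fs, n)) for f in stim_freqs]
print(stim_freqs[int(np.argmax(scores))])        # predicted target frequency
```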

    On the discriminative properties of L1-norm-based principal component analysis

    L1-norm-based principal component analysis (PCA) is an increasingly popular technique for multivariate data analysis. The intuition is that, along the directions in which a data cloud extends through space, the projections of the points should have large variance. This criterion is very effective, but it has the drawback that the variance is not a robust statistic: if the data are contaminated with outliers, the variance estimates will carry a large error. As a remedy, it has been proposed to replace the variance with the average absolute value of the projections. The resulting technique is what has been called L1-norm PCA, or L1-PCA, which yields very robust algorithms. This thesis demonstrates a link between L1-PCA and the Fukunaga-Koontz transform (FKT). In its original formulation, L1-PCA projects the data so as to maximize, on average, the absolute value of the projections, yielding results similar to traditional PCA. However, keeping the absolute value as the objective function but replacing maximization with minimization, L1-PCA yields a result equivalent to that obtained with the FKT. The practical importance of this result is that the standard FKT is a supervised technique: estimating the parameters of the transformation requires a correctly labeled training set for each class. In contrast, minimizing the absolute value can be carried out in a fully unsupervised manner, making training data unnecessary. This offers a completely novel alternative for computing the FKT and opens new lines of research in machine learning.
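
    The maximization form of L1-PCA described above can be sketched with the well-known fixed-point iteration (in the style of Kwak, 2008): find the direction w maximizing the sum of |w^T x_i|. This is illustrative only; the thesis's unsupervised FKT-via-minimization algorithm is not reproduced here.

```python
# Fixed-point iteration for one L1-PCA direction (maximization form).
import numpy as np

def l1_pca_direction(X, n_iter=100, seed=0):
    """X: (n_samples, n_features), assumed centered."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)             # flip samples onto w's half-space
        w_new = X.T @ s                # weighted sum of flipped samples
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

X = np.random.default_rng(1).standard_normal((200, 5))
X -= X.mean(axis=0)
w = l1_pca_direction(X)
print(np.abs(X @ w).mean())            # average |projection| being maximized
```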

    Signal processing for automated EEG quality assessment

    An automated signal quality assessment method is proposed for EEG signals, which will help in testing new BCI algorithms by ensuring that testing is performed on high-quality signals only. This research includes the development of a novel feature extraction technique and a new clustering algorithm for EEG signals.
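
    The abstract gives no algorithmic detail, so the following is only a generic illustration of automated EEG quality screening: a few simple quality features per window (amplitude spread, peak amplitude, line-noise fraction) fed to k-means to separate clean from noisy segments. The thesis's actual features and clustering algorithm are novel and not reproduced here.

```python
# Generic illustration of window-level EEG quality screening; the
# features and k-means choice are assumptions, not the thesis's method.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

def quality_features(win, fs=256):
    f, p = welch(win, fs=fs, nperseg=min(256, len(win)))
    line = p[(f > 45) & (f < 55)].sum() / p.sum()   # 50 Hz line-noise fraction
    return [win.std(), np.abs(win).max(), line]

rng = np.random.default_rng(0)
# synthetic windows: every other window has 8x amplitude ("noisy")
wins = [rng.standard_normal(512) * (8 if i % 2 else 1) for i in range(40)]
F = np.array([quality_features(w) for w in wins])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F)
print(labels)   # two clusters, interpreted as high- vs low-quality windows
```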

    Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task.
    Funding: State Key Laboratories of China (Specialized Research Fund for the Doctoral Program of Higher Education of China 20130172120032); Guangdong Natural Science Foundation (S201301001344); National High-Tech R&D (863) Program of China (Grant 2012AA011601); National Natural Science Foundation of China (Grant 91120305); National Institutes of Health (U.S.) (Grant DP1-OD003646).
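
    For context, classical CSP, which P-CSP generalizes, reduces to a generalized eigendecomposition of the two class-conditional covariance matrices; a brief sketch follows, with synthetic placeholder data. The P-CSP inference itself (MAP and variational Bayes) is not reproduced here.

```python
# Classical CSP via a generalized eigenproblem, for context only.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: (n_trials, channels, samples); returns (2*n_filters, channels)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)                      # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks].T

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8, 256))             # class A: 20 trials, 8 channels
B = rng.standard_normal((20, 8, 256)) * 1.5       # class B with larger variance
W = csp_filters(A, B)
feats = np.log(np.var(W @ A[0], axis=1))          # standard log-variance features
print(W.shape, feats.shape)                       # (4, 8) (4,)
```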