261 research outputs found

    Optimal Resource Allocation Using Deep Learning-Based Adaptive Compression for mHealth Applications

    In the last few years, the number of patients with chronic diseases that require constant monitoring has increased rapidly, motivating researchers to develop scalable remote health applications. Nevertheless, transmitting large volumes of real-time data over a dynamic network constrained by bandwidth, end-to-end delay, and transmission energy is an obstacle to efficient data delivery. The problem can be addressed by applying data reduction techniques to the vital signs at the transmitter side and reconstructing the data at the receiver side (i.e., the m-Health center). However, this introduces a new problem: receiving the vital signs at the server side with an acceptable distortion rate (i.e., deformation of the vital signs caused by inefficient data reduction). In this thesis, we integrate efficient data reduction with wireless networking to deliver adaptive compression with acceptable distortion while reacting to wireless network dynamics such as channel fading and user mobility. A Deep Learning (DL) approach was used to implement an adaptive compression technique that compresses and reconstructs vital signs in general, and the Electroencephalogram (EEG) signal in particular, with minimum distortion. A resource allocation framework was then introduced to minimize the transmission energy along with the distortion of the reconstructed signal.
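    The abstract does not specify the compression architecture, but the idea of compressing an EEG window into a short code at the transmitter and reconstructing it at the m-Health center, trading code size against distortion, can be illustrated with a minimal autoencoder sketch. The window length, code size, and layer widths below are assumptions for illustration only, not the thesis's actual design.

```python
import torch
import torch.nn as nn

class EEGCompressor(nn.Module):
    """Toy autoencoder: compress an EEG window to a short code (sent over the
    network) and reconstruct it at the receiver. Sizes are illustrative."""
    def __init__(self, window_len=512, code_len=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window_len, 256), nn.ReLU(),
            nn.Linear(256, code_len),          # transmitted representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_len, 256), nn.ReLU(),
            nn.Linear(256, window_len),        # reconstructed vital sign
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = EEGCompressor()
x = torch.randn(8, 512)                        # batch of 8 EEG windows (synthetic)
x_hat, code = model(x)
distortion = torch.mean((x - x_hat) ** 2)      # distortion term in the energy/distortion trade-off
```

    Shrinking code_len reduces the amount of data to transmit (and hence the transmission energy) at the cost of higher distortion, which is the trade-off the resource allocation framework balances.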

    hvEEGNet: exploiting hierarchical VAEs on EEG data for neuroscience applications

    With the recent success of artificial intelligence in neuroscience, a number of deep learning (DL) models have been proposed for classification, anomaly detection, and pattern recognition tasks in electroencephalography (EEG). EEG is a multi-channel time series that provides information about individual brain activity for diagnostics, neuro-rehabilitation, and other applications (including emotion recognition). Two main issues challenge the existing DL-based modeling methods for EEG: the high variability between subjects and the low signal-to-noise ratio, which makes it difficult to guarantee good quality in the EEG data. In this paper, we propose two variational autoencoder models, namely vEEGNet-ver3 and hvEEGNet, to target the problem of high-fidelity EEG reconstruction. We designed their architectures using the blocks of the well-known EEGNet as the encoder, and proposed a loss function based on dynamic time warping. We tested the models on the public Dataset 2a - BCI Competition IV, where EEG was collected from 9 subjects and 22 channels. hvEEGNet was found to reconstruct the EEG data with very high fidelity, outperforming most previous solutions (including our vEEGNet-ver3), and this result was consistent across all subjects. Interestingly, hvEEGNet made it possible to discover that this popular dataset includes a number of corrupted EEG recordings that might have influenced previous literature results. We also investigated the training behaviour of our models and related it to the quality and the size of the input EEG dataset, aiming to open a new research debate on this relationship. In the future, hvEEGNet could be used as an anomaly (e.g., artefact) detector in large EEG datasets to support domain experts, and the latent representations it provides could be used in other classification problems and in EEG data generation.
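    The loss function mentioned above is based on dynamic time warping (DTW). As a rough, non-differentiable illustration of that metric (the actual hvEEGNet loss is not reproduced here), the sketch below computes the classic DTW distance between an original and a reconstructed channel; the signals are synthetic placeholders.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest alignment path ending at (i, j)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# compare a reconstructed channel against the original one (synthetic data)
orig = np.sin(np.linspace(0, 10, 250))
recon = np.sin(np.linspace(0, 10, 250) + 0.1)
print(dtw_distance(orig, recon))
```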

    On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps

    Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts introduced during the data acquisition phase contaminate these signals, complicating their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that still allows meaningful input reconstruction. Person-specific convolutional autoencoders are designed by manipulating the size of their latent space. An overlapping sliding-window technique is employed to segment the signals into windows of varying size. Five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed in terms of input reconstruction capacity and classification utility. Findings indicate that a latent space of 25% of the size of the topographic maps is the minimum required to achieve maximum reconstruction capacity and maximize classification accuracy, which is obtained with a window length of at least 1 s and a shift of 125 ms at a 128 Hz sampling rate. This research contributes to the body of knowledge with an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders.
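    For concreteness, a minimal sketch of the overlapping sliding-window segmentation reported above (1 s windows shifted by 125 ms at 128 Hz) is shown below; the channel count and recording length are placeholder assumptions.

```python
import numpy as np

def sliding_windows(eeg, fs=128, win_s=1.0, shift_s=0.125):
    """Segment a (channels x samples) EEG array into overlapping windows:
    1 s windows (128 samples) shifted by 125 ms (16 samples) at 128 Hz."""
    win = int(win_s * fs)
    shift = int(shift_s * fs)
    starts = range(0, eeg.shape[1] - win + 1, shift)
    return np.stack([eeg[:, s:s + win] for s in starts])   # (n_windows, channels, win)

eeg = np.random.randn(32, 128 * 60)        # 32 channels, 60 s of data (assumed shape)
windows = sliding_windows(eeg)
print(windows.shape)                       # (473, 32, 128)
```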

    Examining the Size of the Latent Space of Convolutional Variational Autoencoders Trained With Spectral Topographic Maps of EEG Frequency Bands

    Electroencephalography (EEG) is a technique for recording brain electrical potentials using electrodes placed on the scalp [1]. It is well known that EEG signals contain essential information in the frequency, temporal, and spatial domains. For example, some studies have converted EEG signals into topographic power head maps to preserve spatial information [2]. Others have produced spectral topographic head maps of different EEG bands to both preserve information in the spatial domain and take advantage of the information in the frequency domain [3]. However, topographic maps contain highly interpolated data between electrode locations and are often redundant. For this reason, convolutional neural networks are often used to reduce their dimensionality and learn relevant features automatically [4].
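    To make the idea of a spectral topographic head map concrete, the sketch below estimates per-electrode band power with Welch's method and interpolates it onto a 2-D head grid. The electrode positions, grid size, and interpolation method are illustrative assumptions, not the exact pipeline of the cited works.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import griddata

def band_power(eeg, fs, band):
    """Mean power spectral density of each electrode within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)       # psd shape: (channels, n_freqs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)                 # one value per electrode

def topographic_map(values, positions, grid_size=32):
    """Interpolate per-electrode values onto a 2-D grid (a crude head map)."""
    xs, ys = np.mgrid[0:1:grid_size * 1j, 0:1:grid_size * 1j]
    return griddata(positions, values, (xs, ys), method='cubic')

fs = 128
eeg = np.random.randn(32, fs * 2)                    # 32 electrodes, 2 s window (synthetic)
pos = np.random.rand(32, 2)                          # placeholder 2-D electrode layout
alpha_map = topographic_map(band_power(eeg, fs, (8, 13)), pos)
print(alpha_map.shape)                               # (32, 32) image for the alpha band
```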

    Machine Learning Models for High-dimensional Biomedical Data

    Recent technological advances enable the collection of complex, heterogeneous, and high-dimensional data in biomedical domains. The increasing availability of high-dimensional biomedical data creates the need for new machine learning models for effective data analysis and knowledge discovery. This dissertation introduces several unsupervised and supervised methods to help understand the data, discover patterns, and improve decision making; all the proposed methods can generalize to other industrial fields. The first topic of the dissertation focuses on data clustering. Data clustering is often the first step for analyzing a dataset without label information. Clustering high-dimensional data with mixed categorical and numeric attributes remains a challenging, yet important, task. A clustering algorithm based on tree ensembles, CRAFTER, is proposed to tackle this task in a scalable manner. The second part of the dissertation develops data representation methods for genome sequencing data, a special type of high-dimensional data in the biomedical domain. The proposed data representation method, Bag-of-Segments, summarizes the key characteristics of a genome sequence into a small number of features with good interpretability. The third part introduces an end-to-end deep neural network model, GCRNN, for time series classification with emphasis on both accuracy and interpretability. GCRNN contains a convolutional network component to extract high-level features and a recurrent network component to enhance the modeling of temporal characteristics. A feed-forward fully connected network with sparse group lasso regularization is used to generate the final classification and provide good interpretability. The last topic centers on dimensionality reduction methods for time series data. A good dimensionality reduction method is important for the storage, decision making, and pattern visualization of time series data. The CRNN autoencoder is proposed to not only achieve low reconstruction error but also generate discriminative features. A variational version of this autoencoder has great potential for applications such as anomaly detection and process control.
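    As a rough sketch in the spirit of the GCRNN summarized above, the snippet below combines a small convolutional front end, a recurrent layer, and a fully connected head with a sparse-group-lasso-style penalty. Layer sizes, group definitions, and hyperparameters are assumptions for illustration, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

class CRNNClassifier(nn.Module):
    """Toy conv + recurrent time-series classifier (sizes are illustrative)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU())
        self.gru = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, time)
        h = self.conv(x).transpose(1, 2)     # (batch, time, 16)
        _, hn = self.gru(h)                  # last hidden state: (1, batch, 32)
        return self.fc(hn.squeeze(0))

def sparse_group_lasso(weight, groups, l1=1e-4, l2=1e-4):
    """Element-wise L1 penalty plus an L2 norm per (assumed) feature group."""
    group_norms = sum(weight[:, g].norm(p=2) for g in groups)
    return l1 * weight.abs().sum() + l2 * group_norms

model = CRNNClassifier()
x = torch.randn(8, 1, 200)                   # 8 univariate series of length 200 (synthetic)
logits = model(x)
penalty = sparse_group_lasso(model.fc.weight, groups=[slice(0, 16), slice(16, 32)])
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,))) + penalty
```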

    Robust Epileptic Seizure Detection Using Long Short-Term Memory and Feature Fusion of Compressed Time–Frequency EEG Images

    Epilepsy is a prevalent neurological disorder with considerable risks, including physical impairment and irreversible brain damage from seizures. Given these challenges, the urgency of prompt and accurate seizure detection cannot be overstated. Traditionally, experts have relied on manual analysis of EEG signals for seizure detection, which is labor-intensive and prone to human error. Recognizing this limitation, the rise of deep learning methods has been heralded as a promising avenue, offering more refined diagnostic precision. However, a prevailing challenge in many models is their narrow emphasis on specific domains, which can diminish their robustness and precision in complex real-world environments. This paper presents a novel model that integrates salient features from the time–frequency domain with pivotal statistical attributes derived from EEG signals. The fusion process combines essential statistics, including the mean, median, and variance, with the rich information from compressed time–frequency (CWT) images processed using autoencoders. This multidimensional feature set provides a robust foundation for the subsequent analytic steps. A long short-term memory (LSTM) network, optimized for the well-known Bonn Epilepsy dataset, was used to enhance the capability of the proposed model. Preliminary evaluations underscore the strength of the proposed model: 100% accuracy in most of the binary classification tasks, over 95% accuracy in the three-class and four-class tasks, and over 93.5% accuracy in the five-class classification.
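    The feature fusion described above can be sketched as follows: per-segment statistics (mean, median, variance) are concatenated with an autoencoder-compressed CWT code, and the fused sequence is fed to an LSTM classifier. The code vectors below are random placeholders standing in for the autoencoder output, and all dimensions are assumed.

```python
import torch
import torch.nn as nn

def fuse_features(segment, cwt_code):
    """Concatenate per-segment statistics with a (precomputed) compressed CWT code."""
    stats = torch.stack([segment.mean(), segment.median(), segment.var()])
    return torch.cat([stats, cwt_code])

# toy fused sequence: 10 EEG segments, each mapped to 3 stats + a 32-dim CWT code
segments = [torch.randn(256) for _ in range(10)]
codes = [torch.randn(32) for _ in range(10)]          # placeholders for autoencoder outputs
fused = torch.stack([fuse_features(s, c) for s, c in zip(segments, codes)])   # (10, 35)

lstm = nn.LSTM(input_size=35, hidden_size=64, batch_first=True)
head = nn.Linear(64, 2)                               # binary seizure / non-seizure decision
_, (hn, _) = lstm(fused.unsqueeze(0))                 # add a batch dimension
logits = head(hn.squeeze(0))                          # (1, 2)
```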