162 research outputs found
Visualising Convolutional Neural Network Decisions in Automatic Sleep Scoring
Current sleep medicine relies on the supervised analysis of polysomnographic recordings, which comprise, among others, electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG) signals. Convolutional neural networks (CNNs) provide an interesting framework for automated sleep classification; however, the lack of interpretability of their results has hampered their further use in medicine. In this study, we train a CNN using continuous-wavelet-transformed EEG, EOG and EMG recordings from a publicly available dataset as input. The network achieved a 10-fold cross-validation Cohen's Kappa score of . Further, we provide insights into how this network classifies individual sleep epochs using Guided Gradient-weighted Class Activation Mapping (Guided Grad-CAM). The proposed approach produces fine-grained activation maps in the time-frequency domain for each signal, providing a useful tool for identifying relevant features in CNNs.
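The continuous wavelet transform step above can be sketched in a few lines of numpy. The abstract does not specify the mother wavelet, so the complex Morlet below (and the toy signal, scales and sampling rate) are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def morlet_cwt(signal, scales, fs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns a (len(scales), len(signal)) array of |coefficients|,
    i.e. a time-frequency scalogram of the kind fed to a CNN.
    Scales are in seconds; centre frequency is roughly w0 / (2*pi*scale).
    """
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # truncate the wavelet support so the kernel never exceeds the signal
        m = int(min(5 * s * fs, (n - 1) // 2))
        t = np.arange(-m, m + 1) / fs
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)  # energy normalisation across scales
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

# toy "EEG": a 10 Hz alpha-band sinusoid in noise, 3 s at 100 Hz
fs = 100
t = np.arange(0, 3, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
scales = np.array([0.05, 0.1, 0.2, 0.4])  # ~19, 9.5, 4.8, 2.4 Hz centre freqs
scalogram = morlet_cwt(x, scales, fs)
```

The scale whose centre frequency sits nearest 10 Hz carries the most energy, which is the kind of structure Grad-CAM can then localise in the scalogram.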
Automatic sleep staging of EEG signals: recent development, challenges, and future directions
Modern deep learning holds great potential to transform clinical studies of human sleep. Teaching a machine to carry out routine tasks would mean a tremendous reduction in workload for clinicians. Sleep staging, a fundamental step in sleep practice, is a suitable task for this and is the focus of this article. Recently, automatic sleep-staging systems have been trained to mimic manual scoring, reaching performance similar to that of human sleep experts, at least when scoring healthy subjects. Despite tremendous progress, automatic sleep scoring has not been widely adopted in clinical environments. This review aims to provide the authors' shared view of the most recent state-of-the-art developments in automatic sleep staging, the challenges that still need to be addressed, and the future directions needed for automatic sleep scoring to achieve clinical value.
Towards More Accurate Automatic Sleep Staging via Deep Transfer Learning
BACKGROUND: Despite recent significant progress in the development of automatic sleep staging methods, building a good model remains a big challenge for sleep studies with small cohorts due to data-variability and data-inefficiency issues. This work presents a deep transfer learning approach to overcome these issues and enable transferring knowledge from a large dataset to a small cohort for automatic sleep staging. METHODS: We start from a generic end-to-end deep learning framework for sequence-to-sequence sleep staging and derive two networks as the means for transfer learning. The networks are first trained in the source domain (i.e. the large database). The pretrained networks are then finetuned in the target domain (i.e. the small cohort) to complete the knowledge transfer. We employ the Montreal Archive of Sleep Studies (MASS) database, consisting of 200 subjects, as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette and Sleep Telemetry subsets of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database. The target domains are purposely chosen to cover different degrees of data mismatch with the source domain. RESULTS: Our experimental results show significant performance improvements in automatic sleep staging on the target domains achieved with the proposed deep transfer learning approach. CONCLUSIONS: These results suggest the efficacy of the proposed approach in addressing the above-mentioned data-variability and data-inefficiency issues. SIGNIFICANCE: Consequently, it would enable one to improve the quality of automatic sleep staging models when the amount of data is relatively small.
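The pretrain-then-finetune recipe described above can be illustrated at toy scale. A numpy logistic regression stands in here for the paper's sequence-to-sequence networks, and the synthetic "source" and "target" distributions are purely illustrative; only the warm-starting pattern corresponds to the method:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Binary logistic regression via gradient descent.

    Passing pretrained weights `w` warm-starts training, mimicking the
    pretrain-on-source / finetune-on-target transfer learning recipe.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)      # cross-entropy gradient step
    return w

rng = np.random.default_rng(0)
# large "source domain": two well-separated Gaussian classes
Xs = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
ys = np.repeat([0, 1], 500)
# small "target domain": same task, slightly shifted distribution
Xt = np.vstack([rng.normal(-0.8, 1, (20, 2)), rng.normal(1.2, 1, (20, 2))])
yt = np.repeat([0, 1], 20)

w_src = train_logreg(Xs, ys)                             # pretrain on source
w_ft = train_logreg(Xt, yt, w=w_src.copy(), epochs=20)   # finetune on target
```

The finetuning run needs far fewer updates than training from scratch because the source-domain weights already encode most of the decision boundary, which is exactly the data-efficiency argument the paper makes.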
STQS: Interpretable multi-modal Spatial-Temporal-seQuential model for automatic Sleep scoring
Sleep scoring is an important step in the detection of sleep disorders and is usually performed by visual analysis. Since manual sleep scoring is time-consuming, machine-learning-based approaches have been proposed. Though efficient, these algorithms are black-box in nature and difficult for clinicians to interpret. In this paper, we propose a deep learning architecture for multi-modal sleep scoring, investigate the model's decision-making process, and compare the model's reasoning with the annotation guidelines in the AASM manual. Our architecture, called STQS, uses convolutional neural networks (CNNs) to automatically extract spatio-temporal features from 3 modalities (EEG, EOG and EMG), a bidirectional long short-term memory (Bi-LSTM) to extract sequential information, and residual connections to combine spatio-temporal and sequential features. We evaluated our model on two large datasets, obtaining accuracies of 85% and 77% and macro F1 scores of 79% and 73% on SHHS and an in-house dataset, respectively. We further quantify the contribution of various architectural components and conclude that adding LSTM layers improves performance over a spatio-temporal CNN, while adding residual connections does not. Our interpretability results show that the output of the model is well aligned with AASM guidelines, and therefore the model's decisions correspond to domain knowledge. We also compare multi-modal and single-channel models and suggest that future research should focus on improving multi-modal models.
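The accuracy and macro-F1 figures above can be reproduced from a confusion matrix. A minimal numpy sketch, assuming the standard convention of macro-F1 as the unweighted mean of per-class F1 scores (the toy labels below are made up for illustration):

```python
import numpy as np

def accuracy_and_macro_f1(y_true, y_pred, n_classes):
    """Accuracy and macro-averaged F1 from integer class labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # over predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # over true counts
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / np.maximum(precision + recall, 1e-12),
                  0.0)
    return tp.sum() / cm.sum(), f1.mean()

# toy example with the 5 AASM stages (W, N1, N2, N3, REM) coded 0-4
y_true = np.array([0, 0, 1, 2, 2, 2, 3, 4, 4, 4])
y_pred = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 0])
acc, macro_f1 = accuracy_and_macro_f1(y_true, y_pred, 5)
```

Macro averaging weights every stage equally, which matters in sleep scoring because rare stages like N1 would otherwise be drowned out by the dominant N2 epochs.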
Data-efficient Deep Learning Approach for Single-Channel EEG-Based Sleep Stage Classification with Model Interpretability
Sleep, a fundamental physiological process, occupies a significant portion of our lives. Accurate classification of sleep stages is a crucial tool for evaluating sleep quality and identifying probable sleep disorders. Our work introduces a novel methodology that uses an SE-ResNet-Bi-LSTM architecture to classify sleep into five stages based on the analysis of single-channel electroencephalograms (EEGs). The proposed framework consists of two fundamental elements: a feature extractor based on SE-ResNet and a temporal context encoder built from stacked Bi-LSTM units. The effectiveness of our approach is substantiated by thorough assessments on three different datasets, namely SleepEDF-20, SleepEDF-78, and SHHS, on which the proposed methodology achieves Macro-F1 scores of 82.5, 78.9, and 81.9, respectively. We employ 1D-GradCAM visualization to elucidate the model's decision-making process in sleep stage classification; this visualization not only provides valuable insights into the model's classification rationale but also aligns its outcomes with the annotations made by sleep experts. A notable feature of our research is an efficient training approach that preserves the model's performance. The experimental evaluations provide a comprehensive comparison of the proposed model against existing approaches, highlighting its potential for practical applications.
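The 1D-GradCAM step mentioned above reduces to simple arithmetic once a conv layer's activations and their gradients are available: channel weights are the time-averaged gradients, and the map is the ReLU of the weighted channel sum. A minimal numpy sketch with made-up activations and gradients (the real ones come from backpropagation through the trained network):

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """1D Grad-CAM from one conv layer's activations and gradients.

    activations, gradients: (channels, time) arrays for a single input,
    where gradients = d(class score)/d(activations). Channel weights are
    the time-averaged gradients; the map is the ReLU of the weighted sum,
    normalised to [0, 1].
    """
    alpha = gradients.mean(axis=1)              # (channels,) channel weights
    cam = np.maximum(alpha @ activations, 0.0)  # (time,) after ReLU
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# toy example: channel 0 fires on samples 40-60 with a positive gradient,
# channel 1 is constant background with a negative gradient
A = np.zeros((2, 100))
A[0, 40:60] = 1.0
A[1] = 0.3
G = np.zeros((2, 100))
G[0] = 0.5
G[1] = -0.2
cam = grad_cam_1d(A, G)
```

The resulting map is high exactly where the positively-weighted channel is active, which is how such visualizations localise, say, a sleep spindle within a 30-second epoch.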
Detection of REM Sleep Behaviour Disorder by Automated Polysomnography Analysis
Evidence suggests Rapid-Eye-Movement (REM) Sleep Behaviour Disorder (RBD) is an early predictor of Parkinson's disease. This study proposes a fully-automated framework for RBD detection consisting of automated sleep staging followed by RBD identification. The analysis used a limited polysomnography montage from 53 participants with RBD and 53 age-matched healthy controls. Sleep stage classification was achieved using a Random Forest (RF) classifier and 156 features extracted from electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) channels. For RBD detection, an RF classifier was trained that combines established techniques for quantifying muscle atonia with additional features capturing sleep architecture and the EMG fractal exponent. Automated multi-state sleep staging achieved a Cohen's Kappa score of 0.62. RBD detection accuracy improved by 10%, to 96% (compared to individual established metrics), when using manually annotated sleep staging, and remained high (92%) when using automated sleep staging. This study outperforms established metrics and demonstrates that incorporating sleep architecture and sleep stage transitions can benefit RBD detection. It also achieved automated sleep staging with a level of accuracy comparable to manual annotation, validating a tractable, fully-automated, and sensitive pipeline for RBD identification that could be translated to wearable take-home technology.
Comment: 20 pages, 3 figures
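The staging agreement above is reported as Cohen's Kappa, i.e. observed agreement corrected for chance agreement, kappa = (po - pe) / (1 - pe). A minimal numpy sketch (the toy labels are illustrative, not the study's data):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's Kappa between two integer labelings of the same epochs."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# toy example: 4 epochs, one disagreement
k = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0], n_classes=2)
```

Kappa is preferred over raw accuracy for sleep staging because stage prevalences are highly imbalanced, so a scorer can reach high raw agreement by chance alone.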
Exploration of Sleep Events in the Latent Space of Variational Autoencoders on a Breath-by-Breath Basis
In this exploratory paper, we address a growing demand for unsupervised machine learning techniques on sleep data by applying a variational autoencoder to respiratory sleep data on a breath-by-breath basis. We transform respiratory signals into a latent representation and cluster them using KMeans clustering. We calculate the cluster preference of scored events and attempt to explain their position in the latent space. We show that a variational autoencoder can accurately reconstruct three respiratory signals from individual breaths despite sampling through a latent dimension 384 times smaller than the input data. Our results also indicate that respiratory events in particular tend to cluster together in the latent space despite the purely unsupervised learning approach. Finally, we lay the groundwork for the future work made possible by this paper.
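The clustering step above is plain KMeans over latent vectors. A minimal numpy sketch of Lloyd's algorithm with a deterministic farthest-point initialisation (the two-blob "latent space" below is a made-up stand-in for the VAE's breath embeddings):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lloyd's algorithm: cluster rows of X into k groups."""
    # farthest-point initialisation: deterministic and well spread out
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy "latent space": two well-separated blobs of breath embeddings
rng = np.random.default_rng(1)
Z = np.vstack([rng.normal(-3, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, centers = kmeans(Z, k=2)
```

Once labels are assigned, the "cluster preference" of scored events reduces to counting how often each event type lands in each cluster.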
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables