Towards a Flexible Deep Learning Method for Automatic Detection of Clinically Relevant Multi-Modal Events in the Polysomnogram
Much attention has been given to automatic sleep staging algorithms in recent
years, but the detection of discrete events in sleep studies is also crucial
for precise characterization of sleep patterns and possible diagnosis of sleep
disorders. We propose here a deep learning model for automatic detection and
annotation of arousals and leg movements. Both of these are commonly seen
during normal sleep, while an excessive amount of either is linked to disrupted
sleep patterns, excessive daytime sleepiness impacting quality of life, and
various sleep disorders. Our model was trained on 1,485 subjects and tested on
1,000 separate recordings of sleep. We tested two different experimental setups
and found optimal arousal detection was attained by including a recurrent
neural network module in our default model with a dynamic default event window
(F1 = 0.75), while optimal leg movement detection was attained using a static
event window (F1 = 0.65). Our work shows promise while leaving room for improvement. Specifically, future research will explore the proposed model as a general-purpose sleep analysis model.
Comment: Accepted for publication in the 41st International Engineering in Medicine and Biology Conference (EMBC), July 23-27, 2019.
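As a rough illustration of the detection setup described in this abstract, the sketch below outlines a 1D convolutional backbone with an optional recurrent module and, at each backbone position, classification and localization outputs over default event windows. It is not the authors' released code; the class name, layer sizes and channel counts are assumptions.

```python
# Minimal sketch of a multimodal sleep-event detector (assumed layer sizes).
import torch
import torch.nn as nn

class SleepEventDetector(nn.Module):
    def __init__(self, n_channels=5, n_classes=2, use_rnn=True):
        super().__init__()
        # Backbone: strided 1D convolutions over the raw multimodal signal.
        self.backbone = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Optional recurrent module, reported above to help arousal detection.
        self.rnn = nn.GRU(64, 64, batch_first=True, bidirectional=True) if use_rnn else None
        feat = 128 if use_rnn else 64
        # Each backbone position acts as one default event window and receives
        # class scores (+1 for background) and (center, duration) offsets.
        self.cls_head = nn.Conv1d(feat, n_classes + 1, kernel_size=1)
        self.loc_head = nn.Conv1d(feat, 2, kernel_size=1)

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.backbone(x)                    # (batch, 64, T')
        if self.rnn is not None:
            h, _ = self.rnn(h.transpose(1, 2))  # (batch, T', 128)
            h = h.transpose(1, 2)
        return self.cls_head(h), self.loc_head(h)
```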
Learning from electrophysiological time series during sleep: from manual annotation to automatic event detection
Sleep is a complex and not fully understood biological phenomenon. The traditional way to monitor sleep relies on the polysomnography exam (PSG), which records non-invasively, at the surface of the skin, the electrophysiological modifications of brain activity (electroencephalography, EEG), eye movements (electro-oculography, EOG) and muscle activity (electromyography, EMG). The recorded signals are then analyzed by a sleep expert who manually annotates the events of interest, such as sleep stages or certain micro-events. However, manual labeling is time-consuming and prone to expert subjectivity. Furthermore, the development of consumer sleep-monitoring wearables that record and automatically process electrophysiological signals, such as the Dreem headband, requires the automation of some labeling tasks. Machine learning (ML) has received much attention as a way to teach a computer to perform decision tasks automatically from a set of learning examples, and the rise of deep learning (DL) algorithms in several fields has opened new perspectives for sleep science.
On the other hand, it also raises new concerns: the scarcity of labeled data may hinder training, and the variability of the data may hurt performance. Indeed, sleep data are scarce because of the labeling burden and also exhibit intra- and inter-subject variability (due to sleep disorders, aging...). This thesis investigated and proposed ML algorithms to automate the detection of sleep-related events from raw PSG time series. Through the prism of DL, it addressed two main tasks: sleep stage classification and micro-event detection. Particular attention was paid (a) to the quantity of labeled data required to train such algorithms and (b) to the generalization performance of these algorithms on new (variable) data. Specific strategies, based on transfer learning, were designed to cope with the issues raised by the scarcity of labeled data and the variability of the data.
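The transfer-learning strategies are only described at a high level here. As one plausible instance, the sketch below fine-tunes the classification head of a network pretrained on a large source PSG corpus, using a small labeled target set. The model attributes (`feature_extractor`, `classifier`), the data loader and all hyperparameters are hypothetical placeholders.

```python
# Minimal transfer-learning sketch: freeze a pretrained feature extractor and
# fine-tune only a fresh classification head on a small labeled target dataset.
import torch
import torch.nn as nn

def finetune_on_target(model: nn.Module, target_loader, n_classes=5, epochs=10):
    # Freeze the pretrained feature extractor (trained on the source dataset).
    for p in model.feature_extractor.parameters():
        p.requires_grad = False
    # Replace the classifier with a fresh head for the target labels.
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in target_loader:            # 30 s windows and their sleep stages
            logits = model.classifier(model.feature_extractor(x))
            loss = loss_fn(logits, y)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```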
Domain adaptation with optimal transport improves EEG sleep stage classifiers
Low sample size and the absence of labels on certain data limit the performance of predictive algorithms. To overcome this problem, it is sometimes possible to learn a model on a large labeled auxiliary dataset. Yet, this assumes that the two datasets exhibit similar statistical properties, which is rarely the case in practice: there is a discrepancy between the large dataset, called the source, and the dataset of interest, called the target. Improving the prediction performance on the target domain by reducing the distribution discrepancy between the source and the target domains is known as Domain Adaptation (DA). Presently, Optimal Transport DA (OTDA) methods yield state-of-the-art performance on several DA problems. In this paper, we consider the problem of sleep stage classification and use OTDA to improve the performance of a convolutional neural network. We use features learnt from the electroencephalogram (EEG) and the electrooculogram (EOG) signals. Our results demonstrate that the method significantly improves the network predictions on the target data.
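A minimal sketch of the OTDA step, assuming feature matrices already extracted by the convolutional network: labeled source features are mapped onto the target distribution with entropic-regularized optimal transport (POT library), and a classifier is trained on the transported samples. The regularization value and the choice of logistic regression are illustrative, not the paper's exact pipeline.

```python
# Optimal-transport domain adaptation on learned EEG/EOG features (sketch).
import numpy as np
import ot                                    # POT: Python Optimal Transport
from sklearn.linear_model import LogisticRegression

def otda_sleep_stages(Xs, ys, Xt):
    """Xs, ys: labeled source features/stages; Xt: unlabeled target features."""
    # Entropic-regularized OT mapping of source samples onto the target domain.
    transport = ot.da.SinkhornTransport(reg_e=1.0)
    transport.fit(Xs=Xs, ys=ys, Xt=Xt)
    Xs_adapted = transport.transform(Xs=Xs)
    # Train on the transported source samples, then predict on the target.
    clf = LogisticRegression(max_iter=1000).fit(Xs_adapted, ys)
    return clf.predict(Xt)
```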
A deep learning architecture to detect events in EEG signals during sleep
Electroencephalography (EEG) during sleep is used by clinicians to evaluate various neurological disorders. In sleep medicine, it is relevant to detect macro-events (≥ 10 s) such as sleep stages, and micro-events (≤ 2 s) such as spindles and K-complexes. Annotating such events requires a trained sleep expert, a time-consuming and tedious process with large inter-scorer variability. Automatic algorithms have been developed to detect various types of events, but these are event-specific. We propose a deep learning method that jointly predicts locations, durations and types of events in EEG time series. It relies on a convolutional neural network that builds a feature representation from raw EEG signals. Numerical experiments demonstrate the efficiency of this new approach on various event detection tasks compared to current state-of-the-art, event-specific algorithms.
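To make the "jointly predicts locations, durations and types" idea concrete, the sketch below shows one plausible way to decode such per-position predictions into discrete event annotations: threshold the class scores, then apply a simple 1D non-maximum suppression on overlapping candidates. The threshold and overlap criterion are assumptions, not the paper's settings.

```python
# Decode per-position detection outputs into (start, duration, class) events.
import numpy as np

def decode_events(scores, centers, durations, threshold=0.5, iou_max=0.3):
    """scores: (T, n_classes) probabilities; centers, durations: (T,) in seconds."""
    starts, ends = centers - durations / 2, centers + durations / 2
    events = []
    for c in range(scores.shape[1]):
        keep = np.where(scores[:, c] >= threshold)[0]
        keep = keep[np.argsort(scores[keep, c])[::-1]]        # best candidates first
        selected = []
        for i in keep:
            overlaps = False
            for j in selected:
                inter = max(0.0, min(ends[i], ends[j]) - max(starts[i], starts[j]))
                union = (ends[i] - starts[i]) + (ends[j] - starts[j]) - inter
                if union > 0 and inter / union > iou_max:
                    overlaps = True                           # suppressed by a better candidate
                    break
            if not overlaps:
                selected.append(i)
        events += [(float(starts[i]), float(durations[i]), c) for i in selected]
    return events
```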
A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series
Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of signal a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEG), electrooculograms (EOG), electrocardiograms (ECG) and electromyograms (EMG). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting hand-crafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG and EOG), and that can exploit the temporal context of each 30 s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatio-temporal distribution of the signal of interest: a good trade-off for optimal classification performance, measured with balanced accuracy, is to use 6 EEG channels together with 2 EOG (left and right) and 3 chin EMG channels. Exploiting one minute of data before and after each data segment also offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
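A minimal sketch of the architecture idea described above, for a single 30 s window of one modality: a first layer that learns linear spatial filters across the sensor array, temporal convolutions, and a linear layer feeding a softmax over the five sleep stages. Filter counts and kernel sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a spatial-filter + temporal-convolution sleep stager (assumed sizes).
import torch
import torch.nn as nn

class SpatioTemporalStager(nn.Module):
    def __init__(self, n_sensors=8, n_stages=5):
        super().__init__()
        # Linear spatial filtering: each output channel is a learned mix of sensors.
        self.spatial = nn.Conv1d(n_sensors, n_sensors, kernel_size=1, bias=False)
        # Temporal feature extraction on the spatially filtered signals.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=64, stride=8, padding=32),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=16, stride=4, padding=8),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_stages)     # softmax applied in the loss

    def forward(self, x):                # x: (batch, sensors, time) for one 30 s window
        h = self.temporal(self.spatial(x)).squeeze(-1)
        return self.classifier(h)
```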
Performance of an Ambulatory Dry-EEG Device for Auditory Closed-Loop Stimulation of Sleep Slow Oscillations in the Home Environment
Recent research has shown that auditory closed-loop stimulation can enhance sleep slow oscillations (SO) to improve N3 sleep quality and cognition. Previous studies have been conducted in lab environments. The present study aimed to validate and assess the performance of a novel ambulatory wireless dry-EEG device (WDD) for auditory closed-loop stimulation of SO during N3 sleep at home. The ability of the WDD to detect N3 sleep automatically and to deliver auditory closed-loop stimulation on SO was tested on 20 young healthy subjects who slept with both the WDD and a miniaturized polysomnograph (part 1), in both stimulated and sham nights, within a double-blind, randomized and crossover design. The effects of auditory closed-loop stimulation on the increase in delta power were assessed after one and 10 nights of stimulation in an observational pilot study in the home environment including 90 middle-aged subjects (part 2). The first part, which assessed the quality of the WDD against the polysomnograph, showed that the sensitivity and specificity to automatically detect N3 sleep in real time were 0.70 and 0.90, respectively. The stimulation accuracy of the SO ascending-phase targeting was 45 ± 52°. The second part of the study, conducted in the home environment, showed that the stimulation protocol induced an increase of 43.9% in delta power in the 4 s window following the first stimulation (including evoked potentials and the SO entrainment effect). The increase in SO response to auditory stimulation remained at the same level after 10 consecutive nights. The WDD thus shows good performance in automatically detecting N3 sleep in real time and in accurately delivering auditory closed-loop stimulation on SO. These stimulations increased the SO amplitude during N3 sleep without any adaptation effect over 10 consecutive nights. This tool provides new perspectives for identifying novel sleep EEG biomarkers in longitudinal studies and could be of interest for conducting broad studies on the effects of auditory stimulation during sleep.
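As an offline illustration of phase-targeted stimulation (not the device firmware), the sketch below band-passes the EEG in an assumed slow-oscillation band, estimates the instantaneous phase with a Hilbert transform, and flags samples whose phase lies near the ascending-phase target. Band limits, tolerance and phase convention are assumptions; a real-time system would need a causal phase estimator.

```python
# Offline sketch of slow-oscillation ascending-phase targeting.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ascending_phase_triggers(eeg, fs, target_deg=45.0, tol_deg=15.0):
    """Return sample indices where the SO phase is within tol_deg of the target."""
    b, a = butter(2, [0.5, 1.5], btype="bandpass", fs=fs)   # assumed SO band (Hz)
    so = filtfilt(b, a, eeg)                                # slow-oscillation component
    phase = np.rad2deg(np.angle(hilbert(so)))               # instantaneous phase (deg)
    return np.where(np.abs(phase - target_deg) < tol_deg)[0]
```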
DOSED: A deep learning approach to detect multiple sleep micro-events in EEG signal
Background: Electroencephalography (EEG) monitors brain activity during sleep and is used to identify sleep disorders. In sleep medicine, clinicians interpret raw EEG signals in terms of so-called sleep stages, which are assigned by experts to every 30 s window of signal. For diagnosis, they also rely on shorter prototypical micro-architecture events which exhibit variable durations and shapes, such as spindles, K-complexes or arousals. Annotating such events is traditionally performed by a trained sleep expert, making the process time-consuming, tedious and subject to inter-scorer variability. To automate this procedure, various methods have been developed, yet these are event-specific and rely on the extraction of hand-crafted features. New method: We propose a novel deep learning architecture called the Dreem One Shot Event Detector (DOSED). DOSED jointly predicts locations, durations and types of events in EEG time series. The proposed approach, applied here to sleep-related micro-architecture events, is inspired by object detectors developed for computer vision, such as YOLO and SSD. It relies on a convolutional neural network that builds a feature representation from raw EEG signals, as well as two modules performing localization and classification respectively. Results and comparison with other methods: The proposed approach is tested on 4 datasets and 3 types of events (spindles, K-complexes, arousals) and compared to the current state-of-the-art detection algorithms. Conclusions: Results demonstrate the versatility of this new approach and its improved performance compared to the current state-of-the-art detection methods.
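A minimal sketch of an SSD-style training objective consistent with the description above: default events that sufficiently overlap a true event are matched to it and contribute a localization term, while every default contributes a classification term (with a background class). The interval-IoU matching threshold and loss weighting are assumptions, not DOSED's published settings.

```python
# Sketch of a joint classification + localization loss over default events.
import torch
import torch.nn.functional as F

def interval_iou(a, b):
    """IoU between 1D intervals given as (center, duration); a: (D, 2), b: (E, 2)."""
    a_start, a_end = a[:, 0] - a[:, 1] / 2, a[:, 0] + a[:, 1] / 2
    b_start, b_end = b[:, 0] - b[:, 1] / 2, b[:, 0] + b[:, 1] / 2
    inter = (torch.min(a_end[:, None], b_end[None, :])
             - torch.max(a_start[:, None], b_start[None, :])).clamp(min=0)
    union = a[:, 1][:, None] + b[:, 1][None, :] - inter
    return inter / union.clamp(min=1e-8)

def detection_loss(cls_logits, loc_preds, defaults, events, labels, iou_thr=0.5):
    """cls_logits: (D, n_classes + 1); loc_preds, defaults: (D, 2); events: (E, 2); labels: (E,)."""
    iou = interval_iou(defaults, events)                     # (D, E)
    best_iou, best_event = iou.max(dim=1)
    matched = best_iou >= iou_thr
    # Class 0 is background; matched defaults take the label of their best event.
    targets = defaults.new_zeros(defaults.shape[0], dtype=torch.long)
    targets[matched] = labels[best_event[matched]] + 1
    cls_loss = F.cross_entropy(cls_logits, targets)
    loc_loss = (F.smooth_l1_loss(loc_preds[matched], events[best_event[matched]])
                if matched.any() else cls_logits.new_zeros(()))
    return cls_loss + loc_loss
```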
Towards a Flexible Deep Learning Method for Automatic Detection of Clinically Relevant Multi-Modal Events in the Polysomnogram
To study coordination in complex social systems such as financial markets, the authors introduce a new prediction market set-up that accounts for fundamental uncertainty. Nonetheless, the market is designed so that its total value is known, and thus its rationality can be evaluated. In two experiments, the authors observe that a quick consensus emerges early, yielding pronounced mispricing, which however does not follow the standard "bubble-and-crash" pattern. The set-up is implemented within the xYotta collaborative platform (https://xyotta.com). xYotta's functionality offers a large number of extensions of varying complexity, such as running several parallel markets with the same or different users, as well as collaborative project development in which projects undergo the equivalent of an IPO (initial public offering) and whose subsequent trading mirrors the role of financial markets in determining value. xYotta is thus offered to researchers as open source software for the broad investigation of complex systems with human participants.