
    Deep learning for automated sleep monitoring

    Wearable electroencephalography (EEG) is a technology that is revolutionising the longitudinal monitoring of neurological and mental disorders, improving the quality of life of patients and accelerating the relevant research. As sleep disorders and other conditions related to sleep quality affect a large part of the population, monitoring sleep at home over extended periods of time could have a significant impact on the quality of life of people who suffer from these conditions. Annotating the sleep architecture of patients, known as sleep stage scoring, is an expensive and time-consuming process that cannot scale to a large number of people. Combining wearable EEG with automated sleep stage scoring is a potential solution to this problem. In this thesis, we propose and evaluate two deep learning algorithms for automated sleep stage scoring using a single channel of EEG. In our first method, we use time-frequency analysis to extract features that closely follow the guidelines used by human experts, combined with an ensemble of stacked sparse autoencoders as our classification algorithm. In our second method, we propose a convolutional neural network (CNN) architecture for automatically learning filters that are specific to the problem of sleep stage scoring. We achieved state-of-the-art results (mean F1-score 84%; range 82-86%) with our first method and comparably good results with the second (mean F1-score 81%; range 79-83%). Both our methods effectively account for the skewed performance that is usually found in the literature due to sleep stage duration imbalance. We also propose a filter analysis and visualisation methodology for understanding the filters that CNNs learn. Our results indicate that our CNN robustly learned filters that closely follow the sleep scoring guidelines.
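    The CNN approach summarised above can be illustrated with a minimal sketch. The code below is not the thesis implementation: the sampling rate, the five-class staging scheme, the layer sizes and the class weights are assumptions chosen only to make the example concrete and runnable. It shows a 1-D convolutional network that maps a single-channel 30-second EEG epoch to a sleep stage, with per-class loss weighting as one common way to counteract stage-duration imbalance.

# Minimal sketch, not the thesis code: a 1-D CNN scoring a single-channel,
# 30-second EEG epoch into one of five sleep stages. Sampling rate, layer
# sizes and class weights are illustrative assumptions.
import torch
import torch.nn as nn

N_STAGES = 5            # assumed five-class staging scheme (W, N1, N2, N3, REM)
EPOCH_SAMPLES = 3000    # 30 s at an assumed 100 Hz sampling rate

class SleepStageCNN(nn.Module):
    def __init__(self, n_stages: int = N_STAGES):
        super().__init__()
        # Convolutional filters are learned directly from the raw epoch,
        # in the spirit of the filter-learning idea described above.
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_stages),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, EPOCH_SAMPLES) -> (batch, n_stages) logits
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SleepStageCNN()
    # Weighting the loss per class is one common way to counteract
    # sleep-stage duration imbalance (e.g. N2 dominating the night).
    class_weights = torch.tensor([1.0, 2.5, 0.6, 1.2, 1.0])
    criterion = nn.CrossEntropyLoss(weight=class_weights)

    dummy_epochs = torch.randn(8, 1, EPOCH_SAMPLES)   # a batch of EEG epochs
    dummy_labels = torch.randint(0, N_STAGES, (8,))   # expert-scored stages
    loss = criterion(model(dummy_epochs), dummy_labels)
    loss.backward()
    print(f"training loss on dummy batch: {loss.item():.3f}")

    Evaluating such a model with a per-class F1-score (e.g. scikit-learn's f1_score with average='macro') rather than plain accuracy is one way to make the effect of stage imbalance visible, in line with the F1-based reporting in the abstract.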

    Automated Sleep Scoring, Deep Learning and Physician Supervision

    Sleep plays a crucial role in human well-being. Polysomnography is used in sleep medicine as a diagnostic tool to objectively analyze the quality of sleep. Sleep scoring is the procedure of extracting sleep cycle information from the whole-night electrophysiological signals. Scoring is performed worldwide by sleep physicians according to the official American Academy of Sleep Medicine (AASM) scoring manual. In recent decades, a wide variety of deep learning based algorithms have been proposed to automatise the sleep scoring task. In this thesis we study the reasons why these algorithms have failed to enter the daily clinical routine, with the aim of bridging the gap between automatic sleep scoring models and sleep physicians. In this light, the primary step is the design of a simplified sleep scoring architecture that also provides an estimate of the model's uncertainty. Besides achieving results on par with the most up-to-date scoring systems, we demonstrate the efficiency of ensemble learning based algorithms, together with label smoothing techniques, in both enhancing the performance and calibrating the simplified scoring model. We introduce an uncertainty estimation procedure to identify the most challenging sleep stage predictions and to quantify the disagreement between the predictions given by the model and the annotations given by the physicians. In this thesis we also propose a novel method to integrate the inter-scorer variability into the training procedure of a sleep scoring model. We show that a deep learning model is able to encode this variability and thereby better adapt to the consensus of a group of scoring physicians. We finally address the generalization ability of a deep learning based sleep scoring system, studying its resilience to sleep complexity and to the AASM scoring rules. We find that there is no need to train the algorithm strictly following the AASM guidelines. Most importantly, using data from multiple data centers results in a better performing model than training on a single data cohort. The variability among different scorers and data centers needs to be taken into account, more than the variability among sleep disorders.
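    A rough sketch of the ensemble, label-smoothing and uncertainty ideas mentioned above is given below. It is not the thesis implementation: the stand-in models, the smoothing factor and the entropy threshold are assumptions. The sketch averages the softmax outputs of a small ensemble, trains with a label-smoothed cross-entropy loss, and flags the most uncertain epochs via the predictive entropy of the averaged distribution.

# Illustrative sketch, not the thesis code: ensemble averaging with label
# smoothing and an entropy-based uncertainty flag for sleep-stage predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STAGES = 5
EPOCH_SAMPLES = 3000      # assumed 30 s epoch at 100 Hz

# Label smoothing is available directly in PyTorch's cross-entropy loss;
# the 0.1 factor is an assumption.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

def ensemble_predict(models, eeg_epochs: torch.Tensor) -> torch.Tensor:
    """Average the softmax probabilities of all ensemble members."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(eeg_epochs), dim=-1) for m in models])
    return probs.mean(dim=0)                      # (batch, N_STAGES)

def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Entropy of the averaged distribution: higher means a less certain epoch."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

if __name__ == "__main__":
    # Stand-in scorers (tiny linear models) in place of full scoring networks.
    models = [nn.Sequential(nn.Flatten(), nn.Linear(EPOCH_SAMPLES, N_STAGES))
              for _ in range(3)]
    epochs = torch.randn(4, 1, EPOCH_SAMPLES)     # a batch of EEG epochs
    labels = torch.randint(0, N_STAGES, (4,))     # physician-scored stages

    # A single label-smoothed training step for one ensemble member.
    loss = criterion(models[0](epochs), labels)
    loss.backward()

    # Ensemble inference: averaged probabilities and per-epoch uncertainty.
    probs = ensemble_predict(models, epochs)
    uncertainty = predictive_entropy(probs)
    needs_review = uncertainty > 1.0              # assumed threshold for review
    print(probs.argmax(dim=-1), uncertainty, needs_review)

    Flagging high-entropy epochs in this way is one simple mechanism for handing the hardest decisions back to the physician, in line with the supervision perspective of the thesis.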

    Extended Abstracts

    Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria.
    Mark Ballora. “Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles”
    Stephen Barrass. “Diagnostic Singing Bowls”
    Natasha Barrett, Kristian Nymoen. “Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification”
    Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou. “Can auditory display help us categorize seismic signals?”
    Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino. “Tracking moving sounds: Perception of spatial figures”
    Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz. “Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique”
    S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon. “Regulating Drivers’ Aggressiveness by Sonifying Emotional Data”
    Wolfgang Hauer, Katharina Vogt. “Sonification of a streaming-server logfile”
    Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma. “Optimizing aesthetics and precision in sonification for peripheral process-monitoring”
    Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio. “Sleep Enhancement by Sound Stimulation”
    Steven Landry, Jayde Croschere, Myounghoon Jeon. “Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings”
    Rick McIlraith, Paul Walton, Jude Brereton. “The Spatialised Sonification of Drug-Enzyme Interactions”
    George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji. “Adding Sound to Medical Data Representation”
    Rainer Mittmannsgruber, Katharina Vogt. “Auditory assistance for timing presentations”
    Joseph W. Newbold, Andy Hunt, Jude Brereton. “Chemical Spectral Analysis through Sonification”
    S. Camille Peres, Daniel Verona, Paul Ritchey. “The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study”
    Eva Sjuve. “Metopia: Experiencing Complex Environmental Data Through Sound”
    Benjamin Stahl, Katharina Vogt. “The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test”
    David Worrall. “Realtime sonification and visualisation of network metadata (The NetSon Project)”
    Bernhard Zeller, Katharina Vogt. “Auditory graph evolution by the example of spurious correlations”
    The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings.