
    Deep learning features for robust detection of acoustic events in sleep-disordered breathing

    Sleep-disordered breathing (SDB) is a serious and prevalent condition, and acoustic analysis via consumer devices (e.g. smartphones) offers a low-cost solution for screening. We present a novel approach to the acoustic identification of SDB sounds, such as snoring, using bottleneck features learned from a corpus of whole-night sound recordings. Two types of bottleneck features are described, obtained by applying a deep autoencoder to the output of either an auditory model or a short-term autocorrelation analysis. We investigate two architectures for snore sound detection: a tandem system and a hybrid system. In both cases, a 'language model' (LM) was incorporated to exploit information about the sequence of different SDB events. Our results show that the proposed bottleneck features give better performance than conventional mel-frequency cepstral coefficients, and that the tandem system outperforms the hybrid system given the limited amount of labelled training data available. The LM made a small improvement to the performance of both classifiers.
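    The 'language model' over SDB event sequences can be pictured as a bigram transition model used to smooth frame-wise classifier posteriors. A minimal sketch, assuming illustrative labels and transition probabilities (not the authors' actual model, labels, or values):

    ```python
    import math

    # Hypothetical event labels and bigram transition probabilities; the
    # favoured self-transitions discourage unrealistically short events.
    LABELS = ["snore", "breath", "silence"]
    TRANS = {
        "snore":   {"snore": 0.8,  "breath": 0.15, "silence": 0.05},
        "breath":  {"snore": 0.15, "breath": 0.8,  "silence": 0.05},
        "silence": {"snore": 0.1,  "breath": 0.1,  "silence": 0.8},
    }

    def viterbi(posteriors):
        """Most likely label sequence given per-frame classifier posteriors.

        posteriors: list of dicts mapping label -> P(label | frame).
        """
        best = {l: math.log(posteriors[0][l]) for l in LABELS}
        back = []
        for frame in posteriors[1:]:
            new_best, ptr = {}, {}
            for l in LABELS:
                # Best previous state for arriving in state l.
                prev, score = max(
                    ((p, best[p] + math.log(TRANS[p][l])) for p in LABELS),
                    key=lambda x: x[1],
                )
                new_best[l] = score + math.log(frame[l])
                ptr[l] = prev
            best, back = new_best, back + [ptr]
        # Trace the best path backwards through the stored pointers.
        last = max(best, key=best.get)
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))
    ```

    Because self-transitions dominate, an isolated frame whose posterior noisily favours another label is absorbed into the surrounding event, which is the sequence information the LM is meant to exploit.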

    Deep sleep: deep learning methods for the acoustic analysis of sleep-disordered breathing

    Sleep-disordered breathing (SDB) is a serious and prevalent condition that results from the collapse of the upper airway during sleep, which leads to oxygen desaturations, unphysiological variations in intrathoracic pressure, and sleep fragmentation. Its most common form is obstructive sleep apnoea (OSA). It has a substantial impact on quality of life and is associated with cardiovascular morbidity. Polysomnography, the gold standard for diagnosing SDB, is obtrusive, time-consuming and expensive. Alternative diagnostic approaches have been proposed to overcome its limitations. In particular, acoustic analysis of sleep breathing sounds offers an unobtrusive and inexpensive means of screening for SDB, since its symptoms have unique acoustic characteristics. These include snoring, loud gasps, chokes, and absence of breathing. This thesis investigates deep learning methods, which have revolutionised speech and audio technology, to robustly screen for SDB in typical sleep conditions using acoustics. To begin with, the desirable characteristics of an acoustic corpus of SDB and the acoustic definition of snoring are considered, in order to create corpora for this study. Then three approaches are developed to tackle increasingly complex scenarios. Firstly, with the aim of leveraging a large amount of unlabelled SDB data, unsupervised learning is applied to learn novel feature representations with deep neural networks for the classification of SDB events such as snoring. The incorporation of contextual information to assist the classifier in producing realistic event durations is investigated. Secondly, the temporal pattern of sleep breathing sounds is exploited using convolutional neural networks to screen participants sleeping by themselves for OSA. The integration of acoustic features with physiological data for screening is examined. Thirdly, for the purpose of achieving robustness to bed partner breathing sounds, recurrent neural networks are used to screen a subject and their bed partner for SDB in the same session. Experiments conducted on the constructed corpora show that the developed systems accurately classify SDB events, screen for OSA with high sensitivity and specificity, and screen a subject and their bed partner for SDB with encouraging performance. In conclusion, this thesis makes promising progress in improving access to SDB diagnosis through low-cost and non-invasive methods.

    Sleep Breath

    Purpose: Diagnosis of obstructive sleep apnea by the gold standard of polysomnography (PSG), or by home sleep testing (HST), requires numerous physical connections to the patient, which may restrict use of these tools for early screening. We hypothesized that normal and disturbed breathing may be detected by a consumer smartphone without physical connections to the patient, using novel algorithms to analyze ambient sound. Methods: We studied 91 patients undergoing clinically indicated PSG. Phase I: in a derivation cohort (n = 32), we placed an unmodified Samsung Galaxy S5 without external microphone near the bed to record ambient sounds. We analyzed 12,352 discrete breath/non-breath sounds (386/patient), from which we developed algorithms to remove noise and detect breaths as envelopes of spectral peaks. Phase II: in a distinct validation cohort (n = 59), we tested the ability of the acoustic algorithms to detect an AHI ≥ 15 on PSG. Results: Smartphone-recorded sound analyses detected the presence, absence, and types of breath sounds. Phase I: in the derivation cohort, spectral analysis identified breaths and apneas with a c-statistic of 0.91, and loud obstruction sounds with a c-statistic of 0.95, on receiver operating characteristic analyses relative to adjudicated events. Phase II: in the validation cohort, automated acoustic analysis provided a c-statistic of 0.87 compared to whole-night PSG. Conclusions: Ambient sounds recorded from a smartphone during sleep can identify apnea and abnormal breathing verified on PSG. Future studies should determine if this approach may facilitate early screening of SDB to identify at-risk patients for definitive diagnosis and therapy. Clinical trial: NCT03288376; clinicaltrials.org
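    A much-simplified illustration of breath/pause detection from ambient audio: threshold a short-time energy envelope, then flag long runs of silence as apnea candidates. This is a sketch, not the study's algorithm (which detects breaths as envelopes of spectral peaks); the frame length, threshold, and minimum pause duration below are assumptions.

    ```python
    import math

    def energy_envelope(samples, frame_len):
        """Root-mean-square energy per non-overlapping frame."""
        return [
            math.sqrt(sum(x * x for x in samples[i:i + frame_len]) / frame_len)
            for i in range(0, len(samples) - frame_len + 1, frame_len)
        ]

    def segment_breaths(envelope, threshold):
        """Label each frame 'breath' (above threshold) or 'silence'."""
        return ["breath" if e >= threshold else "silence" for e in envelope]

    def apnea_candidates(labels, min_silence_frames):
        """Runs of silence long enough to be flagged as possible apneas."""
        runs, start = [], None
        for i, lab in enumerate(labels + ["breath"]):  # sentinel closes a final run
            if lab == "silence" and start is None:
                start = i
            elif lab != "silence" and start is not None:
                if i - start >= min_silence_frames:
                    runs.append((start, i))
                start = None
        return runs
    ```

    On real recordings the fixed threshold would be replaced by the noise-removal and spectral-peak logic the paper describes, but the segmentation-then-duration-rule structure is the same.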

    Entropy analysis of acoustic signals recorded with a smartphone for detecting apneas and hypopneas: A comparison with a commercial system for home sleep apnea diagnosis

    Obstructive sleep apnea (OSA) is a prevalent disease, but most patients remain undiagnosed and untreated. Here we propose analyzing smartphone audio signals for screening OSA patients at home. Our objectives were to: (1) develop an algorithm for detecting silence events and classifying them into apneas or hypopneas; (2) evaluate the performance of this system; and (3) compare the information provided with a type 3 portable sleep monitor, based mainly on nasal airflow. Overnight signals were acquired simultaneously by both systems in 13 subjects (3 healthy subjects and 10 OSA patients). The sample entropy of the audio signals was used to identify apnea/hypopnea events. The apnea-hypopnea indices predicted by the two systems presented a very high degree of concordance, and the smartphone correctly detected and stratified all the OSA patients. An event-by-event comparison demonstrated good agreement between silence events and apnea/hypopnea events in the reference system (sensitivity = 76%, positive predictive value = 82%). Most apneas were detected (89%), but fewer hypopneas (61%). We observed that many hypopneas were accompanied by snoring, so there was no sound reduction. The apnea/hypopnea classification accuracy was 70%, but most discrepancies resulted from the inability of the nasal cannula of the reference device to record oral breathing. We provided a spectral characterization of oral and nasal breathing to correct this effect, and the classification accuracy increased to 82%. This novel knowledge from acoustic signals may be of great interest for clinical practice in developing new non-invasive techniques for screening and monitoring OSA patients at home.
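    Sample entropy, the irregularity measure the study applies to the audio signals, can be computed in a few lines. A minimal pure-Python sketch; m = 2 and an absolute tolerance r = 0.2 are common defaults, not necessarily the settings used in the paper:

    ```python
    import math

    def sample_entropy(x, m=2, r=0.2):
        """SampEn(m, r) of a 1-D sequence; r is an absolute tolerance here."""
        n = len(x)

        def count_matches(length):
            # Pairs of templates of the given length within tolerance r
            # (Chebyshev distance), excluding self-matches.
            templates = [x[i:i + length] for i in range(n - length + 1)]
            count = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                        count += 1
            return count

        b = count_matches(m)       # matches of length m
        a = count_matches(m + 1)   # matches of length m + 1
        if a == 0 or b == 0:
            return float("inf")    # undefined for too-short/too-irregular input
        return -math.log(a / b)
    ```

    Regular signals (e.g. steady breathing sounds) score low, while irregular ones score high, which is why drops and disruptions in the audio envelope around apneas/hypopneas show up in this measure.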

    Towards using Cough for Respiratory Disease Diagnosis by leveraging Artificial Intelligence: A Survey

    Cough acoustics contain a multitude of vital information about pathomorphological alterations in the respiratory system. Reliable and accurate detection of cough events, by investigating the underlying latent cough features, and disease diagnosis can play an indispensable role in revitalizing healthcare practice. The recent application of Artificial Intelligence (AI) and advances in ubiquitous computing for respiratory disease prediction have created an auspicious trend and a myriad of future possibilities in the medical domain. In particular, there is an expeditiously emerging trend of Machine Learning (ML)- and Deep Learning (DL)-based diagnostic algorithms exploiting cough signatures. The enormous body of literature on cough-based AI algorithms demonstrates that these models can play a significant role in detecting the onset of a specific respiratory disease. However, it is pertinent to collect the information from all relevant studies in an exhaustive manner so that medical experts and AI scientists can analyze the decisive role of AI/ML. This survey offers a comprehensive overview of cough data-driven ML/DL detection and preliminary diagnosis frameworks, along with a detailed list of significant features. We investigate the mechanisms that cause cough and the latent cough features of the respiratory modalities. We also analyze customized cough monitoring applications and their AI-powered recognition algorithms. Challenges and prospective future research directions to develop practical, robust, and ubiquitous solutions are also discussed in detail. Comment: 30 pages, 12 figures, 9 tables

    Audio signal analysis in combination with noncontact bio-motion data to successfully monitor snoring

    This paper proposes a novel algorithm for automatic detection of snoring in sleep by combining non-contact bio-motion data with audio data. The audio data is captured using low-end Android smartphones in a non-clinical environment, to mimic a possible user-friendly commercial product for sleep audio monitoring. However, snore detection becomes a more challenging problem, as the recorded signal has lower quality than those recorded in a clinical environment. To achieve accurate snore/non-snore classification, we first compare a range of commonly used features extracted from the audio signal to find the best subject-independent features. Thereafter, bio-motion data is used to further improve classification accuracy by identifying episodes containing high amounts of body movement. High body movement indicates that the subject is turning, coughing, or leaving the bed; snoring does not occur during these instances. The proposed algorithm is evaluated using data recorded over 25 sessions from 7 healthy subjects suspected to be regular snorers. Our experimental results showed that the best subject-independent features for snore/non-snore classification are the energy of the 3150-3650 Hz frequency band, the zero-crossing rate, and the first predictor coefficient of linear predictive coding. The proposed features yielded an average classification accuracy of 84.35%. The introduction of bio-motion data significantly improved the results by an average of 5.87% (p < 0.01). This work is the first study to successfully use bio-motion data to improve the accuracy of snore/non-snore classification.
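    The three subject-independent features the paper identifies (3150-3650 Hz band energy, zero-crossing rate, and the first linear-prediction coefficient) can each be computed directly from an audio frame. A hedged sketch using a naive DFT for clarity rather than an FFT; the frame size and sample rate in the usage are illustrative, not the paper's settings:

    ```python
    import math

    def zero_crossing_rate(frame):
        """Fraction of adjacent sample pairs whose signs differ."""
        crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
        return crossings / (len(frame) - 1)

    def band_energy(frame, fs, f_lo=3150.0, f_hi=3650.0):
        """Energy of the DFT bins falling inside [f_lo, f_hi] Hz."""
        n = len(frame)
        energy = 0.0
        for k in range(n // 2 + 1):
            if f_lo <= k * fs / n <= f_hi:
                re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(frame))
                im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(frame))
                energy += re * re + im * im
        return energy

    def first_lpc_coefficient(frame):
        """First coefficient of an order-1 linear predictor: R(1) / R(0)."""
        r0 = sum(x * x for x in frame)
        r1 = sum(a * b for a, b in zip(frame, frame[1:]))
        return r1 / r0 if r0 else 0.0
    ```

    Snore frames concentrate energy in the selected band and vary more smoothly than broadband noise, which is what these three numbers summarise per frame before classification.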

    Detection of sleep disordered breathing severity using acoustic biomarker and machine learning techniques

    Purpose: Breathing sounds during sleep are altered and characterized by various acoustic specificities in patients with sleep-disordered breathing (SDB). This study aimed to identify acoustic biomarkers indicative of the severity of SDB by analyzing breathing sounds collected from a large number of subjects during entire overnight sleep. Methods: The participants were patients who presented at a sleep center with snoring or cessation of breathing during sleep. They underwent full-night polysomnography (PSG), during which the breathing sounds were recorded using a microphone. Audio features were then extracted, and a group of features differing significantly between SDB severity groups was selected as a potential acoustic biomarker. To assess the validity of the acoustic biomarker, classification tasks were performed using several machine learning techniques. Based on the apnea–hypopnea index of the subjects, four-group classification and binary classification were performed. Results: Using tenfold cross-validation, we achieved an accuracy of 88.3% in the four-group classification and an accuracy of 92.5% in the binary classification. Experimental evaluation demonstrated that models trained on the proposed acoustic biomarkers can be used to estimate the severity of SDB. Conclusions: Acoustic biomarkers may be useful to accurately predict the severity of SDB from patients' breathing sounds during sleep, without conducting attended full-night PSG. This study implies that any device with a microphone, such as a smartphone, could potentially be utilized outside specialized facilities as a screening tool for detecting SDB. The work was partly supported by SNUBH Grant #06-2014-157 and the Bio and Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government, Ministry of Science, ICT & Future Planning (MSIP) (NRF-2015M3A9D7066972, NRF-2015M3A9D7066980).
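    The evaluation setup above can be sketched as AHI binning into severity groups plus tenfold index splitting. The cutoffs of 5/15/30 events per hour are the common clinical convention; the paper's exact thresholds and fold construction are assumptions here:

    ```python
    def severity_group(ahi):
        """Four-group SDB severity by AHI, using conventional clinical cutoffs."""
        if ahi < 5:
            return "normal"
        if ahi < 15:
            return "mild"
        if ahi < 30:
            return "moderate"
        return "severe"

    def binary_label(ahi, cutoff=15):
        """Binary screening label, e.g. AHI >= 15 as 'positive'."""
        return "positive" if ahi >= cutoff else "negative"

    def tenfold_splits(items, k=10):
        """Yield (train, test) partitions for k-fold cross-validation.

        Simple striding, no shuffling or stratification.
        """
        folds = [items[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            yield train, test
    ```

    Each subject appears in exactly one test fold, so every accuracy figure is computed on subjects the model did not see during training for that fold.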