1,386 research outputs found

    Towards the Design of a Smartphone-Based Biofeedback Breathing Training: Identifying Diaphragmatic Breathing Patterns From a Smartphone's Microphone

    Asthma, diabetes, hypertension, and major depression are non-communicable diseases (NCDs) that impose a major burden on global health. Stress is linked to both the causes and consequences of NCDs, and biofeedback-based breathing trainings (BBTs) have been shown to be effective in coping with stress. Here, diaphragmatic breathing, i.e., deep abdominal breathing, is among the most distinguished breathing techniques. However, the high cost and low scalability of state-of-the-art BBTs, which require expensive medical hardware and health professionals, represent a significant barrier to their widespread adoption. Health information technology has the potential to address this important practical problem. In particular, it has been shown that a smartphone microphone can record audio signals from exhalation at a quality comparable to professional respiratory devices. As this finding is highly relevant for low-cost and scalable smartphone-based BBTs (SBBTs) and, to the best of our knowledge, has not been investigated so far, we aim to design and evaluate the efficacy of such an SBBT. As a first step, we apply design-science research and investigate, in this research-in-progress, the relationship between diaphragmatic breathing and its acoustic components using only a smartphone's microphone. For that purpose, we review related work and develop our hypotheses based on justificatory knowledge from physiology, physics, and acoustics. We finally describe a laboratory study that is used to test our hypotheses and conclude with a brief outlook on future work.

    Identification of Respiratory Sounds Collected from Microphones Embedded in Mobile Phones

    Sudden deterioration of condition in patients with various diseases, such as cardiopulmonary arrest, may result in poor outcomes even after resuscitation. Early detection of deterioration is important in medical and long-term care settings, regardless of the acute or chronic phase of disease, and early detection with appropriate intervention is essential before resuscitative measures become necessary. Among the vital signs that indicate the general condition of a patient, respiratory rate has a greater ability to predict serious events such as thromboembolism and sepsis, even in early stages, than heart rate or blood pressure. Despite its importance, however, respiratory rate is frequently overlooked and not measured, making it a neglected vital sign. To facilitate the measurement of respiratory rate, a non-invasive method of detecting respiratory sounds was developed based on deep learning technology, using the built-in microphone of a smartphone. Smartphones attached to the bed headboards of 20 participants undergoing polysomnography (PSG) at Kyoto University Hospital recorded respiratory sounds. Sound data were synchronized with overnight respiratory information. After excluding periods of abnormal breathing on the PSG report, sound data were processed in 1-minute periods. Expiration sounds were identified using the pressure flow sensor signal on PSG. Finally, a model to identify the expiration section from the sound information was created using a deep learning algorithm based on a convolutional Long Short-Term Memory (ConvLSTM) network. The accuracy of the learning model in identifying the expiratory section was 0.791, indicating that respiratory rate can be determined using the microphone in a smartphone. By collecting data from more patients and improving the accuracy of this method, respiratory rates could be more easily monitored in all situations, both inside and outside the hospital.
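    Once expiration sections have been identified, turning them into a respiratory rate is a simple counting step. The sketch below illustrates that final step only; the ConvLSTM stage is replaced by an assumed per-frame boolean expiration mask, and the function name and frame rate are illustrative, not from the paper:

```python
def respiratory_rate(expiration_mask, frame_rate_hz):
    """Count expiration onsets (False -> True transitions) in a
    per-frame expiration mask and convert the count to breaths
    per minute."""
    onsets = sum(
        1 for prev, cur in zip(expiration_mask, expiration_mask[1:])
        if cur and not prev
    )
    if expiration_mask and expiration_mask[0]:
        onsets += 1  # recording starts mid-expiration: count it too
    duration_min = len(expiration_mask) / frame_rate_hz / 60.0
    return onsets / duration_min if duration_min > 0 else 0.0

# Toy mask: 60 s of frames at 10 Hz containing 12 expiration sections
mask = ([False] * 30 + [True] * 20) * 12
print(respiratory_rate(mask, 10))  # → 12.0
```

    In practice the mask would come from the classifier's per-frame predictions, so the rate estimate is only as good as the 0.791 identification accuracy reported above.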

    Towards using Cough for Respiratory Disease Diagnosis by leveraging Artificial Intelligence: A Survey

    Cough acoustics contain a multitude of vital information about pathomorphological alterations in the respiratory system. Reliable and accurate detection of cough events, by investigating the underlying latent cough features, and disease diagnosis can play an indispensable role in revitalizing healthcare practices. The recent application of Artificial Intelligence (AI) and advances in ubiquitous computing for respiratory disease prediction have created an auspicious trend and myriad future possibilities in the medical domain. In particular, there is an expeditiously emerging trend of Machine Learning (ML)- and Deep Learning (DL)-based diagnostic algorithms exploiting cough signatures. The enormous body of literature on cough-based AI algorithms demonstrates that these models can play a significant role in detecting the onset of a specific respiratory disease. However, it is pertinent to collect the information from all relevant studies in an exhaustive manner so that medical experts and AI scientists can analyze the decisive role of AI/ML. This survey offers a comprehensive overview of cough data-driven ML/DL detection and preliminary diagnosis frameworks, along with a detailed list of significant features. We investigate the mechanism that causes cough and the latent cough features of the respiratory modalities. We also analyze customized cough monitoring applications and their AI-powered recognition algorithms. Challenges and prospective future research directions to develop practical, robust, and ubiquitous solutions are also discussed in detail. (Comment: 30 pages, 12 figures, 9 tables.)

    Sleep Breath

    Purpose: Diagnosis of obstructive sleep apnea by the gold standard of polysomnography (PSG), or by home sleep testing (HST), requires numerous physical connections to the patient, which may restrict use of these tools for early screening. We hypothesized that normal and disturbed breathing may be detected by a consumer smartphone without physical connections to the patient, using novel algorithms to analyze ambient sound. Methods: We studied 91 patients undergoing clinically indicated PSG. Phase I: In a derivation cohort (n = 32), we placed an unmodified Samsung Galaxy S5 without external microphone near the bed to record ambient sounds. We analyzed 12,352 discrete breath/non-breath sounds (386/patient), from which we developed algorithms to remove noise and detect breaths as envelopes of spectral peaks. Phase II: In a distinct validation cohort (n = 59), we tested the ability of the acoustic algorithms to detect AHI 15 on PSG. Results: Smartphone-recorded sound analyses detected the presence, absence, and types of breath sounds. Phase I: In the derivation cohort, spectral analysis identified breaths and apneas with a c-statistic of 0.91, and loud obstruction sounds with a c-statistic of 0.95, on receiver operating characteristic analyses relative to adjudicated events. Phase II: In the validation cohort, automated acoustic analysis provided a c-statistic of 0.87 compared to whole-night PSG. Conclusions: Ambient sounds recorded from a smartphone during sleep can identify apnea and abnormal breathing verified on PSG. Future studies should determine if this approach may facilitate early screening of SDB to identify at-risk patients for definitive diagnosis and therapy. Clinical trials: NCT03288376; clinicaltrials.org
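    The idea of detecting breaths as envelopes rising out of background noise can be illustrated with a deliberately simplified sketch. It uses a short-time RMS energy envelope rather than the paper's spectral-peak envelopes and noise-removal stage, and all names and thresholds are assumptions:

```python
import math

def energy_envelope(samples, frame_len):
    """Short-time RMS energy envelope over non-overlapping frames."""
    return [
        math.sqrt(sum(x * x for x in samples[i:i + frame_len]) / frame_len)
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def breath_events(envelope, threshold):
    """Count rising edges of the envelope above a fixed threshold;
    each rising edge is treated as one candidate breath sound."""
    above = [e > threshold for e in envelope]
    edges = sum(1 for p, c in zip(above, above[1:]) if c and not p)
    return edges + (1 if above and above[0] else 0)

# Three synthetic "breaths": silence interleaved with tone bursts
quiet = [0.0] * 100
burst = [math.sin(0.1 * i) for i in range(100)]
env = energy_envelope((quiet + burst) * 3, frame_len=50)
print(breath_events(env, threshold=0.1))  # → 3
```

    A real recording would of course need the noise suppression and spectral analysis the authors describe before a fixed threshold could be meaningful.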

    Analyzing Cough Sounds for the Evidence of Covid-19 using Deep Learning Models

    Early detection of infectious disease is essential to prevent or avoid multiple infections, and Covid-19 is an example. In dealing with the Covid-19 pandemic, cough is still ubiquitously presented as one of the key symptoms in both severe and non-severe Covid-19 infections, even though symptoms appear differently across sociodemographic categories. Recognizing the importance of clinical studies, analyzing cough sounds with AI-driven tools could add further value to decision-making. Moreover, for mass screening and for serving resource-constrained regions, AI-driven tools are essential. In this thesis, Convolutional Neural Network (CNN)-tailored deep learning models are studied to analyze cough sounds for possible evidence of Covid-19. In addition to a custom CNN, pre-trained deep learning models (e.g., VGG-16, ResNet-50, MobileNetV1, and DenseNet121) are employed on a publicly available dataset. In our findings, the custom CNN performed comparatively better than the pre-trained deep learning models.

    Deep Transfer Learning based COVID-19 Detection in Cough, Breath and Speech using Bottleneck Features

    We present an experimental investigation into the automatic detection of COVID-19 from coughs, breaths, and speech, as this type of screening is non-contact, does not require specialist medical expertise or laboratory facilities, and can easily be deployed on inexpensive consumer hardware. Smartphone recordings of cough, breath, and speech from subjects around the globe are used for classification by seven standard machine learning classifiers using leave-p-out cross-validation to provide a promising baseline performance. Then, a diverse dataset of 10.29 hours of cough, sneeze, speech, and noise audio recordings was used to pre-train CNN, LSTM, and ResNet50 classifiers, which were then fine-tuned to enhance performance even further. We also extracted bottleneck features from these pre-trained models by removing the final two layers and used them as input to LR, SVM, MLP, and KNN classifiers to detect the COVID-19 signature. The highest AUC of 0.98 was achieved using a transfer-learning-based ResNet50 architecture on coughs from the Coswara dataset. AUCs of 0.94 and 0.92 were achieved by an SVM run on the bottleneck features extracted from breaths in the Coswara dataset and speech recordings in the ComParE dataset, respectively. We conclude that, among all vocal audio, coughs carry the strongest COVID-19 signature, followed by breath and speech, and that transfer learning improves classifier performance, with higher AUC and lower variance across the cross-validation folds. Although these signatures are not perceivable by the human ear, machine-learning-based COVID-19 detection is possible from vocal audio recorded via smartphone.
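    The leave-p-out cross-validation used for the baseline classifiers can be sketched as a splitting routine; the classifier training itself is omitted, and the function name is illustrative rather than from the paper:

```python
from itertools import combinations

def leave_p_out_splits(n_samples, p):
    """Yield (train_indices, test_indices) pairs in which every
    size-p subset of the samples is held out exactly once."""
    all_idx = set(range(n_samples))
    for test in combinations(range(n_samples), p):
        yield sorted(all_idx - set(test)), list(test)

# For 5 samples and p = 2 there are C(5, 2) = 10 held-out sets
splits = list(leave_p_out_splits(5, 2))
print(len(splits))  # → 10
```

    Because the number of splits grows combinatorially, leave-p-out is practical only for small datasets, which is consistent with its use here as a baseline protocol.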

    Novel Approach to Respiratory Rate Measurement Using Resonance Tube with Contradictory Thresholding Technique

    In this paper, we propose a novel approach to respiratory rate measurement using a resonance tube to enhance the performance of a microphone inserted and fixed at the end of the tube to capture the breath sound signal from the mouth and/or nose. The signal is amplified and passed into an envelope detector circuit, after which it is compared with a suitable reference voltage in a comparator circuit to generate a pulse train of square waves synchronized with the respiratory cycle. A simple algorithm running on a small microcontroller detects the rising edge of each consecutive square wave to calculate the respiratory rate, together with an analysis of breathing status. To avoid noise that would cause errors and artifacts in the measuring system, the reference voltage is designed to intelligently adapt itself: low during the expiration period and high during the inspiration and pause periods, following the concept of resolving contradictions from the theory of inventive problem solving (TRIZ). This makes the developed device simple and low-cost, with no need for a complicated filtering system. It can detect breath sound as far as 250 cm from the nose and performed accurately when tested against an end-tidal CO2 capnography device. The results show that the developed device can estimate rates precisely from as low as 0 BrPM to as high as 98 BrPM, and it can detect shallow breathing with breath-sound signals as low as 10 mV.
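    A loose software analogue of the adaptive reference voltage is a comparator with hysteresis: the low threshold plays the role of the reference during expiration, and the high threshold the reference during inspiration and pause. The thresholds and names below are assumptions for illustration, not values from the paper:

```python
def count_breaths(envelope, thr_low=0.05, thr_high=0.2):
    """Comparator with hysteresis: a breath is counted when the
    envelope crosses thr_high from below, and the comparator
    re-arms only after the envelope falls back under thr_low."""
    breaths, active = 0, False
    for v in envelope:
        if not active and v > thr_high:
            breaths += 1
            active = True
        elif active and v < thr_low:
            active = False
    return breaths

# Noisy baseline with three breath bursts
env = ([0.01] * 5 + [0.3, 0.4, 0.3] + [0.01] * 5) * 3
print(count_breaths(env))  # → 3
```

    As in the hardware design, the gap between the two thresholds prevents small fluctuations around a single level from being counted as extra breaths.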

    Breathing Monitoring and Pattern Recognition with Wearable Sensors

    This chapter introduces the anatomy and physiology of the respiratory system and the reasons for measuring breathing events, particularly with wearable sensors. Respiratory monitoring is vital, including the detection of sleep apnea and the measurement of respiratory rate. Automatic detection of breathing patterns is equally important in other respiratory rehabilitation therapies, for example, respiratory-triggered magnetic resonance imaging and synchronized functional electrical stimulation. In this context, the goal of many research groups is to create wearable devices able to monitor breathing activity continuously, under natural physiological conditions, in different environments. Accordingly, wearable sensors that have been used recently, as well as the main signal-processing methods for breathing analysis, are discussed. The following sensor technologies are presented: acoustic, resistive, inductive, humidity, acceleration, pressure, electromyography, impedance, and infrared. New technologies open the door to future methods of noninvasive breathing analysis using wearable sensors associated with machine learning techniques for pattern detection.