Automatic Recognition, Segmentation, and Sex Assignment of Nocturnal Asthmatic Coughs and Cough Epochs in Smartphone Audio Recordings: Observational Field Study
Background: Asthma is one of the most prevalent chronic respiratory diseases. Despite increased investment in treatment, little progress has been made in the early recognition and treatment of asthma exacerbations over the last decade. Nocturnal cough monitoring may provide an opportunity to identify patients at risk for imminent exacerbations. Recently developed approaches enable smartphone-based cough monitoring. These approaches, however, have not undergone longitudinal overnight testing nor have they been specifically evaluated in the context of asthma. Also, the problem of distinguishing partner coughs from patient coughs when two or more people are sleeping in the same room using contact-free audio recordings remains unsolved.
Objective: The objective of this study was to evaluate the automatic recognition and segmentation of nocturnal asthmatic coughs and cough epochs in smartphone-based audio recordings that were collected in the field. We also aimed to distinguish partner coughs from patient coughs in contact-free audio recordings by classifying coughs based on sex.
Methods: We used a convolutional neural network model that we had developed in previous work for automated cough recognition. We further used techniques (such as ensemble learning, minibatch balancing, and thresholding) to address the imbalance in the data set. We evaluated the classifier in a classification task and a segmentation task. The cough-recognition classifier served as the basis for the cough-segmentation classifier from continuous audio recordings. We compared automated cough and cough-epoch counts to human-annotated cough and cough-epoch counts. We employed Gaussian mixture models to build a classifier for cough and cough-epoch signals based on sex.
Results: We recorded audio data from 94 adults with asthma (mean age 43 years, SD 16 years; female: 54/94, 57%; male: 40/94, 43%). Audio data were recorded by each participant in their everyday environment using a smartphone placed next to their bed; recordings were made over a period of 28 nights. Out of 704,697 sounds, we identified 30,304 sounds as coughs. A total of 26,166 coughs occurred without a 2-second pause between coughs, yielding 8238 cough epochs. The ensemble classifier performed well, with a Matthews correlation coefficient of 92% in a pure classification task, and achieved cough counts comparable to those of human annotators in the segmentation of coughing. The count difference between automated and human-annotated coughs was a mean –0.1 (95% CI –12.11, 11.91) coughs. The count difference between automated and human-annotated cough epochs was a mean 0.24 (95% CI –3.67, 4.15) cough epochs. The Gaussian mixture model cough epoch–based sex classification performed best, yielding an accuracy of 83%.
Conclusions: Our study showed longitudinal nocturnal cough and cough-epoch recognition from nightly recorded smartphone-based audio from adults with asthma. The model distinguishes partner cough from patient cough in contact-free recordings by identifying cough and cough-epoch signals that correspond to the sex of the patient. This research represents a step towards enabling passive and scalable cough monitoring for adults with asthma.
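The Methods mention thresholding to handle class imbalance, and the Results report a Matthews correlation coefficient (MCC). A minimal sketch of how such a decision threshold might be tuned on held-out classifier scores is shown below; the score distributions, grid, and the `tune_threshold` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion counts."""
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den > 0 else 0.0

def tune_threshold(scores, labels, grid=None):
    """Pick the decision threshold that maximises MCC on held-out data.

    `scores` are classifier probabilities for the 'cough' class and
    `labels` are 0/1 ground truth. Hypothetical helper for illustration.
    """
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    best_t, best_m = 0.5, -1.0
    for t in grid:
        pred = (scores >= t).astype(int)
        tp = int(np.sum((pred == 1) & (labels == 1)))
        fp = int(np.sum((pred == 1) & (labels == 0)))
        tn = int(np.sum((pred == 0) & (labels == 0)))
        fn = int(np.sum((pred == 0) & (labels == 1)))
        m = mcc(tp, fp, tn, fn)
        if m > best_m:
            best_t, best_m = t, m
    return best_t, best_m
```

On an imbalanced set (many non-cough sounds, few coughs), the MCC-optimal threshold usually differs from the default 0.5, which is the point of tuning it explicitly.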
Analyzing Cough Sounds for the Evidence of Covid-19 using Deep Learning Models
Early detection of infectious diseases such as Covid-19 is essential to prevent their spread. Cough remains one of the key symptoms in both severe and non-severe Covid-19 infections, even though symptoms present differently across sociodemographic categories. Alongside clinical studies, analyzing cough sounds with AI-driven tools could add value to decision-making. Moreover, AI-driven tools are essential for mass screening and for serving resource-constrained regions. In this thesis, Convolutional Neural Network (CNN)-based deep learning models are studied to analyze cough sounds and detect possible evidence of Covid-19. In addition to a custom CNN, pre-trained deep learning models (e.g., VGG-16, ResNet-50, MobileNetV1, and DenseNet121) are employed on a publicly available dataset. In our findings, the custom CNN performed comparatively better than the pre-trained deep learning models.
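CNN pipelines of this kind typically operate on a time-frequency representation of the cough audio rather than the raw waveform. A minimal numpy sketch of a log-power spectrogram is shown below; the frame and hop sizes are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np

def log_spectrogram(audio, frame_len=512, hop=256):
    """Log-power spectrogram of a mono recording.

    Returns an array of shape (n_frames, frame_len // 2 + 1), the
    kind of 2-D input a CNN consumes after resizing/normalisation.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-10)  # small epsilon avoids log(0)
```

A pre-trained image model such as those listed above would then be applied to this 2-D representation (usually replicated to three channels).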
Towards the Design of a Smartphone-Based Biofeedback Breathing Training: Identifying Diaphragmatic Breathing Patterns From a Smartphone's Microphone
Asthma, diabetes, hypertension, and major depression are non-communicable diseases (NCDs) that impose a major burden on global health. Stress is linked to both the causes and consequences of NCDs, and it has been shown that biofeedback-based breathing trainings (BBTs) are effective in coping with stress. Here, diaphragmatic breathing, i.e., deep abdominal breathing, is among the most distinguished breathing techniques. However, the high costs and low scalability of state-of-the-art BBTs, which require expensive medical hardware and health professionals, represent a significant barrier to their widespread adoption. Health information technology has the potential to address this important practical problem. In particular, it has been shown that a smartphone microphone can record audio signals from exhalation at a quality comparable to professional respiratory devices. As this finding is highly relevant for low-cost and scalable smartphone-based BBTs (SBBTs) and, to the best of our knowledge, has not been investigated so far, we aim to design and evaluate the efficacy of such an SBBT. As a first step, we apply design-science research and investigate in this research-in-progress the relationship between diaphragmatic breathing and its acoustic components using only a smartphone's microphone. For that purpose, we review related work and develop our hypotheses based on justificatory knowledge from physiology, physics, and acoustics. We then describe a laboratory study that is used to test our hypotheses. We conclude with a brief outlook on future work.
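One building block of such a smartphone-based training is detecting exhalation bursts in the microphone signal. A minimal sketch, assuming a simple RMS-envelope threshold; the window length and threshold are illustrative choices, not values from the study.

```python
import numpy as np

def count_exhalations(audio, fs, win_s=0.1, thresh=0.1):
    """Count exhalation bursts in a microphone signal.

    Smooths the squared signal into an RMS envelope, marks samples
    above a fraction of the peak envelope as 'active', and counts
    rising edges of that mask as burst starts.
    """
    win = max(1, int(win_s * fs))
    env = np.sqrt(np.convolve(audio ** 2, np.ones(win) / win, mode="same"))
    active = env > thresh * env.max()
    # rising edges of the active mask = starts of exhalation bursts
    return int(np.sum(np.diff(active.astype(int)) == 1) + int(active[0]))
```

A real biofeedback loop would additionally track burst duration and depth, but the envelope-plus-threshold idea is the common starting point.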
Towards using Cough for Respiratory Disease Diagnosis by leveraging Artificial Intelligence: A Survey
Cough acoustics contain a multitude of vital information about pathomorphological alterations in the respiratory system. Reliable and accurate detection of cough events, by investigating the underlying latent cough features, and disease diagnosis can play an indispensable role in revitalizing healthcare practices. The recent application of Artificial Intelligence (AI) and advances in ubiquitous computing for respiratory disease prediction have created an auspicious trend and a myriad of future possibilities in the medical domain. In particular, there is an expeditiously emerging trend of Machine Learning (ML)- and Deep Learning (DL)-based diagnostic algorithms exploiting cough signatures. The enormous body of literature on cough-based AI algorithms demonstrates that these models can play a significant role in detecting the onset of a specific respiratory disease. However, it is pertinent to collect the information from all relevant studies in an exhaustive manner so that medical experts and AI scientists can analyze the decisive role of AI/ML. This survey offers a comprehensive overview of cough data-driven ML/DL detection and preliminary diagnosis frameworks, along with a detailed list of significant features. We investigate the mechanism that causes cough and the latent cough features of the respiratory modalities. We also analyze customized cough monitoring applications and their AI-powered recognition algorithms. Challenges and prospective future research directions to develop practical, robust, and ubiquitous solutions are also discussed in detail.
Comment: 30 pages, 12 figures, 9 tables
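As a generic example of the hand-crafted "significant features" such surveys catalogue, the sketch below computes two classic frame-level features used in cough detection, short-time energy and zero-crossing rate. The frame and hop sizes are illustrative assumptions, not a specific paper's pipeline.

```python
import numpy as np

def frame_features(audio, frame_len=400, hop=200):
    """Per-frame short-time energy and zero-crossing rate (ZCR).

    Coughs tend to show a sharp energy burst with a characteristic
    ZCR profile, which is why these two features appear in many
    classical cough-detection pipelines.
    """
    n = 1 + (len(audio) - frame_len) // hop
    feats = np.empty((n, 2))
    for i in range(n):
        f = audio[i * hop:i * hop + frame_len]
        feats[i, 0] = np.mean(f ** 2)                            # energy
        feats[i, 1] = np.mean(np.abs(np.diff(np.sign(f))) > 0)   # ZCR
    return feats
```

ML/DL pipelines typically stack such frame-level features (or learned spectrogram embeddings) before classification.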
SpiroMask: Measuring Lung Function Using Consumer-Grade Masks
According to the World Health Organisation (WHO), 235 million people suffer from respiratory illnesses and four million people die annually due to air pollution. Regular lung health monitoring can lead to prognoses about deteriorating lung health conditions. This paper presents our system SpiroMask, which retrofits a microphone in consumer-grade masks (N95 and cloth masks) for continuous lung health monitoring. We evaluate our approach on 48 participants (including 14 with lung health issues) and find that we can estimate parameters such as lung volume and respiration rate within the error range approved by the American Thoracic Society (ATS). Further, we show that our approach is robust to sensor placement inside the mask.
Comment: Accepted in the ACM Transactions on Computing for Healthcare (HEALTH)
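Respiration rate, one of the parameters SpiroMask estimates, can be read off a breathing envelope as its dominant frequency in the physiologically plausible band. The sketch below shows this generic spectral-peak idea; the band limits are assumptions, and this is not the SpiroMask algorithm itself.

```python
import numpy as np

def respiration_rate_bpm(envelope, fs):
    """Estimate respiration rate (breaths per minute).

    Takes the dominant frequency of a breathing envelope within
    0.1-1.0 Hz (6-60 breaths/min, an assumed plausible range).
    """
    x = envelope - np.mean(envelope)          # remove DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak
```

Longer recording windows narrow the frequency resolution (1/T Hz), which is why rate estimates are usually made over tens of seconds.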
Federated Few-shot Learning for Cough Classification with Edge Devices
Automatically classifying cough sounds is one of the most critical tasks for the diagnosis and treatment of respiratory diseases. However, collecting a large labeled cough dataset is challenging, mainly due to the high cost of labeling, data scarcity, and privacy concerns. In this work, our aim is to develop a framework that can perform cough classification effectively even when large amounts of cough data are not available, while also addressing privacy concerns. Specifically, we formulate a new problem to tackle these challenges and adopt few-shot learning and federated learning to design a novel framework, termed F2LCough, for solving it. We illustrate the superiority of our method compared with other approaches on the COVID-19 Thermal Face & Cough dataset, on which F2LCough achieves an average F1-score of 86%. Our results show the feasibility of combining few-shot learning with federated learning to build a classification model of cough sounds. This new methodology is able to classify cough sounds in data-scarce situations and maintain privacy properties. The outcomes of this work can serve as a fundamental framework for building support systems for the detection and diagnosis of cough-related diseases.
Comment: 21 pages, 5 figures
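The federated half of such a framework typically rests on federated averaging: each edge device trains locally and only model parameters, never raw cough audio, are shared and combined. A minimal sketch of the standard FedAvg aggregation step (a generic illustration, not the F2LCough implementation):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging of model parameters.

    `client_weights` is a list of dicts mapping parameter names to
    numpy arrays; `client_sizes` holds each client's local sample
    count. Clients with more data get proportionally more weight.
    """
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(client_weights, client_sizes))
            for k in keys}
```

Because only aggregated parameters leave the device, the raw recordings stay local, which is the privacy property the abstract refers to.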
Portable spirometer using pressure-volume method with Bluetooth integration to Android smartphone
This paper presents a study on an embedded spirometer using the low-cost MPX5100DP pressure sensor and an Arduino Uno board to measure the exhaled air flow rate and calculate forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and the FEV1/FVC ratio of human lung volume. The exhaled air flow rate was derived from the differential pressure across sections of a mouthpiece tube using the venturi effect equation. The constructed mouthpiece and embedded spirometer achieved 96.27% FVC reading accuracy with a deviation of 0.09 L and 98.05% FEV1 accuracy with a deviation of 0.05 L compared to a reference spirometer. The spirometer integrates an HC-05 Bluetooth module to transmit spirometry data to a smartphone for display and recording in an Android application for further chronic obstructive pulmonary disease (COPD) diagnosis.
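The venturi effect equation referred to above follows from Bernoulli's principle plus continuity: for a tube narrowing from cross-section A1 to A2, the volumetric flow is Q = A2 * sqrt(2*dp / (rho * (1 - (A2/A1)^2))). A small sketch of that calculation; the tube diameters and air density in the example are illustrative assumptions, not the dimensions used in the paper.

```python
import numpy as np

def venturi_flow(dp_pa, d1_m, d2_m, rho=1.2):
    """Volumetric flow rate (m^3/s) through a venturi tube.

    dp_pa : differential pressure between the wide and narrow
            sections (Pa), as read from a sensor like the MPX5100DP
    d1_m, d2_m : wide and narrow inner diameters (m)
    rho : air density (kg/m^3), ~1.2 at room conditions
    """
    a1 = np.pi * (d1_m / 2) ** 2
    a2 = np.pi * (d2_m / 2) ** 2
    return a2 * np.sqrt(2.0 * dp_pa / (rho * (1.0 - (a2 / a1) ** 2)))
```

Integrating Q over the exhalation yields volume, from which FVC and FEV1 follow as the total and the first-second portion, respectively.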