Respiratory Sound Analysis for the Evidence of Lung Health
Audio-based technologies have advanced significantly over the years in several fields, including the healthcare industry. Lung sound analysis is a potential source of noninvasive, quantitative, and objective information on the status of the pulmonary system. To obtain it, medical professionals listen with a stethoscope to sounds heard over the chest wall at different positions; this practice, known as auscultation, is important in diagnosing respiratory diseases. Respiratory sounds are sometimes interpreted inaccurately, however, because clinicians lack sufficient expertise or because trainees such as interns and residents misidentify them. We have built a tool to distinguish healthy respiratory sounds from unhealthy ones recorded from patients carrying respiratory infections. The audio clips were characterized using Linear Predictive Cepstral Coefficient (LPCC)-based features, and the highest accuracy of 99.22% was obtained with a Multi-Layer Perceptron (MLP)-based classifier on the publicly available ICBHI17 respiratory sounds dataset [1] of 6800+ clips. The system also outperformed established works in the literature and other machine learning techniques. In future work, we will use larger datasets with other acoustic techniques along with deep learning-based approaches, and try to identify the nature and severity of infection from respiratory sounds.
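The feature-extraction stage described above can be sketched in NumPy. The abstract gives no implementation details, so the LPC order, frame windowing, and function names below are illustrative assumptions; the recursions themselves (Levinson-Durbin for the LPC fit, then the standard LPC-to-cepstrum conversion) are the textbook definitions of LPCC:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the normal equations for LPC coefficients a (a[0] = 1)
    from the autocorrelation sequence r via the Levinson-Durbin recursion."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                   # prediction error shrinks each order
    return a, err

def lpcc(frame, order=12):
    """LPCC features for one audio frame (order is an assumed setting):
    window, autocorrelate, fit LPC, then apply the cepstral recursion."""
    w = frame * np.hamming(len(frame))
    r = np.correlate(w, w, mode="full")[len(w) - 1:len(w) + order]
    a, _ = levinson_durbin(r, order)
    b = -a[1:]                               # prediction coefficients (sign convention)
    c = np.zeros(order)
    for m in range(1, order + 1):
        c[m - 1] = b[m - 1] + sum((k / m) * c[k - 1] * b[m - k - 1]
                                  for k in range(1, m))
    return c
```

One LPCC vector per frame would then be pooled or stacked per clip before classification.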
Evolution of the Stethoscope: Advances with the Adoption of Machine Learning and Development of Wearable Devices
The stethoscope has long been used for the examination of patients, but the importance of auscultation has declined owing to its limitations and the development of other diagnostic tools. However, auscultation is still recognized as a primary diagnostic method because it is non-invasive and provides valuable information in real time. To overcome the limitations of conventional stethoscopes, digital stethoscopes with machine learning (ML) algorithms have been developed. We can now record and share respiratory sounds, and artificial intelligence (AI)-assisted auscultation using ML algorithms can distinguish the type of sound. Recently, demand has increased for remote care and for non-face-to-face treatment of diseases requiring isolation, such as coronavirus disease 2019 (COVID-19). To address these needs, wireless and wearable stethoscopes are being developed, aided by advances in battery technology and integrated sensors. This review presents the history of the stethoscope and the classification of respiratory sounds, describes ML algorithms, and introduces new auscultation methods based on AI-assisted analysis and wireless or wearable stethoscopes.
Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database-HF_Lung_V1
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios, such as monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchi labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests for long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also compared performance between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in terms of F1 scores and areas under the receiver operating characteristic curves in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
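Two of the axes compared in this benchmark are the recurrent cell's gating (GRU vs. LSTM) and the direction of processing (unidirectional vs. bidirectional). As a minimal illustration of both, here is a NumPy sketch of a GRU cell and a bidirectional pass that emits one concatenated hidden state per frame, as needed for frame-wise breath-phase labelling; the weight shapes, names, and sizes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h~."""
    def __init__(self, n_in, n_hid, rng):
        s = 1.0 / np.sqrt(n_hid)
        self.Wz = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.Wr = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.Wh = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.n_hid = n_hid

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)             # update gate: how much to overwrite
        r = sigmoid(self.Wr @ xh)             # reset gate: how much history to use
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_cand     # convex mix of old and candidate state

def run(cell, xs, reverse=False):
    """Run the cell over a (T, n_in) sequence, returning (T, n_hid) states."""
    h = np.zeros(cell.n_hid)
    out = np.zeros((len(xs), cell.n_hid))
    order = reversed(range(len(xs))) if reverse else range(len(xs))
    for t in order:
        h = cell.step(xs[t], h)
        out[t] = h
    return out

def bigru(fwd, bwd, xs):
    """Bidirectional pass: concatenate forward and backward states per frame."""
    return np.concatenate([run(fwd, xs), run(bwd, xs, reverse=True)], axis=1)
```

The bidirectional variant sees future context at every frame, which is one plausible reason the bidirectional models dominate here; the CNN front end in the CNN-RNN variants would replace the raw `xs` frames with learned spectro-temporal features.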
Automatic lung health screening using respiratory sounds
Audio-based technologies have advanced significantly over the years in several fields, and healthcare is no exception. One such avenue is health screening based on respiratory sounds. In this paper, we developed a tool to detect respiratory sounds that come from patients carrying respiratory infections. Linear Predictive Cepstral Coefficient (LPCC)-based features were used to characterize the audio clips. With a Multilayer Perceptron (MLP)-based classifier, our experiment achieved the highest accuracy of 99.22%, tested on the publicly available respiratory sounds dataset ICBHI17 (Rocha et al., Physiol. Meas. 40(3):035001, 2019) of 6800+ clips. Our results outperformed common works in the literature as well as other popular machine learning classifiers.
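As a minimal sketch of the classification stage, here is a one-hidden-layer MLP trained by full-batch gradient descent on binary cross-entropy; the hidden size, learning rate, and synthetic feature vectors (standing in for per-clip LPCC features) are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def train_mlp(X, y, n_hid=16, lr=0.5, epochs=500, seed=0):
    """One-hidden-layer perceptron with a sigmoid output unit,
    trained by full-batch gradient descent on binary cross-entropy."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                    # hidden activations
        p = 1 / (1 + np.exp(-(H @ W2 + b2)))        # P(unhealthy | features)
        g = (p - y[:, None]) / len(X)               # gradient of BCE w.r.t. logit
        gh = (g @ W2.T) * (1 - H ** 2)              # backprop through tanh
        W2 -= lr * H.T @ g;  b2 -= lr * g.sum(0)
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return ((H @ W2 + b2) > 0).astype(int).ravel()  # sigmoid > 0.5 iff logit > 0
```

Feeding it two well-separated Gaussian blobs as stand-in feature vectors for the healthy and unhealthy classes, e.g. `train_mlp(np.vstack([Xh, Xu]), y)`, yields a classifier whose training accuracy can then be checked with `predict`.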