14 research outputs found

    2D respiratory sound analysis to detect lung abnormalities

    In this paper, we analyze deep visual features from 2D data representations of respiratory sound to detect evidence of lung abnormalities. The primary motivation is that visual cues are more important in decision-making than raw data (lung sound). Early detection and prompt treatment are essential for possible future respiratory disorders, and respiratory sound is proven to be one of the biomarkers. In contrast to state-of-the-art approaches, we aim at understanding/analyzing visual features using our Convolutional Neural Network (CNN)-tailored deep learning models, where we consider all possible 2D representations such as the spectrogram, Mel-frequency Cepstral Coefficients (MFCC), spectral centroid, and spectral roll-off. In our experiments, using the publicly available respiratory sound database ICBHI 2017 (5.5 hours of recordings containing 6898 respiratory cycles from 126 subjects), we achieved the highest performance, an area under the curve of 0.79, with the spectrogram, as opposed to 0.48 AUC for the raw data, using a pre-trained deep learning model, VGG16. We also applied machine learning algorithms to reliable data to improve performance. Our study proved that 2D data representations can help better understand/analyze lung abnormalities compared to 1D data. Our findings are also contrasted with those of earlier studies. For generality, we used MFCC features to determine whether image data or raw data produced superior results.
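
The 2D views named above can be sketched with stock NumPy/SciPy routines. This is an illustrative reimplementation on a synthetic tone, not the authors' VGG16 pipeline; the FFT size, hop, and the 85% roll-off fraction are assumed conventions.

```python
import numpy as np
from scipy import signal

def spectral_views(y, sr, n_fft=1024, hop=512):
    """Magnitude spectrogram plus per-frame spectral centroid and
    85% roll-off -- 2D/1D views of an audio signal."""
    f, _, S = signal.stft(y, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    power = np.abs(S) ** 2                          # (freq_bins, frames)
    # Centroid: power-weighted mean frequency per frame
    centroid = (f[:, None] * power).sum(0) / (power.sum(0) + 1e-12)
    # Roll-off: lowest frequency below which 85% of the energy lies
    cum = np.cumsum(power, axis=0)
    rolloff = f[np.argmax(cum >= 0.85 * cum[-1], axis=0)]
    return np.abs(S), centroid, rolloff

sr = 4000
t = np.arange(sr) / sr                              # 1 s synthetic "lung sound"
y = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)
mag, centroid, rolloff = spectral_views(y, sr)      # mag is the 2D input image
```

In a pipeline like the paper's, `mag` (or its dB version) would be rendered as an image and fed to the CNN, while `centroid` and `rolloff` give complementary 1D descriptors.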

    Respiratory Sound Analysis for the Evidence of Lung Health

    Audio-based technologies have changed significantly over the years in several different fields, including the healthcare industry. Analysis of lung sounds is a potential source of noninvasive, quantitative and objective information on the status of the pulmonary system. To obtain it, medical professionals listen to sounds heard over the chest wall at different positions with a stethoscope, a procedure known as auscultation that is important in diagnosing respiratory diseases. At times respiratory sounds are interpreted inaccurately because a clinician lacks considerable expertise, or trainees such as interns and residents misidentify them. We have built a tool to distinguish healthy respiratory sounds from unhealthy ones recorded from patients carrying respiratory infections. The audio clips were characterized using Linear Predictive Cepstral Coefficient (LPCC)-based features, and the highest possible accuracy of 99.22% was obtained with a Multi-Layer Perceptron (MLP)-based classifier on the publicly available ICBHI17 respiratory sounds dataset [1] of 6800+ clips. The system also outperformed established works in the literature and other machine learning techniques. In the future we will use larger datasets with other acoustic techniques along with deep learning-based approaches, and try to identify the nature and severity of infection using respiratory sounds.
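
LPCC extraction of the kind described can be sketched as autocorrelation-method LPC (Levinson-Durbin) followed by the standard LPC-to-cepstrum recursion. The model order and cepstral length below are assumptions; the paper's exact framing and parameters are not stated.

```python
import numpy as np

def lpc(y, order):
    """Autocorrelation-method LPC via Levinson-Durbin.
    Returns the prediction polynomial a = [1, a1, ..., ap]."""
    r = np.correlate(y, y, mode="full")[len(y) - 1:len(y) + order]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # reflect previous coefficients
        a[i] = k
        err *= 1.0 - k * k
    return a

def lpcc(a, n_ceps):
    """LPC-to-cepstrum recursion: c_n = -a_n - sum_k (k/n) c_k a_{n-k}."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = -a[n] if n <= p else 0.0
        for k in range(max(1, n - p), n):
            acc -= (k / n) * c[k] * a[n - k]
        c[n] = acc
    return c[1:]

# Sanity check on a known AR(2) process: y[t] = 0.75*y[t-1] - 0.5*y[t-2] + e
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
y = np.zeros(5000)
for tt in range(2, 5000):
    y[tt] = 0.75 * y[tt - 1] - 0.5 * y[tt - 2] + e[tt]
a = lpc(y, 2)          # should land near [1, -0.75, 0.5]
feat = lpcc(a, 12)     # 12-dimensional LPCC feature vector
```

In practice these features would be computed per frame of each audio clip and fed to the MLP classifier.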

    Machine Learning-Based Classification of Pulmonary Diseases through Real-Time Lung Sounds

    The study presents a computer-based automated system that employs machine learning to classify pulmonary diseases using lung sound data collected from hospitals. Denoising techniques, such as the discrete wavelet transform and variational mode decomposition, are applied to enhance classifier performance. The system combines cepstral features, such as Mel-frequency cepstrum coefficients and gammatone frequency cepstral coefficients, for classification. Four machine learning classifiers, namely the decision tree, k-nearest neighbor, linear discriminant analysis, and random forest, are compared. Evaluation metrics such as accuracy, recall, specificity, and F1 score are employed. This study includes patients affected by chronic obstructive pulmonary disease, asthma, and bronchiectasis, as well as healthy individuals. The results demonstrate that the random forest classifier outperforms the others, achieving an accuracy of 99.72% along with 100% recall, specificity, and F1 scores. The study suggests that the computer-based system serves as a decision-making tool for classifying pulmonary diseases, especially in resource-limited settings.
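
A minimal stand-in for the wavelet-denoising step: a hand-rolled multi-level Haar DWT with universal soft thresholding, rather than the paper's exact wavelet family and settings, which are not specified here.

```python
import numpy as np

def haar_denoise(x, levels=3):
    """Multi-level Haar DWT, soft-threshold the detail bands with the
    universal threshold sigma*sqrt(2 ln N), then reconstruct.
    Requires len(x) divisible by 2**levels."""
    a = x.astype(float)
    details = []
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / np.sqrt(2))   # detail band
        a = (a[0::2] + a[1::2]) / np.sqrt(2)               # approximation
    sigma = np.median(np.abs(details[0])) / 0.6745         # noise from finest band
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):                            # inverse transform
        rec = np.empty(2 * len(a))
        rec[0::2] = (a + d) / np.sqrt(2)
        rec[1::2] = (a - d) / np.sqrt(2)
        a = rec
    return a

rng = np.random.default_rng(1)
N = 1024
tt = np.arange(N)
clean = np.sin(2 * np.pi * 4 * tt / N)          # smooth "lung sound" surrogate
noisy = clean + 0.5 * rng.normal(size=N)
den = haar_denoise(noisy, levels=3)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((den - clean) ** 2))
```

The denoised signal would then go to feature extraction (MFCC/gammatone cepstra) and the classifier bank.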

    Characterization And Classification Of Asthmatic Wheeze Sounds According To Severity Level Using Spectral Integrated Features

    This study aimed to investigate and classify wheeze sounds of asthmatic patients according to their severity level (mild, moderate and severe) using spectral integrated (SI) features. Method: Segmented and validated wheeze sounds were obtained from auscultation recordings of the trachea and lower lung base of 55 asthmatic patients during tidal breathing manoeuvres. The segments were multi-labelled into 9 groups based on the auscultation location and/or breath phases. Bandwidths were selected based on the physiology, and a corresponding SI feature was computed for each segment. Univariate and multivariate statistical analyses were then performed to investigate the discriminatory behaviour of the features with respect to the severity levels in the various groups. The asthmatic severity levels in the groups were then classified using the ensemble (ENS), support vector machine (SVM) and k-nearest neighbour (KNN) methods. Results and conclusion: All statistical comparisons exhibited a significant difference (p < 0.05) among the severity levels, with few exceptions. In the classification experiments, the ensemble classifier exhibited better performance in terms of sensitivity, specificity and positive predictive value (PPV). The trachea inspiratory group showed the highest classification performance compared with all the other groups. Overall, the best PPVs for the mild, moderate and severe samples were 95% (ENS), 88% (ENS) and 90% (SVM), respectively. With respect to location, the tracheal related wheeze sounds were the most sensitive and specific predictors of asthma severity levels. In addition, the classification performances of the inspiratory and expiratory related groups were comparable, suggesting that the samples from these breath phases are equally informative.
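
A spectral integrated (band-power) feature of the kind described can be sketched by integrating a Welch PSD over a chosen band. The 300-500 Hz band below is an arbitrary placeholder, not the paper's physiology-derived bandwidth.

```python
import numpy as np
from scipy.signal import welch

def spectral_integrated(y, sr, f_lo, f_hi):
    """Integrate the Welch PSD over [f_lo, f_hi] Hz (rectangle rule)."""
    f, psd = welch(y, fs=sr, nperseg=256)
    band = (f >= f_lo) & (f <= f_hi)
    return float(psd[band].sum() * (f[1] - f[0]))

sr = 4000
tt = np.arange(2 * sr) / sr
wheeze = np.sin(2 * np.pi * 400 * tt)           # 400 Hz wheeze-like tone
si_low = spectral_integrated(wheeze, sr, 300, 500)   # band containing the tone
si_high = spectral_integrated(wheeze, sr, 800, 1000) # band without it
```

Each validated segment would yield one SI value per selected bandwidth, forming the feature vector passed to the ENS/SVM/KNN classifiers.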

    Identification Of Asthma Severity Levels Through Wheeze Sound Characterization And Classification Using Integrated Power Features

    This study aimed to investigate and classify wheeze sound characteristics according to asthma severity levels (mild, moderate and severe) using integrated power (IP) features. Method: Validated and segmented wheeze sounds were obtained from the lower lung base (LLB) and trachea recordings of 55 asthmatic patients with different severity levels during tidal breathing manoeuvres. From the segments, nine datasets were obtained based on the auscultation location, breath phases and their combination. In this study, IP features were extracted for assessing asthma severity. Subsequently, univariate and multivariate (MANOVA) statistical analyses were separately implemented to analyse the behaviour of wheeze sounds according to severity levels. Furthermore, the ensemble (ENS), k-nearest neighbour (KNN) and support vector machine (SVM) classifiers were applied to classify the asthma severity levels. Results and conclusion: The univariate results of this study indicated that the majority of features significantly discriminated (p < 0.05) the severity levels in all the datasets. The MANOVA results yielded a significantly (p < 0.05) large effect size in all datasets (including LLB-related) and almost all post hoc results were significant (p < 0.05). A comparison of the performance of classifiers revealed that eight of the nine datasets showed improved performance with the ENS classifier. The trachea inspiratory (T-Inspir) dataset produced the highest performance. The overall best positive predictive rates (PPR) for the mild, moderate and severe severity levels were 100% (KNN), 92% (SVM) and 94% (ENS) respectively. Analysis related to auscultation locations revealed that tracheal wheeze sounds are more specific and sensitive predictors of asthma severity. Additionally, phase-related investigations indicated that expiratory and inspiratory wheeze sounds are equally informative for the classification of asthma severity.
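
The three-classifier comparison can be sketched with scikit-learn on synthetic stand-in features. A random forest stands in for the "ensemble" classifier here; the paper's actual ensemble method and hyperparameters are not specified.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for IP feature vectors with three severity labels
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
models = {
    "ENS": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
# 5-fold cross-validated accuracy per classifier
scores = {name: float(cross_val_score(m, X, y, cv=5).mean())
          for name, m in models.items()}
```

The same loop, run once per dataset (nine in the paper), reproduces the classifier-by-dataset comparison structure.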

    Adaptation of Pre-trained Deep Neural Networks for Sound Event Detection Facilitating Smart Homecare

    As foreseen by numerous researchers, the elderly share of the global population is expected to grow by over 30% by 2050, which has urged the development of cost-efficient and effective automated sound recognition systems to assist the well-being of self-living older people in their homecare environment. Consequently, recent research on sound event classification and detection systems has increasingly adapted the pre-trained model YAMNet, because it can classify 521 sound event classes and was trained with the large-scale AudioSet dataset. Despite the huge potential, our early investigation observed the main problem of using the YAMNet predictions: the difficulty of finding associated YAMNet classes for the target events predefined in public benchmark acoustic datasets. This study aimed to investigate this class-mapping complication and adapt the YAMNet pre-trained model into a sound event detection (SED) system with temporal information for monitoring abnormalities in residential homecare environments. A new Y-MCC methodology was developed based on the Matthews correlation coefficient (MCC) to resolve the original YAMNet class map and produce new class maps according to MCC thresholds. The Y-MCC system successfully demonstrated the feasibility of the SED system by achieving the best overall micro-average F1 score of 59.46% on the SINS dataset, class-wise F1 scores of 100% for 'sheep' and 96.8% for 'brushing teeth' in ESC-50, 94.7% for 'vacuum cleaner' in SINS, and 58.5% for 'water tap running' in the TUT-SED 2016 Home dataset. This indicates the potential use of the Y-MCC method for facilitating automated sound event monitoring systems in smart homecare applications.
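
The MCC-based class mapping can be illustrated on toy frame labels: keep each YAMNet class whose frame-wise activity agrees with the target event above an MCC threshold. The class names, threshold, and data below are hypothetical, not the Y-MCC system's actual map.

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary frame labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else float(tp * tn - fp * fn) / denom

def map_classes(target, yamnet, names, thr=0.5):
    """Keep the model classes whose per-frame activity reaches MCC >= thr
    against the target event's frame labels."""
    return [n for n, col in zip(names, yamnet.T) if mcc(target, col) >= thr]

rng = np.random.default_rng(0)
target = (rng.random(200) > 0.7).astype(int)      # hypothetical event frames
yamnet = rng.integers(0, 2, size=(200, 3))        # 3 hypothetical class outputs
yamnet[:, 1] = target                             # class 1 tracks the event
mapped = map_classes(target, yamnet, ["Dog", "Water tap", "Speech"])
```

Sweeping `thr` over a grid would produce the family of MCC-thresholded class maps the study evaluates.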

    Wheeze Sound Analysis Using Computer-Based Techniques: A Systematic Review

    Wheezes are high-pitched continuous respiratory acoustic sounds which are produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, and 3) analysis using combinations of features and on subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.

    Signal processing algorithms based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezes in single-channel respiratory audio signals

    Auscultation is the first clinical examination that a physician performs to evaluate the condition of the respiratory system, because it is a non-invasive, low-cost, easy-to-perform and safe method for the patient. However, the diagnosis derived from auscultation remains a subjective diagnosis that is conditioned by the ability, experience and training of each physician in the listening and interpretation of respiratory audio signals. As a result, a high percentage of misdiagnoses are produced that endanger the health of patients and increase the cost associated with health centres. This Thesis proposes new methods based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezing sounds, in order to provide a complementary information pathway that helps to improve the reliability of the diagnosis made by the specialist. Tesis Univ. Jaén. Departamento de Ingeniería de Telecomunicación.
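
NMF-based separation of a magnitude spectrogram can be sketched with scikit-learn: factorize V ≈ WH, pick a subset of bases, and reconstruct their contribution through a Wiener-style soft mask. The random input matrix and the choice of "wheeze" bases are hypothetical; the thesis's actual constraints and basis-selection criteria are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(128, 200)))     # stand-in magnitude spectrogram
nmf = NMF(n_components=8, init="random", random_state=0, max_iter=400)
W = nmf.fit_transform(V)                    # spectral bases  (freq x comp)
H = nmf.components_                         # activations     (comp x time)

wheeze_idx = [0, 1]                         # hypothetical wheeze-like bases
Vk = W[:, wheeze_idx] @ H[wheeze_idx, :]    # their partial reconstruction
mask = Vk / (W @ H + 1e-9)                  # Wiener-style soft mask in [0, 1]
wheeze_est = mask * V                       # estimated wheeze spectrogram
```

Applying the mask to the complex STFT and inverting it would yield the separated wheeze waveform.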