41 research outputs found

    Respiratory Sound Analysis for the Evidence of Lung Health

    Audio-based technologies have advanced significantly over the years across several fields, including the healthcare industry. Lung sound analysis is a potential source of noninvasive, quantitative, and objective information on the status of the pulmonary system. To obtain it, medical professionals listen with a stethoscope to sounds heard over the chest wall at different positions, a procedure known as auscultation that is important in diagnosing respiratory diseases. Respiratory sounds are sometimes interpreted inaccurately because the clinician lacks sufficient expertise, or because trainees such as interns and residents misidentify them. We have built a tool to distinguish healthy respiratory sounds from unhealthy ones recorded from patients with respiratory infections. The audio clips were characterized using Linear Predictive Cepstral Coefficient (LPCC)-based features, and the highest accuracy of 99.22% was obtained with a Multi-Layer Perceptron (MLP)-based classifier on the publicly available ICBHI17 respiratory sounds dataset [1] of more than 6,800 clips. The system also outperformed established works in the literature and other machine learning techniques. In future work we will use larger datasets and other acoustic techniques, along with deep learning-based approaches, and try to identify the nature and severity of infection from respiratory sounds.
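    The LPCC feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame length, model order, and sign conventions are common textbook choices (Levinson-Durbin for the LPC fit, the standard LPC-to-cepstrum recursion), and the input frame is synthetic.

```python
import numpy as np

def lpc_coeffs(x, order):
    """LPC coefficients via the Levinson-Durbin recursion on the
    autocorrelation of a (windowed) signal frame."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i + 1):
            a[j] = a_prev[j] + k * a_prev[i - j]
        err *= 1.0 - k * k
    return a

def lpcc(a, n_ceps):
    """Convert LPC coefficients to cepstral coefficients with the
    standard recursion (Rabiner & Juang sign convention)."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        an = a[n] if n <= p else 0.0
        acc = sum((k / n) * c[k] * a[n - k]
                  for k in range(1, n) if n - k <= p)
        c[n] = -an - acc
    return c[1:]

# Hypothetical breath-like frame: a low tone plus noise.
rng = np.random.default_rng(0)
frame = (np.sin(2 * np.pi * 300 * np.arange(1024) / 8000)
         + 0.1 * rng.standard_normal(1024))
feats = lpcc(lpc_coeffs(frame * np.hamming(1024), order=12), n_ceps=12)
```

    Per-frame vectors like `feats` would then be fed to a classifier such as scikit-learn's `MLPClassifier`.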

    ILSA 2017 in Tromsø : proceedings from the 42nd annual conference of the International Lung Sound Association

    Edited by Hasse Medbye, with contributions from several authors. The usefulness of lung auscultation is changing. It depends on how well practitioners understand the generation of sounds. It also depends on their knowledge of how lung sounds are associated with lung and heart diseases, as well as with other factors such as ageing and smoking habits. In clinical practice, practitioners need to give sufficient attention to lung auscultation, and they should use the same terminology, or at least understand each other's use of terms. Technological innovations are leading to an extended use of lung auscultation. Continuous monitoring of lung sounds is now possible, and computers can extract more information from the complex lung sounds than human hearing is capable of. Learning how to carry out lung auscultation and to interpret the sounds are essential skills in the education of doctors and other health professionals. Thus, new computer-based learning tools for the study of recorded sounds will be helpful. This conference will focus on all these determinants of efficient lung auscultation. In addition to free oral presentations, we have three symposia: on computerized analysis based on machine learning, on diagnostics, and on learning lung sounds, including the psychology of hearing. The symposia include extended presentations from invited speakers. The 42nd conference is the first in history arranged by a research unit for general practice. Primary care doctors are probably the group of health professionals that puts the greatest emphasis on lung auscultation in their clinical work. Many patients with chest symptoms consult without a known diagnosis, and several studies have shown that general practitioners pay attention to crackles and wheezes when making decisions, for instance when antibiotics are prescribed to coughing patients. In hospital, the diagnosis of lung diseases is more strongly influenced by technologies such as radiography and blood gas analysis.
Since lung auscultation holds a strong position in the work of primary care doctors, I think it is timely that the 42nd ILSA conference is hosted by the General Practice Research Unit in Tromsø. I hope all participants will find presentations of importance, and that the stay in Tromsø will be enjoyable.

    Characterization And Classification Of Asthmatic Wheeze Sounds According To Severity Level Using Spectral Integrated Features

    This study aimed to investigate and classify wheeze sounds of asthmatic patients according to their severity level (mild, moderate and severe) using spectral integrated (SI) features. Method: Segmented and validated wheeze sounds were obtained from auscultation recordings of the trachea and lower lung base of 55 asthmatic patients during tidal breathing manoeuvres. The segments were multi-labelled into 9 groups based on the auscultation location and/or breath phase. Bandwidths were selected based on the physiology, and a corresponding SI feature was computed for each segment. Univariate and multivariate statistical analyses were then performed to investigate the discriminatory behaviour of the features with respect to the severity levels in the various groups. The asthmatic severity levels in the groups were then classified using the ensemble (ENS), support vector machine (SVM) and k-nearest neighbour (KNN) methods. Results and conclusion: All statistical comparisons exhibited a significant difference (p < 0.05) among the severity levels, with few exceptions. In the classification experiments, the ensemble classifier exhibited better performance in terms of sensitivity, specificity and positive predictive value (PPV). The trachea inspiratory group showed the highest classification performance of all the groups. Overall, the best PPVs for the mild, moderate and severe samples were 95% (ENS), 88% (ENS) and 90% (SVM), respectively. With respect to location, the trachea-related wheeze sounds were the most sensitive and specific predictors of asthma severity levels. In addition, the classification performances of the inspiratory and expiratory related groups were comparable, suggesting that the samples from these breath phases are equally informative.
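    A spectral integrated feature of the kind described above is simply the power spectral density integrated over a physiologically chosen band. The sketch below is an assumption-laden illustration: the Welch parameters, band edges, and the synthetic 400 Hz "wheeze" tone are hypothetical, not the study's settings.

```python
import numpy as np
from scipy.signal import welch

def spectral_integrated(x, fs, band):
    """Integrate the Welch PSD estimate over a frequency band
    (rectangle rule: sum of bin powers times bin width)."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    lo, hi = band
    m = (f >= lo) & (f <= hi)
    return pxx[m].sum() * (f[1] - f[0])

# Synthetic wheeze-like tone at 400 Hz in light noise (hypothetical).
fs = 4000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 400 * t)
     + 0.05 * np.random.default_rng(1).standard_normal(fs))
si_wheeze_band = spectral_integrated(x, fs, (300, 500))   # contains the tone
si_high_band = spectral_integrated(x, fs, (1000, 1200))   # noise only
```

    One such scalar per band and per segment yields the feature vectors that the ENS/SVM/KNN classifiers would consume.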

    Classification of Wheeze Sounds Using Wavelets and Neural Networks

    Abstract. Wheezes are among the most important adventitious sounds in the pulmonary system. They are observed in asthma, chronic obstructive pulmonary disease (COPD) and bronchitis. The purpose of this research is to analyze wheeze sounds and classify them as monophonic or polyphonic. Data were acquired in normal hospital conditions with a typical stethoscope. Various statistical features are extracted from the coefficients of 7 different wavelets. Then, according to ROC curves, groups of more powerful features are selected. We use a multilayer perceptron (MLP) neural network as the classifier. The experimental results show that, using a set of 15 selected features and a 15-45-2 MLP network, wheeze sounds could be classified with 89.28% accuracy.
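    The "statistical features from wavelet coefficients" step can be sketched as below. To stay dependency-free this uses a hand-rolled Haar decomposition rather than the paper's 7 wavelet families, and the per-subband statistics (mean, standard deviation, energy) are illustrative stand-ins; the 4-level/15-feature sizing is coincidental with the paper's 15 selected features, not a reproduction of its ROC-based selection.

```python
import numpy as np

def haar_dwt(x, levels):
    """1-D Haar wavelet decomposition: detail coefficients for each
    level, plus the final approximation subband."""
    a = np.asarray(x, dtype=float)
    subbands = []
    for _ in range(levels):
        if len(a) % 2:            # drop a trailing sample if odd length
            a = a[:-1]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        subbands.append(d)
    subbands.append(a)
    return subbands

def subband_stats(subbands):
    """Simple statistics per subband: mean, std, energy."""
    feats = []
    for c in subbands:
        feats.extend([c.mean(), c.std(), np.sum(c ** 2)])
    return np.array(feats)

sig = np.sin(2 * np.pi * np.linspace(0.0, 20.0, 512))
features = subband_stats(haar_dwt(sig, levels=4))  # 5 subbands x 3 stats
```

    A feature-selection pass (e.g. ranking by per-feature ROC area) would then reduce such vectors before training the MLP.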

    On the development of intelligent medical systems for pre-operative anaesthesia assessment

    This thesis describes the research and development of a decision support tool for determining a medical patient's suitability for surgical anaesthesia. At present, there is a change in the way that patients are clinically assessed prior to surgery. The pre-operative assessment, usually conducted by a qualified anaesthetist, is being more frequently performed by nursing grade staff. The pre-operative assessment exists to minimise the risk of surgical complications for the patient. Nursing grade staff are often not as experienced as qualified anaesthetists, and thus are not as well suited to the role of performing the pre-operative assessment. This research project used data collected during pre-operative assessments to develop a decision support tool that would assist the nurse (or anaesthetist) in determining whether a patient is suitable for surgical anaesthesia. The three main objectives are: firstly, to research and develop an automated intelligent systems technique for classifying heart and lung sounds and hence identifying cardio-respiratory pathology; secondly, to research and develop an automated intelligent systems technique for assessing the patient's blood oxygen level and pulse waveform; and finally, to develop a decision support tool that would combine the assessments above in forming a decision as to whether the patient is suitable for surgical anaesthesia. Clinical data were collected from hospital outpatient departments and recorded alongside the diagnoses made by a qualified anaesthetist. Heart and lung sounds were collected using an electronic stethoscope. Using these data, two ensembles of artificial neural networks were trained to classify the different heart and lung sounds into different pathology groups. Classification accuracies of up to 99.77% for the heart sounds and 100% for the lung sounds have been obtained. Oxygen saturation and pulse waveform measurements were recorded using a pulse oximeter.
Using these data, an artificial neural network was trained to discriminate between normal and abnormal pulse waveforms. A discrimination accuracy of 98% has been obtained from the system. A fuzzy inference system was generated to classify the patient's blood oxygen level as being either an inhibiting or non-inhibiting factor in their suitability for surgical anaesthesia. When tested, the system successfully classified 100% of the test dataset. A decision support tool, applying the genetic programming evolutionary technique to a fuzzy classification system, was created. The decision support tool combined the results from the heart sound, lung sound and pulse oximetry classifiers in determining whether a patient was suitable for surgical anaesthesia. The evolved fuzzy system attained a classification accuracy of 91.79%. The principal conclusion from this thesis is that intelligent systems, such as artificial neural networks, genetic programming, and fuzzy inference systems, can be successfully applied to the creation of medical decision support tools.
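    The "ensembles of artificial neural networks" idea can be illustrated with a minimal majority-vote ensemble of MLPs. Everything here is a stand-in for the thesis's actual pipeline: the synthetic feature vectors, the hidden-layer sizes, and the three-member hard-voting scheme are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for heart/lung-sound feature vectors (synthetic data).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Majority vote over several independently initialised MLPs.
ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(16,),
                                          max_iter=500, random_state=i))
                for i in range(3)],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

    In the thesis, the outputs of such classifiers are further fused by an evolved fuzzy system rather than a simple vote.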

    Lung Sounds Classification Based on Time Domain Features

    The signal complexity of lung sounds is assumed to be able to differentiate and classify characteristic lung sounds as normal or abnormal in most cases. Previous research has employed a variety of approaches to obtain lung sound features. In contrast to earlier research, time-domain features were used here for feature extraction in lung sound classification. Electromyogram (EMG) signal analysis frequently employs these time-domain features: MAV, SSI, Var, RMS, LOG, WL, AAC, DASDV, and AFB. The benefit of this method is that it allows for direct feature extraction without requiring a transformation. Several classifiers were used to examine five different types of lung sound data. The highest accuracy was 93.9 percent, obtained using a decision tree with 9 types of time-domain features. The proposed method offers an alternative way to extract features from lung sounds.
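    Most of the named features have standard definitions in the EMG literature and can be computed directly from the samples, which is exactly the "no transformation" advantage the abstract points to. The sketch below implements the eight with widely agreed formulas; AFB is omitted because its definition varies across papers, and the small log-detector offset is an assumption to avoid log(0).

```python
import numpy as np

def time_domain_features(x):
    """Classical EMG-style time-domain features computed directly
    from the raw samples (no spectral transform needed)."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    return {
        "MAV": np.mean(np.abs(x)),            # mean absolute value
        "SSI": np.sum(x ** 2),                # simple square integral
        "VAR": np.var(x, ddof=1),             # variance
        "RMS": np.sqrt(np.mean(x ** 2)),      # root mean square
        "LOG": np.exp(np.mean(np.log(np.abs(x) + 1e-12))),  # log detector
        "WL": np.sum(np.abs(d)),              # waveform length
        "AAC": np.mean(np.abs(d)),            # average amplitude change
        "DASDV": np.sqrt(np.mean(d ** 2)),    # diff. abs. std. dev. value
    }

feats = time_domain_features([1.0, -1.0, 1.0, -1.0])
```

    For this toy alternating signal, MAV and RMS are both 1, and the difference-based features (WL, AAC, DASDV) pick up the sample-to-sample jumps of magnitude 2.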

    Novel Measurements of Cough and Breathing Abnormalities during Sleep in Cystic Fibrosis

    This Doctor of Philosophy thesis describes cystic fibrosis (CF), sleep parameters and novel measurement techniques to determine the effect of lung disease on sleep using non-invasive techniques. Cystic fibrosis is characterised by lungs that are normal at birth, but as lung disease progresses with age, adults with CF can develop sleep abnormalities including alteration in sleep architecture and sleep disordered breathing. This thesis seeks to investigate simple non-invasive measures which can detect abnormalities of sleep and breathing in adults with CF. The identification of respiratory sounds (normal lung sounds, coughs, crackles, wheezes and snores) will be examined using the non-invasive sleep and breathing measurement device, the Sonomat. The characterisation of these respiratory sounds will be based on spectrographic and audio analysis of the Sonomat recordings. Cross-sectional and longitudinal analysis of adults with CF using polysomnography and the Sonomat will further assess objective sleep and breathing abnormalities. In addition to the examination of objective measurements of sleep, subjective evaluation using CF-specific and sleep-specific questionnaires will assess subjective sleep quality and quality of life (QoL) in adults with CF.

    Towards using Cough for Respiratory Disease Diagnosis by leveraging Artificial Intelligence: A Survey

    Cough acoustics contain a multitude of vital information about pathomorphological alterations in the respiratory system. Reliable and accurate detection of cough events, by investigating the underlying latent cough features, and disease diagnosis can play an indispensable role in revitalizing healthcare practices. The recent application of Artificial Intelligence (AI) and advances in ubiquitous computing for respiratory disease prediction have created an auspicious trend and a myriad of future possibilities in the medical domain. In particular, there is a rapidly emerging trend of Machine Learning (ML) and Deep Learning (DL)-based diagnostic algorithms exploiting cough signatures. The enormous body of literature on cough-based AI algorithms demonstrates that these models can play a significant role in detecting the onset of a specific respiratory disease. However, it is pertinent to collect the information from all relevant studies in an exhaustive manner so that medical experts and AI scientists can analyze the decisive role of AI/ML. This survey offers a comprehensive overview of cough data-driven ML/DL detection and preliminary diagnosis frameworks, along with a detailed list of significant features. We investigate the mechanism that causes cough and the latent cough features of the respiratory modalities. We also analyze customized cough monitoring applications and their AI-powered recognition algorithms. Challenges and prospective future research directions to develop practical, robust, and ubiquitous solutions are also discussed in detail. Comment: 30 pages, 12 figures, 9 tables.

    Effects of Preprocessing Techniques on Cough Based Machine Learning Diagnosis

    The COVID-19 pandemic outbreak has taken the world by storm over the last 18 months, and the ramifications are by no means abating. The need of the hour with COVID-19 and other pulmonary diseases is quick online diagnosis on handheld devices. In light of these constraints, scientists are relying on audio-based automated techniques, since clinicians routinely use audio cues from the human body (e.g. vascular murmurs, respiration, pulse, bowel sounds, etc.) as markers for the diagnosis of diseases or the development of ailments. Until recently, such signals were commonly obtained during scheduled visits via manual auscultation. Research has also begun to use digital technologies to collect body sounds for cardiovascular or respiratory tests, e.g. from stethoscopes, which can then be used for automated artificial intelligence-based analysis. An early study showed promise in detecting COVID-19 from cough and speech diagnostic signals. This research work describes how preprocessing techniques can enhance the performance of a methodology established over a large-scale crowd-sourced dataset of respiratory audio, and in what ways preprocessing techniques ameliorate the performance of cough-based diagnosis. Our findings demonstrate that a machine learning classifier better distinguishes a healthy individual from an individual with cough due to bronchitis, pertussis or COVID-19 when preprocessing techniques are applied. Robust results have been procured by a user-based data split for the K-fold learning methodology. The results show a noticeable increase in efficacy from the application of preprocessing techniques in an algorithmic pipeline. These results are rudimentary and only the tip of the iceberg of the potential of cough- and audio-based machine learning. This research opens the door to enhancing the performance of lightweight machine learning algorithms to be comparable with their more complicated and resource-consuming counterparts.
Such advancements can be of paramount significance in the practical field of application deployment.
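    Typical audio preprocessing steps of the kind evaluated above include amplitude normalisation, band-pass filtering, and silence trimming. The sketch below chains three such steps; the band edges, filter order, frame length, and energy threshold are illustrative assumptions, not the parameters used in the cited work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(x, fs, band=(100.0, 2000.0), top_db=30.0):
    """Illustrative preprocessing: peak normalisation, zero-phase
    band-pass filtering, and crude energy-based silence trimming."""
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)                  # peak-normalise
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                  btype="band")
    x = filtfilt(b, a, x)                                # band-pass
    frame = 256
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    keep = energy_db > energy_db.max() - top_db          # drop silence
    return frames[keep].ravel()

# Hypothetical clip: one second of silence, then a one-second tone.
fs = 8000
t = np.arange(fs) / fs
raw = np.concatenate([np.zeros(fs), np.sin(2 * np.pi * 440 * t)])
clean = preprocess(raw, fs)
```

    The trimmed, filtered output is what a downstream cough classifier would consume; in practice the silence detector and band limits would be tuned per dataset.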

    Pneumonia in Children
