
    Respiratory Sound Analysis for the Evidence of Lung Health

    Audio-based technologies have advanced significantly over the years in many fields, including the healthcare industry. Lung sound analysis is a potential source of noninvasive, quantitative, and objective information on the status of the pulmonary system. To obtain it, medical professionals listen with a stethoscope to sounds heard over the chest wall at different positions, a procedure known as auscultation that is important in diagnosing respiratory diseases. However, respiratory sounds are sometimes interpreted inaccurately, either because a clinician lacks sufficient expertise or because trainees such as interns and residents misidentify them. We have built a tool to distinguish healthy respiratory sounds from unhealthy ones recorded from patients with respiratory infections. The audio clips were characterized using Linear Predictive Cepstral Coefficient (LPCC)-based features, and the highest accuracy of 99.22% was obtained with a Multi-Layer Perceptron (MLP)-based classifier on the publicly available ICBHI17 respiratory sound dataset [1] of 6800+ clips. The system also outperformed established works in the literature and other machine learning techniques. In future work, we will use larger datasets and other acoustic techniques, along with deep learning-based approaches, and try to identify the nature and severity of infection from respiratory sounds.
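The LPCC features mentioned in this abstract are classically computed with the autocorrelation pipeline: a Levinson-Durbin recursion for the linear prediction coefficients, followed by the standard LPC-to-cepstrum recursion. A minimal sketch, where the model order and the toy input signal are assumptions (the abstract does not specify them):

```python
import numpy as np

def lpcc(signal, order=12):
    """LPC via Levinson-Durbin, then the standard LPC-to-cepstrum recursion."""
    n = len(signal)
    # autocorrelation lags 0..order
    r = np.correlate(signal, signal, mode="full")[n - 1 : n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])) / e
        a[1:i] = a[1:i] + k * a[i - 1 : 0 : -1]
        a[i] = k
        e *= 1.0 - k * k
    # cepstral recursion: c_m = -a_m - sum_{j<m} (j/m) c_j a_{m-j}
    c = np.zeros(order)
    for m in range(1, order + 1):
        c[m - 1] = -a[m] - sum((j / m) * c[j - 1] * a[m - j] for j in range(1, m))
    return c

# toy stand-in for a respiratory sound clip: smoothed noise
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(2000), np.ones(5) / 5, mode="same")
feats = lpcc(x)
```

In the paper's setup, feature vectors like this would then be fed to an MLP classifier (scikit-learn's `MLPClassifier` is one common choice for that step).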

    A statistical analysis of cervical auscultation signals from adults with unsafe airway protection

    Background: Aspiration, where food or liquid is allowed to enter the larynx during a swallow, is recognized as the most clinically salient feature of oropharyngeal dysphagia. This event can lead to short-term harm via airway obstruction or to longer-term effects such as pneumonia. In order to identify this event non-invasively using high resolution cervical auscultation, there is a need to characterize cervical auscultation signals from subjects with dysphagia who aspirate. Methods: In this study, we collected swallowing sound and vibration data from 76 adults (50 men, 26 women, mean age 62) who underwent a routine videofluoroscopy swallowing examination. The analysis was limited to swallows of liquid with either thin (<5 cps) or viscous (≈300 cps) consistency and was divided into swallows with deep laryngeal penetration or aspiration (unsafe airway protection) and those with shallow or no laryngeal penetration (safe airway protection), using a standardized scale. After calculating a selection of time, frequency, and time-frequency features for each swallow, the safe and unsafe categories were compared using Wilcoxon rank-sum statistical tests. Results: Our analysis found that few of our chosen features varied in magnitude between safe and unsafe swallows, with thin swallows demonstrating no statistical variation. We also supported our past findings with regard to the effects of sex and the presence or absence of stroke on cervical auscultation signals, but noticed certain discrepancies with regard to bolus viscosity. Conclusions: Overall, our results support the necessity of using multiple statistical features concurrently to identify laryngeal penetration of swallowed boluses in future work with high resolution cervical auscultation.
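The group comparison described in the Methods maps directly onto SciPy's rank-sum test; the per-swallow feature values below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# hypothetical per-swallow feature (e.g., a vibration-amplitude statistic)
safe = rng.normal(1.0, 0.3, 40)     # safe airway protection group
unsafe = rng.normal(1.4, 0.3, 36)   # unsafe airway protection group

# two-sided Wilcoxon rank-sum test between the two groups
stat, p = ranksums(safe, unsafe)
significant = p < 0.05
```

In the study, this test would be repeated for each time, frequency, and time-frequency feature, which is why correction for multiple comparisons matters when many features are screened.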

    IMPROVING THE QUALITY, ANALYSIS AND INTERPRETATION OF BODY SOUNDS ACQUIRED IN CHALLENGING CLINICAL SETTINGS

    Despite advances in medicine and technology, acute lower respiratory diseases are a leading cause of sickness and mortality worldwide, disproportionately affecting countries where access to appropriate medical technology and expertise is scarce. Chest auscultation provides a low-cost, non-invasive, widely available tool for the examination of pulmonary health. Despite universal adoption, its use is riddled with issues, including subjectivity in interpretation and vulnerability to ambient noise, that limit its diagnostic capability. Digital auscultation and computerized methods come as a natural aid toward overcoming such limitations. Focusing on these challenges, we address the demanding real-life scenario of pediatric lung auscultation in busy clinical settings. Two major objectives lead to our contributions: 1) can we improve the quality of the delicate auscultated sounds and reduce unwanted noise contamination; and 2) can we augment the screening capabilities of current stethoscopes using computerized lung sound analysis to capture the presence of abnormal breaths, and can we standardize findings. To address the first objective, we developed an adaptive noise suppression scheme that tackles contamination from a variety of sources, including subject-centric and electronic artifacts as well as environmental noise. The proposed method was validated using objective and subjective measures, including an expert reviewer panel and objective signal quality metrics. Results revealed the ability and superiority of the proposed method to i) suppress unwanted noise when compared to state-of-the-art technology, and ii) faithfully maintain the signature of the delicate body sounds. The second objective was addressed by exploring appropriate feature representations that capture distinct characteristics of body sounds.
    A biomimetic approach was employed, and the acoustic signal was projected onto high-dimensional spaces spanning time, frequency, temporal dynamics, and spectral modulations. Trained classifiers produced localized decisions on these breath content features, indicating lung diseases. Unlike existing literature, our proposed scheme is further able to combine and integrate the localized decisions into an individual, patient-level evaluation. A large corpus of annotated patient data was used to validate our approach, demonstrating the superiority of the proposed features and patient evaluation scheme. Overall, our findings indicate that improved, accessible auscultation care is possible, paving the way toward affordable health care solutions with worldwide impact.
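As an illustration of the kind of noise-suppression step the first objective describes, here is a minimal spectral-subtraction sketch; this is an assumption on our part (the dissertation's actual adaptive scheme is more sophisticated), and the synthetic signal and STFT settings are invented:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 4000
n = 8192
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 220 * t)
clean = np.where(t >= 0.3, tone, 0.0)   # leading 0.3 s of silence (noise-only)
rng = np.random.default_rng(2)
noise = 0.3 * rng.standard_normal(n)
noisy = clean + noise

# estimate the noise magnitude spectrum from the leading noise-only frames
f, frames, Z = stft(noisy, fs=fs, nperseg=256)
noise_mag = np.abs(Z[:, :4]).mean(axis=1, keepdims=True)

# subtract the estimate per bin, keeping a small spectral floor to limit artifacts
mag = np.maximum(np.abs(Z) - noise_mag, 0.05 * np.abs(Z))
_, denoised = istft(mag * np.exp(1j * np.angle(Z)), fs=fs, nperseg=256)

m = min(denoised.size, n)
mse_before = np.mean((noisy[:m] - clean[:m]) ** 2)
mse_after = np.mean((denoised[:m] - clean[:m]) ** 2)
```

A stationary-noise estimate like this is the simplest baseline; an adaptive scheme would update the estimate over time as the ambient noise in the clinic changes.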

    DIGITAL ANALYSIS OF CARDIAC ACOUSTIC SIGNALS IN CHILDREN

    Milad El-Segaier, MD, Division of Paediatric Cardiology, Department of Paediatrics, Lund University Hospital, Lund, Sweden. Summary: Despite tremendous developments in cardiac imaging, the stethoscope and cardiac auscultation remain the primary diagnostic tools in the evaluation of cardiac pathology. With the advent of miniaturized, powerful technology for data acquisition, display, and digital signal processing, the possibilities for detecting cardiac pathology by signal analysis have increased. The objective of this study was to develop a simple, cost-effective diagnostic tool for the analysis of cardiac acoustic signals. Heart sounds and murmurs were recorded in 360 children with a single-channel device and in 15 children with a multiple-channel device. Time intervals between acoustic signals were measured. Short-time Fourier transform (STFT) analysis was used to present the acoustic signals to a digital algorithm that detects the heart sounds, defines systole and diastole, and analyses the spectrum of a cardiac murmur. A statistical model for distinguishing physiological murmurs from pathological ones was developed using logistic regression analysis. The receiver operating characteristic (ROC) curve was used to evaluate the discriminating ability of the developed model, and its sensitivities and specificities were calculated at different cut-off points. Signal deconvolution using blind source separation (BSS) was performed to separate signals from different sources. The first and second heart sounds (S1 and S2) were detected with high accuracy (100% for S1 and 97% for S2), independently of heart rate and the presence of a murmur. Systole and diastole were defined, but only the systolic murmur was analysed in this work.
    The developed statistical model showed excellent prediction ability (area under the curve, AUC = 0.995) in distinguishing a physiological murmur from a pathological one, with high sensitivity and specificity (98%). In further analyses, deconvolution of the signals was successfully performed using blind source separation, yielding two spatially independent sources: heart sounds (S1 and S2) in one component and a murmur in another. The study supports the view that a cost-effective diagnostic device would be useful in primary health care. It would diminish both the need for referring children with a cardiac murmur to cardiac specialists and the load on the health care system. Likewise, it would help minimize the psychological stress experienced by the children and their parents at an early stage of medical care.
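The murmur-classification step (logistic regression evaluated via an ROC curve) can be sketched as follows; the two murmur descriptors and their distributions are hypothetical illustrations, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 80
# hypothetical murmur descriptors: [dominant frequency (Hz), relative systolic duration]
physiological = np.column_stack([rng.normal(150, 30, n), rng.normal(0.3, 0.05, n)])
pathological = np.column_stack([rng.normal(300, 60, n), rng.normal(0.6, 0.10, n)])

X = np.vstack([physiological, pathological])
y = np.repeat([0, 1], n)   # 0 = physiological, 1 = pathological

# fit the logistic model, then summarize discrimination with the AUC
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

Sweeping a threshold over `model.predict_proba` gives the different sensitivity/specificity cut-off points that the study reports along its ROC curve.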

    2D respiratory sound analysis to detect lung abnormalities

    In this paper, we analyze deep visual features from 2D data representations of respiratory sound to detect evidence of lung abnormalities. The primary motivation is that visual cues can be more informative in decision-making than the raw data (lung sound). Early detection and prompt treatment are essential for possible future respiratory disorders, and respiratory sound is proven to be one of the biomarkers. In contrast to state-of-the-art approaches, we aim at understanding/analyzing visual features using Convolutional Neural Network (CNN)-tailored deep learning models, where we consider all possible 2D representations such as the spectrogram, Mel-frequency Cepstral Coefficients (MFCC), spectral centroid, and spectral roll-off. In our experiments, using the publicly available respiratory sound database named ICBHI 2017 (5.5 hours of recordings containing 6898 respiratory cycles from 126 subjects), we obtained the highest performance, an area under the curve of 0.79, from the spectrogram, as opposed to 0.48 AUC from the raw data, using a pre-trained deep learning model (VGG16). We also applied machine learning algorithms to these features to improve performance. Our study showed that 2D data representations can help better understand/analyze lung abnormalities compared to 1D data, and our findings are contrasted with those of earlier studies. For generality, we also used MFCC features to determine whether image data or raw data produced superior results.
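Turning a 1D sound into the 2D spectrogram "image" that feeds the CNN stage can be sketched with SciPy; the synthetic breath-like signal and the STFT parameters here are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4000
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(4)
# synthetic breath-like sound: broadband noise with a slow breathing-rate envelope
x = rng.standard_normal(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.4 * t))

# short-time power spectrum; nperseg=256 gives 129 frequency bins
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
img = 10 * np.log10(Sxx + 1e-10)   # dB-scaled 2D array a CNN can consume
```

For a pre-trained model such as VGG16, `img` would then be resized and replicated to three channels to match the network's expected input shape.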

    Automatic classification of adventitious respiratory sounds: a (un)solved problem?

    (1) Background: Patients with respiratory conditions typically exhibit adventitious respiratory sounds (ARS), such as wheezes and crackles, and ARS events have variable duration. In this work we studied the influence of event duration on automatic ARS classification, namely, how the creation of the Other class (negative class) affected the classifiers’ performance. (2) Methods: We conducted a set of experiments in which we varied the durations of the Other events on three tasks: crackle vs. wheeze vs. other (3 Class); crackle vs. other (2 Class Crackles); and wheeze vs. other (2 Class Wheezes). Four classifiers (linear discriminant analysis, support vector machines, boosted trees, and convolutional neural networks) were evaluated on those tasks using an open access respiratory sound database. (3) Results: While the best classifier achieved an accuracy of 96.9% on the 3 Class task with fixed durations, the same classifier reached only 81.8% on the more realistic 3 Class task with variable durations. (4) Conclusion: These results demonstrate the importance of experimental design in assessing the performance of automatic ARS classification algorithms. Furthermore, they indicate, contrary to what is stated in the literature, that the automatic classification of ARS is not a solved problem, as the algorithms’ performance decreases substantially under complex evaluation scenarios.
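The 3 Class task with a variable-duration Other class can be illustrated with a toy experiment; the two features and class distributions below are invented for illustration, with LDA standing in for one of the four evaluated classifiers:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 90
# hypothetical per-event features: [duration (s), spectral flatness]
crackle = np.column_stack([rng.uniform(0.01, 0.05, n), rng.normal(0.7, 0.10, n)])
wheeze = np.column_stack([rng.uniform(0.10, 0.50, n), rng.normal(0.2, 0.10, n)])
# variable-duration Other events overlap both classes in feature space
other = np.column_stack([rng.uniform(0.01, 0.50, n), rng.normal(0.5, 0.15, n)])

X = np.vstack([crackle, wheeze, other])
y = np.repeat([0, 1, 2], n)   # 0 = crackle, 1 = wheeze, 2 = other

# stratified 5-fold cross-validated accuracy on the 3 Class task
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

Because the Other events here span the full duration range, they overlap the crackle and wheeze clusters, which mirrors the paper's point that a variable-duration negative class makes the task markedly harder than fixed-duration evaluation suggests.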

    The electronic stethoscope
