1,376 research outputs found

    DIGITAL ANALYSIS OF CARDIAC ACOUSTIC SIGNALS IN CHILDREN

    Milad El-Segaier, MD, Division of Paediatric Cardiology, Department of Paediatrics, Lund University Hospital, Lund, Sweden

    SUMMARY: Despite tremendous developments in cardiac imaging, the stethoscope and cardiac auscultation remain the primary diagnostic tools in the evaluation of cardiac pathology. With the advent of miniaturized, powerful technology for data acquisition, display and digital signal processing, the possibilities for detecting cardiac pathology by signal analysis have increased. The objective of this study was to develop a simple, cost-effective diagnostic tool for the analysis of cardiac acoustic signals.

    Heart sounds and murmurs were recorded in 360 children with a single-channel device and in 15 children with a multiple-channel device. Time intervals between acoustic signals were measured. Short-time Fourier transform (STFT) analysis was used to present the acoustic signals to a digital algorithm that detects the heart sounds, defines systole and diastole, and analyses the spectrum of a cardiac murmur. A statistical model for distinguishing physiological murmurs from pathological findings was developed using logistic regression analysis. The receiver operating characteristic (ROC) curve was used to evaluate the discriminating ability of the developed model, and its sensitivities and specificities were calculated at different cut-off points. Signal deconvolution using blind source separation (BSS) analysis was performed to separate signals from different sources.

    The first and second heart sounds (S1 and S2) were detected with high accuracy (100% for S1 and 97% for S2), independently of heart rate and the presence of a murmur. Systole and diastole were defined, but only the systolic murmur was analysed in this work. The developed statistical model showed excellent prediction ability (area under the curve, AUC = 0.995) in distinguishing a physiological murmur from a pathological one, with high sensitivity and specificity (98%). In further analyses, deconvolution of the signals was successfully performed using blind source separation, yielding two spatially independent sources: the heart sounds (S1 and S2) in one component and the murmur in another.

    The study supports the view that a cost-effective diagnostic device would be useful in primary health care. It would diminish the need to refer children with a cardiac murmur to cardiac specialists and reduce the load on the health care system. Likewise, it would help to minimize the psychological stress experienced by the children and their parents at an early stage of medical care.
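The evaluation strategy described above can be sketched with off-the-shelf tools: a logistic regression model scored by the area under the ROC curve, with sensitivity and specificity available at every candidate cut-off. The two murmur features and all data below are synthetic stand-ins invented for illustration, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 200

# Two invented murmur features (e.g. peak frequency in Hz and relative
# energy); real features would come from the STFT spectrum of the murmur.
physiological = rng.normal([100.0, 0.2], [20.0, 0.05], size=(n, 2))
pathological = rng.normal([250.0, 0.5], [40.0, 0.10], size=(n, 2))
X = np.vstack([physiological, pathological])
y = np.array([0] * n + [1] * n)  # 0 = physiological, 1 = pathological

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# AUC summarises discrimination; each ROC threshold is one cut-off point
# with its own sensitivity (TPR) and specificity (1 - FPR).
auc = roc_auc_score(y, scores)
fpr, tpr, thresholds = roc_curve(y, scores)
sensitivity, specificity = tpr, 1 - fpr
print(f"AUC = {auc:.3f}")
```

Because the two synthetic classes are well separated, the AUC here comes out close to 1, mirroring the kind of result the study reports; in practice the cut-off would be chosen from the ROC curve to balance missed pathology against unnecessary referrals.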

    Machine Learning-Based Classification of Pulmonary Diseases through Real-Time Lung Sounds

    The study presents a computer-based automated system that employs machine learning to classify pulmonary diseases using lung sound data collected from hospitals. Denoising techniques, such as the discrete wavelet transform and variational mode decomposition, are applied to enhance classifier performance. For classification, the system combines cepstral features such as Mel-frequency cepstral coefficients and gammatone frequency cepstral coefficients. Four machine learning classifiers, namely the decision tree, k-nearest neighbor, linear discriminant analysis, and random forest, are compared using accuracy, recall, specificity, and F1 score as evaluation metrics. The study includes patients affected by chronic obstructive pulmonary disease, asthma, and bronchiectasis, as well as healthy individuals. The results demonstrate that the random forest classifier outperforms the others, achieving an accuracy of 99.72% along with 100% recall, specificity, and F1 score. The study suggests that the computer-based system can serve as a decision-making tool for classifying pulmonary diseases, especially in resource-limited settings.
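A rough sketch of the cepstral-feature pipeline: the code below computes a plain real cepstrum (a simplified stand-in for MFCC/GTCC features) from two synthetic "lung sound" classes and cross-validates a random forest on it. The signals, kernel lengths, and resulting accuracy are illustrative assumptions, not the study's hospital data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def real_cepstrum(x, n_coeffs=20):
    # Inverse FFT of the log magnitude spectrum; the low-order
    # coefficients summarise the spectral envelope.
    spectrum = np.abs(np.fft.rfft(x)) + 1e-12
    return np.fft.irfft(np.log(spectrum))[:n_coeffs]

def make_signal(kernel_len):
    # White noise shaped by a moving average stands in for a lung sound;
    # the kernel length controls how low-pass (muffled) the sound is.
    x = rng.standard_normal(2000)
    smoothed = np.convolve(x, np.ones(kernel_len) / kernel_len, mode="same")
    return real_cepstrum(smoothed)

X = np.array([make_signal(k) for k in [12] * 40 + [3] * 40])
y = np.array([0] * 40 + [1] * 40)  # 0 = "normal", 1 = "adventitious" class

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

The real system would precede this with wavelet or VMD denoising and use proper Mel or gammatone filterbanks, but the shape of the pipeline, cepstral features into an ensemble classifier, is the same.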

    ELM and K-nn machine learning in classification of breath sounds signals

    The acquisition of breath sound (BS) signals from the human respiratory system with an electronic stethoscope provides prominent information that helps doctors diagnose and classify pulmonary diseases. Unfortunately, BS signals, like other biological signals, are non-stationary owing to variations in lung volume, which makes it difficult to analyse them and discriminate between several diseases. In this study, we focused on comparing the ability of the extreme learning machine (ELM) and k-nearest neighbour (K-nn) machine learning algorithms to classify adventitious and normal breath sounds. To do so, empirical mode decomposition (EMD), a method rarely applied to breath sounds, was used to analyse the BS signals. After the EMD decomposition of the signals into intrinsic mode functions (IMFs), the Hjorth descriptor Activity and permutation entropy (PE) features were extracted from each IMF and combined for the classification stage. The study found that the combination of features (Activity and PE) yielded accuracies of 90.71% and 95% using ELM and K-nn, respectively, in binary classification (normal and abnormal breath sounds), and 83.57% and 86.42% in multiclass classification (five classes).
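The two features named above are simple to compute. As a sketch, the code below implements Hjorth Activity (the signal variance) and normalised permutation entropy in plain NumPy and applies them directly to two toy signals; in the study they would be computed on each EMD intrinsic mode function, a step omitted here for brevity.

```python
import itertools
import numpy as np

def hjorth_activity(x):
    # Hjorth "Activity" is simply the variance of the signal.
    return float(np.var(x))

def permutation_entropy(x, order=3, delay=1):
    # Count ordinal patterns of length `order`, then take the Shannon
    # entropy of their distribution, normalised to [0, 1].
    patterns = list(itertools.permutations(range(order)))
    counts = dict.fromkeys(patterns, 0)
    for i in range(len(x) - delay * (order - 1)):
        window = x[i:i + delay * order:delay]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(len(patterns)))

rng = np.random.default_rng(2)
noise = rng.standard_normal(2000)                        # irregular signal
tone = np.sin(2 * np.pi * 5 * np.arange(2000) / 1000)    # regular oscillation

print(permutation_entropy(noise))  # near 1: maximally irregular
print(permutation_entropy(tone))   # much lower: highly predictable
```

White noise visits all ordinal patterns roughly equally, so its PE approaches 1, while the smooth tone is dominated by monotone patterns and scores much lower; it is this contrast that makes the feature pair useful for separating adventitious from normal breath sounds.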

    An approach for automatic identification of fundamental and additional sounds from cardiac sounds recordings

    This paper presents an approach for the automatic segmentation of cardiac events from non-invasive sound recordings, without the need for an auxiliary reference signal. In addition, methods are proposed to subsequently differentiate cardiac events that correspond to normal cardiac cycles from those that are due to abnormal activity of the heart. The detection of abnormal sounds is based on a model built with parameters obtained by feature extraction from those segments previously identified as normal fundamental heart sounds. The proposed algorithm achieved sensitivities of 91.79% and 89.23% for the identification of the normal fundamental S1 and S2 sounds, respectively, and a true positive (TP) rate of 81.48% for abnormal additional sounds. These results were obtained using the PASCAL Classifying Heart Sounds Challenge (CHSC) database.
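The paper's exact segmentation method is not reproduced here, but a common reference-free baseline for this task picks peaks in a smoothed energy envelope and then uses the alternating short (systolic) and long (diastolic) gaps to tell S1 from S2. The sketch below runs that baseline on a synthetic recording of Gaussian-windowed bursts; all timings and amplitudes are invented.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(3)
signal = 0.02 * rng.standard_normal(t.size)  # recording noise floor

# Synthetic S1 every second (at 0.5, 1.5, ...) with S2 0.3 s later,
# i.e. four cycles of a roughly 60 bpm heart.
for onset in np.concatenate([np.arange(0.5, 4), np.arange(0.8, 4)]):
    signal += np.exp(-((t - onset) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 60 * t)

# Smoothed energy envelope, then peaks at least 200 ms apart and above
# 10% of the envelope maximum.
envelope = np.convolve(signal ** 2, np.ones(50) / 50, mode="same")
peaks, _ = find_peaks(envelope, height=0.1 * envelope.max(), distance=int(0.2 * fs))

# Alternating ~0.3 s and ~0.7 s gaps identify systole vs diastole,
# and hence which events are S1 and which are S2.
gaps = np.diff(peaks) / fs
print(len(peaks), np.round(gaps, 2))
```

On this toy input all eight events (four cycles of S1 and S2) are recovered; real recordings need the murmur-robust detection and the abnormal-sound model the paper describes.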

    Convolutional neural network for breathing phase detection in lung sounds

    We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings, and we compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. Our algorithm uses a convolutional neural network with spectrograms as the features, removing the need to specify features explicitly. We trained and evaluated the algorithm using three subsets that are larger than those previously seen in the literature. We evaluated the performance of the method in two ways. First, a discrete count of agreed breathing phases (using 50% overlap between a pair of boxes) shows a mean agreement with the lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time in agreement (in seconds) gives higher pseudo-kappa values for inspiration (0.73-0.88) than for expiration (0.63-0.84), with an average sensitivity of 97% and an average specificity of 84%. With both evaluation methods, the agreement between the annotators and the algorithm shows human-level performance for the algorithm. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
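The first evaluation method, counting a detected phase as agreeing with an annotated one when the two boxes overlap by at least 50%, can be sketched in a few lines. The overlap definition used here (intersection relative to the shorter interval) and the example timings are assumptions for illustration, not the paper's exact criterion or data.

```python
def overlap_fraction(a, b):
    """Overlap length of intervals a, b relative to the shorter one."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    shorter = min(a[1] - a[0], b[1] - b[0])
    return inter / shorter if shorter > 0 else 0.0

def agreement(annotated, detected, threshold=0.5):
    """Fraction of annotated phases matched by at least one detection."""
    matched = sum(
        any(overlap_fraction(a, d) >= threshold for d in detected)
        for a in annotated
    )
    return matched / len(annotated)

# Invented (start, end) times in seconds for three inspirations.
annotator = [(0.0, 1.2), (2.5, 3.6), (5.0, 6.1)]
algorithm = [(0.1, 1.3), (2.4, 3.5), (7.0, 8.0)]

print(agreement(annotator, algorithm))  # 2 of 3 annotated phases matched
```

The second evaluation method (fraction of time in agreement) would instead sum the overlapping seconds over the whole recording before computing sensitivity and specificity.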

    A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

    Heart sound auscultation has been demonstrated to be beneficial in clinical usage for the early screening of cardiovascular diseases. Because auscultation requires well-trained professionals, automatic auscultation benefiting from signal processing and machine learning can aid auxiliary diagnosis and reduce the burden of training professional clinicians. Nevertheless, classic machine learning is limited in the performance improvements it can achieve in the era of big data. Deep learning has achieved better performance than classic machine learning in many research fields, as it employs more complex model architectures with a stronger capability for extracting effective representations. Deep learning has been successfully applied to heart sound analysis in the past years. As most review works on heart sound analysis appeared before 2017, the present survey is the first comprehensive overview summarising papers on heart sound analysis with deep learning over the past six years (2017-2022). We introduce both classic machine learning and deep learning for comparison, and further offer insights into the advances and future research directions of deep learning for heart sound analysis.

    Doctor of Philosophy

    Patients sometimes suffer apnea during sedation procedures or after general anesthesia. Apnea presents in two forms: respiratory depression (RD) and respiratory obstruction (RO). During RD the patient's airway is open but they lose the drive to breathe; during RO the patient's airway is occluded while they try to breathe. Patients' respiration is rarely monitored directly, and only in a few cases is it monitored with a capnometer. This dissertation explores the feasibility of monitoring respiration indirectly using an acoustic sensor. In addition to detecting apnea in general, this technique offers the possibility of differentiating between RD and RO. Data were recorded on 24 subjects as they underwent sedation, during which the subjects experienced RD or RO.

    The first part of this dissertation involved detecting periods of apnea from the recorded acoustic data. A method using a parameter estimation algorithm to determine the variance of the noise of the audio signal was developed, and the envelope of the audio data was used to determine when the subject had stopped breathing. Periods of apnea detected by the acoustic method were compared with the periods of apnea detected by the direct flow measurement. This succeeded, with 91.8% sensitivity and 92.8% specificity in the training set and 100% sensitivity and 98% specificity in the testing set.

    The second part of this dissertation used the periods during which apnea was detected to determine whether the subject was experiencing RD or RO. The classifications determined from the acoustic signal were compared with the classifications based on the flow measurement in conjunction with the chest and abdomen movements. This did not succeed, with an 86.9% sensitivity and 52.6% specificity in the training set and 100% sensitivity and 0% specificity in the testing set.

    The third part of this project developed a method to reduce the background sounds commonly recorded on the microphone. Additive noise was created to simulate noise generated in typical settings, and the noise was removed via an adaptive filter. This succeeded in improving or maintaining apnea detection given the different types of sounds added to the breathing data.
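The third part's adaptive-filter idea can be sketched with a standard LMS noise canceller: given a reference recording of the background noise alone, the filter learns the path from reference to primary microphone and subtracts its noise estimate. The signals, filter length, and step size below are illustrative assumptions, not the dissertation's actual configuration.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.005):
    """Estimate the noise in `primary` from `reference` with an LMS
    adaptive FIR filter and subtract it; the error signal is the
    cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros(primary.size)
    for i in range(n_taps - 1, primary.size):
        x = reference[i - n_taps + 1:i + 1][::-1]  # newest sample first
        e = primary[i] - w @ x   # error = primary minus noise estimate
        w += 2 * mu * e * x      # LMS weight update
        out[i] = e
    return out

rng = np.random.default_rng(4)
n = 5000
breath = np.sin(2 * np.pi * 2 * np.arange(n) / 1000)  # slow "breath" tone
noise = rng.standard_normal(n)                        # reference microphone
noisy = breath + 0.8 * np.convolve(noise, [0.6, 0.4], mode="same")

cleaned = lms_cancel(noisy, noise)

# Compare residual error before and after cancellation, past convergence.
err_before = np.mean((noisy[2000:] - breath[2000:]) ** 2)
err_after = np.mean((cleaned[2000:] - breath[2000:]) ** 2)
print(round(err_before, 3), round(err_after, 3))
```

Because the breathing signal is uncorrelated with the noise reference, the filter converges on the noise path only, so the residual error after cancellation falls well below the input noise power, which is what lets apnea detection tolerate the added background sounds.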