Detection of atrial fibrillation in ECG hand-held devices using a random forest classifier
Atrial fibrillation (AF) is characterized by chaotic electrical impulses in the atria, which lead to irregular heartbeats and can cause blood clots and stroke. Early detection of AF is therefore crucial for increasing the success rate of treatment. This study focuses on detecting the AF rhythm using hand-held ECG monitoring devices, alongside three other classes: normal (sinus) rhythm, other rhythms, and recordings too noisy to analyze. The pipeline of the proposed method consists of three major components: preprocessing and feature extraction, feature selection, and classification. In total, 491 hand-crafted features are extracted, from which 150 are selected in a feature-ranking procedure. The selected features come from the time, frequency, and time-frequency domains and from the phase-space reconstruction of the ECG signals. In the final stage, a random forest classifier assigns the selected features to one of the four aforementioned ECG classes. Using the scoring mechanism of the PhysioNet/Computing in Cardiology (CinC) Challenge 2017, an overall score (mean±std) of 81.9±2.6% is achieved on the training dataset in 10-fold cross-validation. The proposed algorithm tied for first place in the PhysioNet/CinC Challenge 2017 with an overall score of 82.6% (rounded to 83%) on the unseen test dataset.
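The three-stage pipeline described above (extract many features, rank and keep a subset, classify with a random forest) can be sketched as follows. This is a minimal illustration with synthetic data in place of the 491 hand-crafted ECG features; the paper does not specify its ranking criterion, so importance-based ranking with a forest is an assumption here, and the 0-3 class labels are an assumed stand-in for the four rhythm classes.

```python
# Sketch of a rank-then-classify pipeline (NOT the paper's exact method):
# synthetic features stand in for the 491 hand-crafted ECG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_selected = 400, 491, 150

X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 4, size=n_samples)      # 4 classes: normal/AF/other/noisy
# Make the first 20 features informative so the ranking has something to find.
X[:, :20] += y[:, None] * 0.8

# Stage 2: rank features (here via forest importances, an assumed criterion)
# and keep the top 150.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:n_selected]

# Stage 3: classify using only the selected features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, top], y)
train_acc = clf.score(X[:, top], y)
```

In practice the challenge score is computed per class and averaged, and selection would be evaluated inside the cross-validation folds rather than on the full training set.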
Learning Front-end Filter-bank Parameters using Convolutional Neural Networks for Abnormal Heart Sound Detection
Automatic heart sound abnormality detection can play a vital role in the
early diagnosis of heart diseases, particularly in low-resource settings. The
state-of-the-art algorithms for this task utilize a set of Finite Impulse
Response (FIR) band-pass filters as a front-end followed by a Convolutional
Neural Network (CNN) model. In this work, we propose a novel CNN architecture
that integrates the front-end bandpass filters within the network using
time-convolution (tConv) layers, which enables the FIR filter-bank parameters
to become learnable. Different initialization strategies for the learnable
filters, including random parameters and a set of predefined FIR filter-bank
coefficients, are examined. Using the proposed tConv layers, we add constraints
to the learnable FIR filters to ensure linear and zero phase responses.
Experimental evaluations are performed on a balanced 4-fold cross-validation
task prepared using the PhysioNet/CinC 2016 dataset. Results demonstrate that
the proposed models yield superior performance compared to the state-of-the-art
system, while the linear-phase FIR filter-bank method provides an absolute
improvement of 9.54% over the baseline in terms of overall accuracy.
Comment: 4 pages, 6 figures, IEEE International Engineering in Medicine and Biology Conference (EMBC)
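The linear-phase constraint mentioned above can be illustrated with plain NumPy: if the learnable FIR kernel is tied symmetrically (h[n] = h[N-1-n]), the filter has exactly linear phase no matter what values are learned. The names below are illustrative, not the paper's implementation, and the gradient-based learning itself is omitted.

```python
# A symmetric (tied) kernel is linear-phase: its DFT equals a real amplitude
# response times exp(-j*w*(N-1)/2). We verify this numerically for a random
# "learned" half-kernel.
import numpy as np

rng = np.random.default_rng(1)
half = rng.normal(size=8)                   # free (learnable) half of the kernel
h = np.concatenate([half, half[::-1]])      # symmetric length-16 FIR kernel

N = h.size
H = np.fft.rfft(h, 512)                     # frequency response on a dense grid
w = np.linspace(0, np.pi, H.size)
# Remove the linear-phase term; for a symmetric kernel the remainder is real.
residual = H * np.exp(1j * w * (N - 1) / 2)
max_imag = np.max(np.abs(residual.imag))
```

In a tConv layer this tying would be applied to the convolution weights at every training step, so the band shapes adapt while the phase response stays linear.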
A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation
Traditionally, abnormal heart sound classification is framed as a three-stage
process. The first stage involves segmenting the phonocardiogram to detect
fundamental heart sounds; after which features are extracted and classification
is performed. Some researchers in the field argue the segmentation step is an
unwanted computational burden, whereas others embrace it as a prior step to
feature extraction. When comparing accuracies achieved by studies that have
segmented heart sounds before analysis with those who have overlooked that
step, the question of whether to segment heart sounds before feature extraction
is still open. In this study, we explicitly examine the importance of heart
sound segmentation as a prior step for heart sound classification, and then
seek to apply the obtained insights to propose a robust classifier for abnormal
heart sound detection. Furthermore, recognizing the pressing need for
explainable Artificial Intelligence (AI) models in the medical domain, we also
unveil hidden representations learned by the classifier using model
interpretation techniques. Experimental results demonstrate that the
segmentation plays an essential role in abnormal heart sound classification.
Our new classifier is also shown to be robust, stable and, most importantly,
explainable, with an accuracy of almost 100% on the widely used PhysioNet
dataset.
Classification of segmented phonocardiograms by convolutional neural networks
Heart (cardiovascular) diseases have been among the leading causes of death in recent years. Phonocardiograms (PCG) and electrocardiograms (ECG) are commonly used for the detection of heart disease, and studies on cardiac signals focus especially on the classification of heart sounds. Naturally, researchers generally try to increase classification accuracy, and for this purpose many studies segment heart sounds into S1 and S2 components using methods such as Shannon energy, the discrete wavelet transform, and the Hilbert transform. In this study, the two classes of heart sounds in the PhysioNet A-training dataset, normal and abnormal, are classified with convolutional neural networks. The S1 and S2 parts of the heart sounds were first segmented by the resampled-energy method. Phonocardiogram images obtained from the S1 and S2 parts were then resized to small images and classified by convolutional neural networks, and the results were compared with those of previous studies. The CNN achieves a classification accuracy of 97.21%, sensitivity of 94.78%, and specificity of 99.65%; accordingly, CNN classification with segmented S1-S2 sounds outperforms the results of previous studies. The experiments show that segmentation and convolutional neural networks increase classification accuracy and contribute efficiently to classification studies.
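The Shannon-energy envelope named above as one of the standard S1/S2 localization tools can be sketched in a few lines. The synthetic "PCG" below is just two Gaussian-windowed tone bursts standing in for S1 and S2; the thresholds and window length are illustrative choices, not the paper's parameters.

```python
# Shannon-energy envelope for locating S1/S2 candidates in a toy PCG.
# Shannon energy (-x^2 * log x^2) emphasizes medium-intensity samples, which
# makes heart-sound lobes stand out against low-level background.
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1 / fs)

def burst(center):
    """Gaussian-windowed 60 Hz tone burst centered at `center` seconds."""
    return np.exp(-((t - center) ** 2) / (2 * 0.02 ** 2)) * np.sin(2 * np.pi * 60 * t)

x = burst(0.2) + 0.6 * burst(0.55)          # "S1" at 0.2 s, "S2" at 0.55 s

x = x / np.max(np.abs(x))                   # normalize to [-1, 1]
se = -x**2 * np.log(x**2 + 1e-12)           # Shannon energy per sample

# Smooth with a short moving average to obtain the envelope; its lobes mark
# candidate S1/S2 locations.
win = np.ones(40) / 40
env = np.convolve(se, win, mode="same")
```

A real segmenter would then pick peaks of `env`, reject spurious lobes by timing rules, and label the alternating lobes as S1 and S2.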
NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals
Cardiovascular diseases (CVDs) can be effectively treated when detected
early, reducing mortality rates significantly. Traditionally, phonocardiogram
(PCG) signals have been utilized for detecting cardiovascular disease due to
their cost-effectiveness and simplicity. Nevertheless, various environmental
and physiological noises frequently affect the PCG signals, compromising their
essential distinctive characteristics. The prevalence of this issue in
overcrowded and resource-constrained hospitals can compromise the accuracy of
medical diagnoses. Therefore, this study aims to discover the optimal
transformation method for detecting CVDs from noisy heart sound signals and to
propose a noise-robust network that improves CVD classification performance.
To identify the optimal transformation for noisy heart sound data,
mel-frequency cepstral coefficients (MFCCs), the short-time Fourier transform
(STFT), the constant-Q nonstationary Gabor transform (CQT), and the continuous
wavelet transform (CWT) have been used with VGG16. Furthermore, we
propose a novel convolutional recurrent neural network (CRNN) architecture
called noise robust cardio net (NRC-Net), which is a lightweight model to
classify mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve
prolapse, and normal heart sounds using PCG signals contaminated with
respiratory and random noises. An attention block is included to extract
important temporal and spatial features from the noise-corrupted heart
sounds. The results of this study indicate that CWT is the optimal
transformation method for noisy heart sound signals. When evaluated on the
GitHub heart sound dataset, CWT demonstrates an accuracy of 95.69% with VGG16,
which is 1.95% better than the second-best CQT transformation. Moreover, the
proposed NRC-Net with CWT obtains an accuracy of 97.4%, which is 1.71% higher
than VGG16.
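All four front-ends compared in this study turn a 1-D heart sound into a 2-D time-frequency image that a CNN can consume. A minimal NumPy sketch of one of them, the STFT, is shown below on a noisy synthetic signal; the signal, frame sizes, and helper name are illustrative assumptions, and the CWT front-end the paper found optimal would replace this transform in the same position.

```python
# Log-magnitude STFT front-end sketch: noisy 1-D signal -> 2-D (freq, time)
# image, the kind of representation fed to VGG16 / NRC-Net in the paper.
import numpy as np

rng = np.random.default_rng(2)
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
# Toy "heart sound": short 50 Hz bursts gated by a slow oscillation.
clean = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
noisy = clean + 0.3 * rng.normal(size=t.size)   # additive random noise

def stft_logmag(x, n_fft=256, hop=128):
    """Log-magnitude STFT: overlapping frames -> Hann window -> rFFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (x.size - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1))).T  # (freq, time)

S = stft_logmag(noisy)
```

Swapping in MFCC, CQT, or CWT changes only this transform; the downstream classifier sees a 2-D array either way, which is what makes the paper's controlled comparison of front-ends possible.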