Short-segment heart sound classification using an ensemble of deep convolutional neural networks
This paper proposes a framework based on deep convolutional neural networks
(CNNs) for automatic heart sound classification using short-segments of
individual heart beats. We design a 1D-CNN that directly learns features from
raw heart-sound signals, and a 2D-CNN that takes inputs of two-dimensional
time-frequency feature maps based on Mel-frequency cepstral coefficients
(MFCC). We further develop a time-frequency CNN ensemble (TF-ECNN) combining
the 1D-CNN and 2D-CNN based on score-level fusion of the class probabilities.
On the large PhysioNet CinC challenge 2016 database, the proposed CNN models
outperformed traditional classifiers based on support vector machine and hidden
Markov models with various hand-crafted time- and frequency-domain features.
Best classification scores with 89.22% accuracy and 89.94% sensitivity were
achieved by the ECNN, and 91.55% specificity and 88.82% modified accuracy by
the 2D-CNN alone on the test set.Comment: 8 pages, 1 figure, conferenc
A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation
Traditionally, abnormal heart sound classification is framed as a three-stage
process. The first stage involves segmenting the phonocardiogram to detect
fundamental heart sounds; after which features are extracted and classification
is performed. Some researchers in the field argue the segmentation step is an
unwanted computational burden, whereas others embrace it as a prior step to
feature extraction. When comparing accuracies achieved by studies that have
segmented heart sounds before analysis with those that have overlooked that
step, the question of whether to segment heart sounds before feature extraction
is still open. In this study, we explicitly examine the importance of heart
sound segmentation as a prior step for heart sound classification, and then
seek to apply the obtained insights to propose a robust classifier for abnormal
heart sound detection. Furthermore, recognizing the pressing need for
explainable Artificial Intelligence (AI) models in the medical domain, we also
unveil hidden representations learned by the classifier using model
interpretation techniques. Experimental results demonstrate that the
segmentation plays an essential role in abnormal heart sound classification.
Our new classifier is also shown to be robust, stable and most importantly,
explainable, with an accuracy of almost 100% on the widely used PhysioNet
dataset.
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
Audio Classification from Time-Frequency Texture
Time-frequency representations of audio signals often resemble texture
images. This paper derives a simple audio classification algorithm based on
treating sound spectrograms as texture images. The algorithm is inspired by an
earlier visual classification scheme particularly efficient at classifying
textures. While solely based on time-frequency texture features, the algorithm
achieves surprisingly good performance in musical instrument classification
experiments.
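The idea of treating a spectrogram as a texture image can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's algorithm: the framed-FFT spectrogram, the per-band log-energy statistics, and all parameter choices (FFT size, hop, number of bands) are assumptions made for the example.

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple framed FFT with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    # Result shape: [n_freq_bins, n_time_frames]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

def texture_features(spec, n_bands=8):
    """Crude texture descriptor: mean and variance of log-energy
    in each of n_bands horizontal frequency bands of the image."""
    bands = np.array_split(np.log1p(spec), n_bands, axis=0)
    return np.concatenate([[b.mean(), b.var()] for b in bands])

# Illustrative input: a 1-second 440 Hz tone sampled at 8 kHz.
t = np.arange(8000) / 8000.0
feats = texture_features(spectrogram(np.sin(2 * np.pi * 440 * t)))
```

The resulting fixed-length feature vector could then be fed to any standard classifier; a real texture-based scheme would use richer descriptors than band statistics.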