A fused deep learning architecture for viewpoint classification of echocardiography
This study extends state-of-the-art deep learning convolutional neural networks (CNNs) to the classification of echocardiography video images, aiming to assist clinicians in the diagnosis of heart disease. Specifically, the architecture embraces hand-crafted features within a data-driven learning framework, incorporating both the spatial and the temporal information sustained by the video images of the moving heart, giving rise to two strands of two-dimensional CNN. In particular, the acceleration along the time direction at each point is calculated using a dense optical flow technique to represent temporal motion information. The two networks are then fused via a linear combination of the class-score vectors obtained from each network. This architecture achieves the best classification results for eight viewpoint categories of echo videos, with a 92.1% accuracy rate, whereas 89.5% is achieved using the single spatial CNN alone. When only the three primary locations are considered, an accuracy rate of 98% is realised. In addition, comparisons with a number of well-known hand-engineered approaches are performed, including 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT and 3D SIFT, which deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8% and 73.8% respectively.
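The fusion step described above combines the class-score vectors of the spatial and temporal networks by linear integration. A minimal sketch of such score-level fusion, assuming both strands emit per-class softmax scores; the function name, the equal weighting `alpha=0.5`, and the toy scores are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_scores(spatial_scores, temporal_scores, alpha=0.5):
    """Linearly combine class-score vectors from two CNN strands.

    alpha weights the spatial network; (1 - alpha) weights the temporal one.
    Both inputs are 1-D arrays of per-class scores (e.g. softmax outputs).
    """
    spatial_scores = np.asarray(spatial_scores, dtype=float)
    temporal_scores = np.asarray(temporal_scores, dtype=float)
    return alpha * spatial_scores + (1.0 - alpha) * temporal_scores

# Toy example with 3 viewpoint classes:
spatial = [0.6, 0.3, 0.1]    # spatial CNN favours class 0
temporal = [0.2, 0.7, 0.1]   # temporal CNN favours class 1
fused = fuse_scores(spatial, temporal, alpha=0.5)
predicted_class = int(np.argmax(fused))
```

With equal weighting the temporal evidence tips the decision to class 1; in practice the mixing weight can be tuned on a validation set.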
Fast and accurate classification of echocardiograms using deep learning
Echocardiography is essential to modern cardiology. However, human
interpretation limits high-throughput analysis, preventing echocardiography from
reaching its full clinical and research potential for precision medicine. Deep
learning is a cutting-edge machine-learning technique that has been useful in
analyzing medical images but has not yet been widely applied to
echocardiography, partly due to the complexity of echocardiograms' multi-view,
multi-modality format. The essential first step toward comprehensive computer
assisted echocardiographic interpretation is determining whether computers can
learn to recognize standard views. To this end, we anonymized 834,267
transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51
percent female, 26 percent obese) seen between 2000 and 2017 and labeled them
according to standard views. Images covered a range of real-world clinical
variation. We built a multilayer convolutional neural network and used
supervised learning to simultaneously classify 15 standard views. Eighty
percent of the data was randomly chosen for training and 20 percent was reserved
for validation and testing on unseen echocardiograms. Using multiple images
from each clip, the model classified among 12 video views with 97.8 percent
overall test accuracy without overfitting. Even on single low resolution
images, test accuracy among 15 views was 91.7 percent versus 70.2 to 83.5
percent for board-certified echocardiographers. Confusion matrices, occlusion
experiments, and saliency mapping showed that the model finds recognizable
similarities among related views and classifies using clinically relevant image
features. In conclusion, deep neural networks can classify essential
echocardiographic views simultaneously and with high accuracy. Our results
provide a foundation for more complex deep learning assisted echocardiographic
interpretation.
Comment: 31 pages, 8 figures
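The abstract above reports that the model classifies views "using multiple images from each clip." One common way to aggregate per-frame predictions into a clip-level decision is to average the per-frame class probabilities and take the argmax; this sketch is an illustrative assumption, since the paper's exact aggregation rule is not stated here:

```python
import numpy as np

def classify_clip(frame_probs):
    """Aggregate per-frame class probabilities into one clip-level prediction.

    frame_probs has shape (frames, classes); probabilities are averaged
    across frames, then the most probable class is returned.
    """
    frame_probs = np.asarray(frame_probs, dtype=float)
    clip_probs = frame_probs.mean(axis=0)
    return int(np.argmax(clip_probs)), clip_probs

# Toy clip: three frames, three candidate views
frames = [
    [0.7, 0.2, 0.1],
    [0.5, 0.4, 0.1],
    [0.6, 0.3, 0.1],
]
view, probs = classify_clip(frames)
```

Averaging over frames smooths out single-frame misclassifications, which is consistent with the higher clip-level accuracy reported relative to single low-resolution images.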
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient in solving complicated medical tasks or for creating insights using
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks
Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life-threatening and may result in partial or complete mental and physical disability. In this study, we propose a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) scans for GBM tumors. A DBN is composed of stacked restricted Boltzmann machines (RBM). DBNs often require a large number of hidden layers, each consisting of many neurons, to learn the best features from the raw image data; hence the computational and space complexity is high and training takes a long time. The proposed approach combines DTW with DBN to improve the efficiency of the existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperforms the other classifiers in terms of training time, space complexity and classification accuracy.
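The abstract describes a DBN as stacked RBMs; such networks are typically pretrained greedily, one RBM at a time, with each layer's hidden activations feeding the next. A minimal numpy sketch of a single binary RBM trained with one-step contrastive divergence (CD-1); the sizes, learning rate, and toy data are illustrative assumptions, and the paper's DTW component is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data
        h0 = self.hidden_probs(v0)
        # One Gibbs step: sample hidden units, reconstruct, recompute hidden
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 gradient estimate: data statistics minus reconstruction statistics
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(((v0 - v1) ** 2).mean())  # reconstruction error

# Toy dataset: 32 binary "images" of 8 pixels each
data = rng.integers(0, 2, size=(32, 8)).astype(float)
rbm = RBM(n_visible=8, n_hidden=4)
errors = [rbm.cd1_step(data) for _ in range(200)]
features = rbm.hidden_probs(data)  # layer-1 features for the next stacked RBM
```

A DBN would train a second RBM on `features`, and so on, which is exactly why the stack grows large and expensive for raw image data, as the abstract notes.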
Automated and Interpretable Patient ECG Profiles for Disease Detection, Tracking, and Discovery
The electrocardiogram or ECG has been in use for over 100 years and remains
the most widely performed diagnostic test to characterize cardiac structure and
electrical activity. We hypothesized that parallel advances in computing power,
innovations in machine learning algorithms, and availability of large-scale
digitized ECG data would enable extending the utility of the ECG beyond its
current limitations, while at the same time preserving interpretability, which
is fundamental to medical decision-making. We identified 36,186 ECGs from the
UCSF database that were 1) in normal sinus rhythm and 2) would enable training
of specific models for estimation of cardiac structure or function or detection
of disease. We derived a novel model for ECG segmentation using convolutional
neural networks (CNN) and Hidden Markov Models (HMM) and evaluated its output
by comparing electrical interval estimates to 141,864 measurements from the
clinical workflow. We built a 725-element patient-level ECG profile using
downsampled segmentation data and trained machine learning models to estimate
left ventricular mass, left atrial volume, mitral annulus e' and to detect and
track four diseases: pulmonary arterial hypertension (PAH), hypertrophic
cardiomyopathy (HCM), cardiac amyloid (CA), and mitral valve prolapse (MVP).
CNN-HMM derived ECG segmentation agreed with clinical estimates, with median
absolute deviations (MAD) as a fraction of observed value of 0.6% for heart
rate and 4% for QT interval. Patient-level ECG profiles enabled quantitative
estimates of left ventricular and mitral annulus e' velocity with good
discrimination in binary classification models of left ventricular hypertrophy
and diastolic function. Models for disease detection ranged from AUROC of 0.94
to 0.77 for MVP. Top-ranked variables for all models included known ECG
characteristics along with novel predictors of these traits/diseases.
Comment: 13 pages, 6 figures, 1 table + supplement
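The agreement metric above, median absolute deviation as a fraction of the observed value, is straightforward to compute. A short sketch, assuming the clinical workflow measurements are the observed reference; the function name and the toy heart-rate values are illustrative assumptions:

```python
import numpy as np

def mad_fraction(model_vals, reference_vals):
    """Median absolute deviation between model and reference measurements,
    expressed as a fraction of the observed (reference) value."""
    model_vals = np.asarray(model_vals, dtype=float)
    reference_vals = np.asarray(reference_vals, dtype=float)
    return float(np.median(np.abs(model_vals - reference_vals) / reference_vals))

# Toy heart-rate estimates (bpm) vs. clinical workflow measurements
model = [61.0, 79.5, 90.0, 120.5]
clinical = [60.0, 80.0, 90.0, 120.0]
frac = mad_fraction(model, clinical)
```

Expressing the deviation as a fraction of the observed value makes the 0.6% (heart rate) and 4% (QT interval) figures comparable across quantities with different units and scales.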
Neural architecture search of echocardiography view classifiers
Purpose: Echocardiography is the most commonly used modality for assessing the heart in
clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from
different orientations and positions, thereby creating different viewpoints for assessing the
cardiac function. The determination of the probe viewpoint forms an essential step in automatic
echocardiographic image analysis.
Approach: In this study, convolutional neural networks are used for the automated identification
of 14 different anatomical echocardiographic views (larger than any previous study) in a dataset
of 8732 videos acquired from 374 patients. A differentiable architecture search approach was
utilized to design small neural network architectures for rapid inference while maintaining high
accuracy. The impact of the image quality and resolution, the size of the training dataset, and the
number of echocardiographic view classes on the efficacy of the models was also investigated.
Results: In contrast to the deeper classification architectures, the proposed models had a
significantly lower number of trainable parameters (up to a 99.9% reduction), achieved comparable
classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1%
to 95.1%), and offered real-time performance with an inference time per image of 3.6 to 12.6 ms.
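Parameter reductions of the magnitude reported above come mostly from shrinking channel counts and the final dense layer. A minimal sketch of how such counts are tallied; the two example architectures are hypothetical stand-ins, not the paper's searched or baseline networks:

```python
def conv2d_params(in_ch, out_ch, kernel=3):
    """Trainable parameters of a 2-D convolution: weights plus one bias per filter."""
    return out_ch * (in_ch * kernel * kernel + 1)

def dense_params(in_features, out_features):
    """Trainable parameters of a fully connected layer: weights plus biases."""
    return out_features * (in_features + 1)

# Hypothetical "standard" backbone vs. a compact searched architecture,
# both ending in a 14-way view classifier:
standard = (conv2d_params(3, 64)
            + conv2d_params(64, 128)
            + dense_params(128 * 56 * 56, 14))
compact = (conv2d_params(3, 8)
           + conv2d_params(8, 16)
           + dense_params(16 * 7 * 7, 14))
reduction = 1 - compact / standard  # fraction of parameters removed
```

Even this toy comparison exceeds a 99% reduction, driven almost entirely by the smaller feature maps feeding the dense classifier, which illustrates why architecture search can cut parameters so sharply without necessarily hurting accuracy.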
Conclusion: Compared with the standard classification neural network architectures, the proposed models are faster and achieve comparable classification performance. They also require
less training data. Such models can be used for real-time detection of the standard views
- …