Fast and accurate classification of echocardiograms using deep learning
Echocardiography is essential to modern cardiology. However, reliance on human interpretation limits high-throughput analysis and keeps echocardiography from reaching its full clinical and research potential for precision medicine. Deep
learning is a cutting-edge machine-learning technique that has been useful in
analyzing medical images but has not yet been widely applied to
echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can
learn to recognize standard views. To this end, we anonymized 834,267
transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51
percent female, 26 percent obese) seen between 2000 and 2017 and labeled them
according to standard views. Images covered a range of real world clinical
variation. We built a multilayer convolutional neural network and used
supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent was reserved for validation and testing on never-before-seen echocardiograms. Using multiple images
from each clip, the model classified among 12 video views with 97.8 percent
overall test accuracy without overfitting. Even on single low resolution
images, test accuracy among 15 views was 91.7 percent versus 70.2 to 83.5
percent for board-certified echocardiographers. Confusion matrices, occlusion
experiments, and saliency mapping showed that the model finds recognizable
similarities among related views and classifies using clinically relevant image
features. In conclusion, deep neural networks can classify essential
echocardiographic views simultaneously and with high accuracy. Our results
provide a foundation for more complex deep learning assisted echocardiographic
interpretation. Comment: 31 pages, 8 figures
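The abstract notes that using multiple images from each clip raised video-level accuracy above single-image accuracy. The paper does not state its exact aggregation rule; a common and minimal choice is a majority vote over per-frame predictions, sketched below (function names and the voting scheme are illustrative assumptions).

```python
from collections import Counter

def clip_prediction(frame_predictions):
    """Aggregate per-frame view predictions into one per-clip label
    by majority vote (hypothetical rule; the paper says only that
    multiple images per clip were used)."""
    return Counter(frame_predictions).most_common(1)[0][0]

def overall_accuracy(clips, labels):
    """Fraction of clips whose aggregated prediction matches the
    ground-truth view label."""
    correct = sum(clip_prediction(frames) == label
                  for frames, label in zip(clips, labels))
    return correct / len(labels)
```

A single mislabeled frame is outvoted by the other frames of the clip, which is one plausible reason clip-level accuracy (97.8 percent) exceeds single-image accuracy (91.7 percent).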
Automated interpretation of systolic and diastolic function on the echocardiogram: a multicohort study
Background: Echocardiography is the diagnostic modality for assessing cardiac systolic and diastolic function to diagnose and manage heart failure. However, manual interpretation of echocardiograms can be time consuming and subject to human error. Therefore, we developed a fully automated deep learning workflow to classify, segment, and annotate two-dimensional (2D) videos and Doppler modalities in echocardiograms.
Methods: We developed the workflow using a training dataset of 1145 echocardiograms and an internal test set of 406 echocardiograms from the prospective heart failure research platform (Asian Network for Translational Research and Cardiovascular Trials; ATTRaCT) in Asia, with previous manual tracings by expert sonographers. We validated the workflow against manual measurements in a curated dataset from Canada (Alberta Heart Failure Etiology and Analysis Research Team; Alberta HEART; n=1029 echocardiograms), a real-world dataset from Taiwan (n=31 241), the US-based EchoNet-Dynamic dataset (n=10 030), and in an independent prospective assessment of the Asian (ATTRaCT) and Canadian (Alberta HEART) datasets (n=142) with repeated independent measurements by two expert sonographers.
Findings: In the ATTRaCT test set, the automated workflow classified 2D videos and Doppler modalities with accuracies (number of correct predictions divided by the total number of predictions) ranging from 0·91 to 0·99. Segmentations of the left ventricle and left atrium were accurate, with a mean Dice similarity coefficient greater than 93% for all. In the external datasets (n=1029 to 10 030 echocardiograms used as input), automated measurements showed good agreement with locally measured values, with a mean absolute error range of 9–25 mL for left ventricular volumes, 6–10% for left ventricular ejection fraction (LVEF), and 1·8–2·2 for the ratio of the mitral inflow E wave to the tissue Doppler e' wave (E/e' ratio); and reliably classified systolic dysfunction (LVEF <40%; area under the receiver operating characteristic curve [AUC] range 0·90–0·92) and diastolic dysfunction (E/e' ratio ≥13; AUC 0·91), with narrow 95% CIs for AUC values. Independent prospective evaluation confirmed less variance of automated compared with human expert measurements, with all individual equivalence coefficients being less than 0 for all measurements.
Interpretation: Deep learning algorithms can automatically annotate 2D videos and Doppler modalities with similar accuracy to manual measurements by expert sonographers. Use of an automated workflow might accelerate access, improve quality, and reduce costs in diagnosing and managing heart failure globally.
Funding: A*STAR Biomedical Research Council and A*STAR Exploit Technologies
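The validation above rests on two standard metrics: mean absolute error between automated and local measurements, and the AUC for dysfunction classification. Both are simple to state precisely; the sketch below uses the textbook definitions (the Mann-Whitney formulation for AUC), not the study's own code.

```python
def mean_absolute_error(automated, manual):
    """Mean absolute difference between paired automated and manual
    measurements, e.g. LV volumes (mL) or LVEF (%)."""
    assert len(automated) == len(manual)
    return sum(abs(a - m) for a, m in zip(automated, manual)) / len(automated)

def auc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, counting ties as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, an automated LVEF classifier for systolic dysfunction (LVEF <40%) would pass its predicted probabilities for dysfunctional and normal studies as `scores_pos` and `scores_neg` respectively.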
Real-time Automatic M-mode Echocardiography Measurement with Panel Attention from Local-to-Global Pixels
Motion mode (M-mode) recording is an essential part of echocardiography for measuring cardiac dimensions and function. However, automating the current diagnostic workflow faces three fundamental obstacles. First, no open dataset is available for building automation that ensures consistent results and bridges M-mode echocardiography with real-time instance segmentation (RIS). Second, the examination involves time-consuming manual labelling of M-mode echocardiograms. Third, because objects in echocardiograms occupy a significant portion of the pixels, the limited receptive field of existing backbones (e.g., ResNet) composed of multiple convolution layers is insufficient to cover the period of a valve movement. Existing non-local attention (NL) mechanisms either cannot run in real time because of high computational overhead or lose information in simplified versions of the non-local block. Therefore, we propose RAMEM, a real-time automatic M-mode echocardiography measurement scheme, which contributes three aspects to address these problems: 1) MEIS, a dataset of M-mode echocardiograms for instance segmentation, to enable consistent results and support the development of automatic schemes; 2) panel attention, an efficient local-to-global attention based on pixel-unshuffling, embedded with updated UPANets V2 in a RIS scheme for detecting large objects with a global receptive field; and 3) AMEM, an efficient algorithm for automatic M-mode echocardiography measurement that enables fast and accurate automatic labelling during diagnosis. Experimental results show that RAMEM surpasses existing RIS backbones (with non-local attention) on PASCAL 2012 SBD and human performance on real-time MEIS tests. The code and the MEIS dataset are available at
https://github.com/hanktseng131415go/RAME
Feature Extraction Based on ORB-AKAZE for Echocardiogram View Classification
In computer vision, the extraction of robust features from images to construct models that automate image recognition and classification tasks is a prominent field of research. Handcrafted feature extraction and representation techniques become critical when dealing with limited hardware resources, low-quality images, and larger datasets. We apply two state-of-the-art handcrafted feature extraction techniques, Oriented FAST and Rotated BRIEF (ORB) and Accelerated KAZE (AKAZE), in combination with Bag of Visual Words (BOVW), to classify standard echocardiogram views using machine learning (ML) algorithms. These approaches, ORB and AKAZE, which are rotation-, scale-, illumination-, and noise-invariant methods, outperform traditional methods. The despeckling algorithm Speckle Reducing Anisotropic Diffusion (SRAD), which is based on a partial differential equation (PDE), was applied to the echocardiogram images before feature extraction. Support vector machine (SVM), decision tree, and random forest algorithms correctly classified the feature vectors obtained from ORB with accuracy rates of 96.5%, 76%, and 97.7%, respectively. Additionally, with AKAZE features, the SVM, decision tree, and random forest algorithms outperformed state-of-the-art techniques with accuracy rates of 97.7%, 90%, and 99%, respectively.
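The core of a BOVW pipeline is quantizing each local descriptor (from ORB or AKAZE) to its nearest "visual word" in a learned codebook and representing the image as a normalized word histogram, which then feeds the SVM or tree classifier. A minimal, dependency-free sketch of that quantization step (toy descriptors and codebook; a real pipeline would use OpenCV descriptors and k-means centroids):

```python
def quantize(descriptor, codebook):
    """Index of the nearest visual word by squared Euclidean distance."""
    dists = [sum((d - c) ** 2 for d, c in zip(descriptor, word))
             for word in codebook]
    return dists.index(min(dists))

def bovw_histogram(descriptors, codebook):
    """Normalized histogram of visual-word counts for one image; this
    fixed-length vector is what the ML classifier consumes."""
    hist = [0.0] * len(codebook)
    for desc in descriptors:
        hist[quantize(desc, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Normalizing by the total count makes histograms comparable across images that yield different numbers of keypoints.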
Deep Learning for Improved Precision and Reproducibility of Left Ventricular Strain in Echocardiography: A Test-Retest Study
Aims: Assessment of left ventricular (LV) function by echocardiography is hampered by modest test-retest reproducibility. A novel artificial intelligence (AI) method based on deep learning provides fully automated measurements of LV global longitudinal strain (GLS) and may improve the clinical utility of echocardiography by reducing user-related variability. The aim of this study was to assess within-patient test-retest reproducibility of LV GLS measured by the novel AI method in repeated echocardiograms recorded by different echocardiographers and to compare the results to manual measurements.
Methods: Two test-retest data sets (n = 40 and n = 32) were obtained at separate centers. Repeated recordings were acquired in immediate succession by 2 different echocardiographers at each center. For each data set, 4 readers measured GLS in both recordings using a semiautomatic method to construct test-retest interreader and intrareader scenarios. Agreement, mean absolute difference, and minimal detectable change (MDC) were compared to analyses by AI. In a subset of 10 patients, beat-to-beat variability in 3 cardiac cycles was assessed by 2 readers and AI.
Results: Test-retest variability was lower with AI compared with interreader scenarios (data set I: MDC = 3.7 vs 5.5, mean absolute difference = 1.4 vs 2.1, respectively; data set II: MDC = 3.9 vs 5.2, mean absolute difference = 1.6 vs 1.9, respectively; all P < .05). There was bias in GLS measurements in 13 of 24 test-retest interreader scenarios (largest bias, 3.2 strain units). In contrast, there was no bias in measurements by AI. Beat-to-beat MDCs were 1.5, 2.1, and 2.3 for AI and the 2 readers, respectively. Processing time for analyses of GLS by the AI method was 7.9 ± 2.8 seconds.
Conclusion: A fast AI method for automated measurements of LV GLS reduced test-retest variability and removed bias between readers in both test-retest data sets. By improving precision and reproducibility, AI may increase the clinical utility of echocardiography.
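The study's headline metric is the minimal detectable change (MDC) between test and retest measurements. The abstract does not give the formula used; a standard convention is MDC = z · √2 · SEM with SEM = SD(differences)/√2, which reduces to z times the standard deviation of the paired differences, sketched below under that assumption.

```python
import math

def minimal_detectable_change(test, retest, z=1.96):
    """MDC from paired test-retest measurements. Uses the common
    convention MDC = z * sqrt(2) * SEM, where SEM = SD(diff) / sqrt(2),
    i.e. MDC = z * SD of the paired differences (the paper's exact
    formula is not stated in the abstract)."""
    diffs = [a - b for a, b in zip(test, retest)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return z * sd
```

A lower MDC means a smaller true change in GLS can be distinguished from measurement noise, which is why the AI values (3.7 and 3.9) improve on the interreader values (5.5 and 5.2).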
Automatic Labeling of Special Diagnostic Mammography Views from Images and DICOM Headers.
Applying state-of-the-art machine learning techniques to medical images requires thorough selection and normalization of input data. One such step in digital mammography screening for breast cancer is the labeling and removal of special diagnostic views, in which diagnostic tools or magnification are applied to assist in the assessment of suspicious initial findings. Because a common task in medical informatics is prediction of disease and its stage, these special diagnostic views, which are enriched only among the cohort of diseased cases, will bias machine learning disease predictions. To automate this process, we develop a machine learning pipeline that utilizes both DICOM headers and images to predict such views automatically, allowing for their removal and the generation of unbiased datasets. We achieve an AUC of 99.72% in predicting special mammogram views when combining both types of models. Finally, we apply these models to clean a dataset of about 772,000 images with an expected sensitivity of 99.0%. The pipeline presented in this paper can be applied to other datasets to obtain high-quality image sets suitable for training algorithms for disease detection.
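The pipeline combines a DICOM-header model with an image model. One simple way such a combination can work is a two-stage check: trust unambiguous header fields first, then fall back to the image model's probability. The field name, values, and threshold below are purely illustrative assumptions, not the paper's actual pipeline.

```python
def is_special_view(header, image_prob, threshold=0.5):
    """Hypothetical two-stage fusion: flag a mammogram as a special
    diagnostic view if a (simplified, illustrative) header field
    declares magnification or spot compression; otherwise fall back
    to the image model's predicted probability."""
    view_modifier = header.get("view_modifier", "")  # hypothetical key
    if "MAGNIFICATION" in view_modifier or "SPOT" in view_modifier:
        return True
    return image_prob >= threshold
```

Header rules alone miss views with missing or inconsistent metadata, which is the motivation the abstract gives for also training an image-based model.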
Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis
Advances in self-supervised learning (SSL) have shown that self-supervised
pretraining on medical imaging data can provide a strong initialization for
downstream supervised classification and segmentation. Given the difficulty of
obtaining expert labels for medical image recognition tasks, such an
"in-domain" SSL initialization is often desirable due to its improved label
efficiency over standard transfer learning. However, most efforts toward SSL of medical imaging data are not adapted to video-based medical imaging modalities. To address this gap, we developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos with the goal of learning
strong representations for efficient fine-tuning on downstream cardiac disease
diagnosis. EchoCLR leverages (i) distinct videos of the same patient as
positive pairs for contrastive learning and (ii) a frame re-ordering pretext
task to enforce temporal coherence. When fine-tuned on small portions of
labeled data (as few as 51 exams), EchoCLR pretraining significantly improved
classification performance for left ventricular hypertrophy (LVH) and aortic
stenosis (AS) over other transfer learning and SSL approaches across internal
and external test sets. For example, when fine-tuning on 10% of available
training data (519 studies), an EchoCLR-pretrained model achieved 0.72 AUROC
(95% CI: [0.69, 0.75]) on LVH classification, compared to 0.61 AUROC (95% CI:
[0.57, 0.64]) with a standard transfer learning approach. Similarly, using 1%
of available training data (53 studies), EchoCLR pretraining achieved 0.82
AUROC (95% CI: [0.79, 0.84]) on severe AS classification, compared to 0.61
AUROC (95% CI: [0.58, 0.65]) with transfer learning. EchoCLR is unique in its
ability to learn representations of medical videos and demonstrates that SSL
can enable label-efficient disease classification from small, labeled datasets
- …
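EchoCLR treats distinct videos of the same patient as positive pairs for contrastive learning. The standard objective for such setups is a SimCLR-style normalized-temperature cross-entropy (NT-Xent) loss; the abstract does not spell out EchoCLR's exact loss, so the sketch below is the generic formulation for a single anchor embedding.

```python
import math

def nt_xent_loss(anchor, positive, negatives, temperature=0.1):
    """SimCLR-style NT-Xent loss for one anchor: pull the positive
    embedding (e.g. another video of the same patient) close and push
    negatives (other patients) away. Generic formulation; EchoCLR's
    exact loss is not given in the abstract."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss is near zero when the anchor aligns with its positive and is large when it aligns with a negative instead, which is the signal that shapes the pretrained representation before fine-tuning on small labeled sets.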