27 research outputs found

    Fast and accurate classification of echocardiograms using deep learning

    Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, keeping echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent was reserved for validation and testing on never-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation. Comment: 31 pages, 8 figures
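The evaluation described above rests on per-view test accuracy and confusion matrices. As a minimal numpy sketch (not the authors' implementation; the toy labels below are illustrative), the two metrics can be computed like this:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_views):
    """Count how often each true view (rows) is predicted as each view (columns)."""
    m = np.zeros((n_views, n_views), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def overall_accuracy(cm):
    """Overall test accuracy: correctly classified images over all images."""
    return np.trace(cm) / cm.sum()

# Toy example with 3 hypothetical view classes (the paper uses 15).
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
cm = confusion_matrix(y_true, y_pred, n_views=3)
acc = overall_accuracy(cm)  # 5 of 6 images correct
```

Off-diagonal mass in `cm` is what reveals which related views the model confuses, the pattern the occlusion and saliency analyses then explain.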

    Feature Extraction Based on ORB- AKAZE for Echocardiogram View Classification

    In computer vision, the extraction of robust features from images to construct models that automate image recognition and classification tasks is a prominent field of research. Handcrafted feature extraction and representation techniques become critical when dealing with limited hardware resources, low-quality images, and larger datasets. We propose two state-of-the-art handcrafted feature extraction techniques, Oriented FAST and Rotated BRIEF (ORB) and Accelerated KAZE (AKAZE), in combination with Bag of Visual Words (BOVW), to classify standard echocardiogram views using machine learning (ML) algorithms. These approaches, ORB and AKAZE, which are rotation-, scale-, illumination-, and noise-invariant methods, outperform traditional methods. The despeckling algorithm Speckle Reducing Anisotropic Diffusion (SRAD), which is based on a partial differential equation (PDE), was applied to the echocardiogram images before feature extraction. Support vector machine (SVM), decision tree, and random forest algorithms correctly classified the feature vectors obtained from ORB with accuracy rates of 96.5%, 76%, and 97.7%, respectively. With AKAZE features, the SVM, decision tree, and random forest algorithms outperformed state-of-the-art techniques with accuracy rates of 97.7%, 90%, and 99%, respectively.
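The BOVW step above turns a variable-sized set of local descriptors (from ORB or AKAZE) into one fixed-length vector a classifier can consume. A minimal numpy sketch of that encoding, with random toy descriptors standing in for real keypoint descriptors and a pre-learned vocabulary:

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor (e.g. from ORB or AKAZE) to its nearest
    visual word and return a normalised word-count histogram."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=-1)
    words = d.argmin(axis=1)                   # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                   # L1-normalised histogram

# Toy data: 5 descriptors and a 3-word vocabulary (dimensions are illustrative;
# in practice the vocabulary comes from k-means over training descriptors).
rng = np.random.default_rng(0)
desc = rng.normal(size=(5, 8))
vocab = rng.normal(size=(3, 8))
h = bovw_histogram(desc, vocab)
```

The resulting histogram `h` is what the SVM, decision tree, and random forest classifiers receive as input.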

    A fused deep learning architecture for viewpoint classification of echocardiography

    This study extends the state of the art of the deep learning convolutional neural network (CNN) to the classification of video images of echocardiography, aiming at assisting clinicians in the diagnosis of heart disease. Specifically, the architecture is established by embracing hand-crafted features within a data-driven learning framework, incorporating both the spatial and temporal information sustained by the video images of the moving heart and giving rise to two strands of two-dimensional CNN. In particular, the acceleration along the time direction at each point is calculated using a dense optical flow technique to represent temporal motion information. Subsequently, the two networks are fused via linear integration of the vectors of class scores obtained from each network. This architecture achieves the best classification results for eight viewpoint categories of echo videos, with a 92.1% accuracy rate, whereas 89.5% is achieved using only the single spatial CNN. When only three primary locations are considered, an accuracy rate of 98% is realised. In addition, comparisons with a number of well-known hand-engineered approaches are also performed, including 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT, and 3D SIFT, which deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8%, and 73.8%, respectively.
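The fusion step described above is a linear combination of the per-class score vectors from the spatial and temporal networks. A minimal sketch, with a hypothetical weight `alpha` and toy score vectors (the abstract does not state the weights actually used):

```python
import numpy as np

def fuse_scores(spatial_scores, temporal_scores, alpha=0.5):
    """Late fusion: weighted linear combination of two networks' class scores."""
    return alpha * spatial_scores + (1.0 - alpha) * temporal_scores

spatial = np.array([0.7, 0.2, 0.1])    # class scores from the spatial CNN (toy)
temporal = np.array([0.5, 0.4, 0.1])   # class scores from the temporal CNN (toy)
fused = fuse_scores(spatial, temporal, alpha=0.6)
pred = int(fused.argmax())             # predicted viewpoint index
```

Because the combination is linear, fusing normalised score vectors yields a vector that still sums to one, and a view that both networks rank highly stays on top.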

    Automated interpretation of systolic and diastolic function on the echocardiogram: a multicohort study

    Background: Echocardiography is the diagnostic modality for assessing cardiac systolic and diastolic function to diagnose and manage heart failure. However, manual interpretation of echocardiograms can be time-consuming and subject to human error. Therefore, we developed a fully automated deep learning workflow to classify, segment, and annotate two-dimensional (2D) videos and Doppler modalities in echocardiograms. Methods: We developed the workflow using a training dataset of 1145 echocardiograms and an internal test set of 406 echocardiograms from the prospective heart failure research platform (Asian Network for Translational Research and Cardiovascular Trials; ATTRaCT) in Asia, with previous manual tracings by expert sonographers. We validated the workflow against manual measurements in a curated dataset from Canada (Alberta Heart Failure Etiology and Analysis Research Team; HEART; n=1029 echocardiograms), a real-world dataset from Taiwan (n=31 241), the US-based EchoNet-Dynamic dataset (n=10 030), and in an independent prospective assessment of the Asian (ATTRaCT) and Canadian (Alberta HEART) datasets (n=142) with repeated independent measurements by two expert sonographers. Findings: In the ATTRaCT test set, the automated workflow classified 2D videos and Doppler modalities with accuracies (number of correct predictions divided by the total number of predictions) ranging from 0·91 to 0·99. Segmentations of the left ventricle and left atrium were accurate, with a mean Dice similarity coefficient greater than 93% for all.
In the external datasets (n=1029 to 10 030 echocardiograms used as input), automated measurements showed good agreement with locally measured values, with a mean absolute error range of 9–25 mL for left ventricular volumes, 6–10% for left ventricular ejection fraction (LVEF), and 1·8–2·2 for the ratio of the mitral inflow E wave to the tissue Doppler e' wave (E/e' ratio); and reliably classified systolic dysfunction (LVEF <40%, area under the receiver operating characteristic curve [AUC] range 0·90–0·92) and diastolic dysfunction (E/e' ratio ≥13, AUC range 0·91–0·91), with narrow 95% CIs for AUC values. Independent prospective evaluation confirmed less variance of automated compared with human expert measurements, with all individual equivalence coefficients being less than 0 for all measurements. Interpretation: Deep learning algorithms can automatically annotate 2D videos and Doppler modalities with similar accuracy to manual measurements by expert sonographers. Use of an automated workflow might accelerate access, improve quality, and reduce costs in diagnosing and managing heart failure globally. Funding: A*STAR Biomedical Research Council and A*STAR Exploit Technologies.
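The findings above are reported through two standard metrics: the Dice similarity coefficient for segmentation overlap and the mean absolute error for agreement between automated and manual measurements. A minimal numpy sketch of both (the 4x4 masks and LVEF values are toy data, not from the study):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mean_absolute_error(automated, manual):
    """Mean absolute difference between paired measurements."""
    return np.abs(np.asarray(automated) - np.asarray(manual)).mean()

# Toy 4x4 segmentation masks standing in for left-ventricle tracings.
auto_mask = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
manual_mask = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
dice = dice_coefficient(auto_mask, manual_mask)          # 2*3 / (4+3)

# Illustrative paired LVEF measurements (%), automated vs manual.
mae = mean_absolute_error([55.0, 60.0], [50.0, 58.0])
```

A Dice coefficient near 1 means near-perfect overlap; the study's reported mean Dice above 93% corresponds to very close agreement with the sonographers' tracings.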

    An improved classification approach for echocardiograms embedding temporal information

    Cardiovascular disease is an umbrella term for all diseases of the heart. At present, computer-aided echocardiogram diagnosis is becoming increasingly beneficial. In echocardiography, different cardiac views can be acquired depending on the location and angulation of the ultrasound transducer. Hence, automatic echocardiogram view classification is the first step of echocardiogram diagnosis, especially for a computer-aided system and even for automatic diagnosis in the future. In addition, heart-view classification makes it possible to label images, especially for large-scale echo video collections, and provides a facility for database management. This thesis presents a framework for automatic cardiac viewpoint classification of echocardiogram video data. In this research, we aim to overcome the challenges facing this investigation while analyzing, recognizing, and classifying echocardiogram videos in 3D (2D spatial and 1D temporal) space. Specifically, we extend the 2D KAZE approach into 3D space for feature detection and propose a histogram of acceleration as the feature descriptor. Feature encoding follows before the application of an SVM to classify the echo videos. In addition, comparison with state-of-the-art methodologies also takes place, including 2D SIFT, 3D SIFT, and an optical flow technique to extract the temporal information sustained in the video images. As a result, 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT, and 3D SIFT deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8%, and 73.8%, respectively, for the eight view classes of echo videos.
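The histogram-of-acceleration descriptor proposed above summarises how pixel intensities accelerate over time. As a rough numpy sketch of the idea only (the thesis's exact descriptor construction, binning, and normalisation are not specified here; the clip below is random toy data), acceleration can be estimated as the second temporal difference of a frame stack and then binned:

```python
import numpy as np

def acceleration_histogram(frames, bins=8):
    """Second temporal difference of a frame stack as a crude per-pixel
    acceleration estimate, summarised as a normalised histogram."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W)
    # a_t = x_{t+2} - 2*x_{t+1} + x_t  (discrete second derivative in time)
    accel = np.diff(frames, n=2, axis=0)
    counts, _ = np.histogram(accel, bins=bins)
    h = counts.astype(float)
    return h / h.sum()                         # normalised descriptor

rng = np.random.default_rng(1)
clip = rng.random((6, 4, 4))                   # toy 6-frame, 4x4-pixel clip
h = acceleration_histogram(clip)
```

A fixed-length vector like `h`, computed per video (or per spatio-temporal region), is the kind of descriptor that feature encoding and the SVM then operate on.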

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables