
    An improved classification approach for echocardiograms embedding temporal information

    Cardiovascular disease is an umbrella term for all diseases of the heart. Computer-aided echocardiogram diagnosis is becoming increasingly beneficial. In echocardiography, different cardiac views can be acquired depending on the location and angulation of the ultrasound transducer. Automatic echocardiogram view classification is therefore the first step for echocardiogram diagnosis, especially for computer-aided systems and, eventually, fully automatic diagnosis. In addition, heart-view classification makes it possible to label images, especially in large-scale echo video collections, and facilitates database management and curation. This thesis presents a framework for automatic cardiac viewpoint classification of echocardiogram video data. In this research, we aim to overcome the challenges of analysing, recognising and classifying echocardiogram videos in 3D (2D spatial plus 1D temporal) space. Specifically, we extend the 2D KAZE approach into 3D space for feature detection and propose a histogram of acceleration as the feature descriptor. Feature encoding follows, before an SVM is applied to classify the echo videos. We also compare against state-of-the-art methods, including 2D SIFT, 3D SIFT, and an optical-flow technique for extracting the temporal information contained in the video frames. As a result, 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT and 3D SIFT deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8% and 73.8% respectively over the eight view classes of echo videos.
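    The local-features-then-SVM pipeline sketched in this abstract (detect local descriptors, encode each video, classify with an SVM) can be illustrated with a minimal hedged example. The descriptors and class labels below are synthetic placeholders, not the thesis's 3D KAZE or histogram-of-acceleration features; a k-means bag-of-words stands in for the encoding stage.

```python
# Hedged sketch: encode each "video" (a bag of local descriptors) as a
# codeword histogram, then train a linear SVM on the encodings.
# All descriptors here are synthetic stand-ins, not real KAZE features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def encode_video(descriptors, codebook):
    """Encode a video's local descriptors as an L1-normalised histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Two synthetic "view classes", 20 videos each, 50 descriptors per video.
videos, labels = [], []
for cls in range(2):
    for _ in range(20):
        videos.append(rng.normal(loc=cls * 3.0, size=(50, 8)))
        labels.append(cls)

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(videos))
X = np.array([encode_video(v, codebook) for v in videos])
clf = SVC(kernel="linear").fit(X, labels)
acc = clf.score(X, labels)
```

    With two well-separated synthetic classes the encoded histograms are easily separable, so the fitted SVM classifies the training videos nearly perfectly; the real task, with eight visually similar view classes, is far harder.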

    Fast and accurate classification of echocardiograms using deep learning

    Echocardiography is essential to modern cardiology. However, reliance on human interpretation limits high-throughput analysis, keeping echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent reserved for validation and testing on never-before-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation. Comment: 31 pages, 8 figures
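    The clip-level evaluation mentioned above ("using multiple images from each clip") is commonly done by aggregating per-frame predictions. The sketch below assumes a simple average of per-frame class probabilities followed by an argmax; the probabilities are synthetic stand-ins for CNN softmax outputs, and the three-view label set is illustrative, not the study's 12- or 15-view set.

```python
# Hedged sketch: aggregate per-frame softmax probabilities into one
# clip-level view prediction. Probabilities here are synthetic.
import numpy as np

def classify_clip(frame_probs):
    """Average per-frame class probabilities, then take the argmax."""
    return int(np.mean(frame_probs, axis=0).argmax())

# 5 frames x 3 hypothetical views; most frames favour view index 2.
frame_probs = np.array([
    [0.2, 0.1, 0.7],
    [0.1, 0.2, 0.7],
    [0.5, 0.3, 0.2],   # one ambiguous frame is outvoted by the rest
    [0.1, 0.1, 0.8],
    [0.2, 0.2, 0.6],
])
predicted_view = classify_clip(frame_probs)  # -> 2
```

    Averaging probabilities before the argmax lets confident frames outweigh a single ambiguous one, which is one plausible reason clip-level accuracy exceeds single-image accuracy.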

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation-learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables

    The application of KAZE features to the classification of echocardiogram videos

    In the computer vision field, the SIFT and SURF approaches are prevalent for extracting scale-invariant points and have demonstrated a number of advantages. However, when applied to medical images with relatively low contrast between target structures and surrounding regions, these approaches lack the ability to distinguish salient features. Therefore, this research proposes a different approach, extracting feature points using the emerging KAZE method. To categorise a collection of echocardiogram video images, KAZE feature points are coupled with three popular representation methods: bag of words (BOW), sparse coding, and the Fisher vector (FV). In comparison with SIFT features represented using the sparse coding approach, which gives 72% overall performance on the classification of eight viewpoints, KAZE features integrated with BOW, sparse coding or FV improve the performance significantly, with accuracies of 81.09%, 78.85% and 80.8% respectively. When distinguishing only three primary view locations, 97.44% accuracy can be achieved with the KAZE approach, whereas 90% accuracy is realised with SIFT features.

    Automated interpretation of systolic and diastolic function on the echocardiogram: a multicohort study

    Background: Echocardiography is the diagnostic modality for assessing cardiac systolic and diastolic function to diagnose and manage heart failure. However, manual interpretation of echocardiograms can be time consuming and subject to human error. Therefore, we developed a fully automated deep learning workflow to classify, segment, and annotate two-dimensional (2D) videos and Doppler modalities in echocardiograms. Methods: We developed the workflow using a training dataset of 1145 echocardiograms and an internal test set of 406 echocardiograms from the prospective heart failure research platform (Asian Network for Translational Research and Cardiovascular Trials; ATTRaCT) in Asia, with previous manual tracings by expert sonographers. We validated the workflow against manual measurements in a curated dataset from Canada (Alberta Heart Failure Etiology and Analysis Research Team; HEART; n=1029 echocardiograms), a real-world dataset from Taiwan (n=31 241), the US-based EchoNet-Dynamic dataset (n=10 030), and in an independent prospective assessment of the Asian (ATTRaCT) and Canadian (Alberta HEART) datasets (n=142) with repeated independent measurements by two expert sonographers. Findings: In the ATTRaCT test set, the automated workflow classified 2D videos and Doppler modalities with accuracies (number of correct predictions divided by the total number of predictions) ranging from 0·91 to 0·99. Segmentations of the left ventricle and left atrium were accurate, with a mean Dice similarity coefficient greater than 93% for all. 
In the external datasets (n=1029 to 10 030 echocardiograms used as input), automated measurements showed good agreement with locally measured values, with a mean absolute error range of 9–25 mL for left ventricular volumes, 6–10% for left ventricular ejection fraction (LVEF), and 1·8–2·2 for the ratio of the mitral inflow E wave to the tissue Doppler e' wave (E/e' ratio); and reliably classified systolic dysfunction (LVEF <40%, area under the receiver operating characteristic curve [AUC] range 0·90–0·92) and diastolic dysfunction (E/e' ratio ≥13, AUC range 0·91–0·91), with narrow 95% CIs for AUC values. Independent prospective evaluation confirmed less variance of automated compared with human expert measurements, with all individual equivalence coefficients being less than 0 for all measurements. Interpretation: Deep learning algorithms can automatically annotate 2D videos and Doppler modalities with similar accuracy to manual measurements by expert sonographers. Use of an automated workflow might accelerate access, improve quality, and reduce costs in diagnosing and managing heart failure globally. Funding: A*STAR Biomedical Research Council and A*STAR Exploit Technologies
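    The segmentation quality above is reported as a mean Dice similarity coefficient greater than 93%. The Dice coefficient measures overlap between a predicted and a reference binary mask; a minimal sketch, with small synthetic masks standing in for real left-ventricle segmentations:

```python
# Hedged sketch of the Dice similarity coefficient on binary masks.
# The two 8x8 masks are synthetic, not real chamber segmentations.
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

pred = np.zeros((8, 8), dtype=int);  pred[2:6, 2:6] = 1   # 16 pixels
truth = np.zeros((8, 8), dtype=int); truth[3:7, 3:7] = 1  # 16 pixels
score = dice(pred, truth)  # overlap is 3x3 = 9 -> 2*9/32 = 0.5625
```

    A Dice score of 1.0 means perfect overlap, so a mean above 0.93 indicates near-complete agreement with the expert tracings.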

    Segmentation of heart chambers in 2-D heart ultrasounds with deep learning

    Echocardiography is a non-invasive diagnostic imaging technique in which ultrasound waves are used to obtain an image or sequence of the structure and function of the heart. Segmentation of the heart chambers in ultrasound images is a task usually performed by experienced cardiologists, who delineate and extract the shape of both atria and ventricles to obtain important indexes of a patient's heart condition. However, this task is hard to perform accurately due to the poor image quality caused by the equipment and techniques used, and due to the variability across patients and pathologies. Therefore, medical image processing is needed in this particular case to avoid inaccuracy and obtain proper results. Over the last decade, several studies have shown that deep learning techniques are a possible solution to this problem, obtaining good results in automatic segmentation. The major problem with deep learning techniques in medical image processing is the lack of available data to train and test these architectures. In this work we have trained, validated, and tested a convolutional neural network based on the U-Net architecture for 2D echocardiogram chamber segmentation. The data used for training was the B-Mode 4-chamber apical view Echogan dataset, with data augmentation techniques applied. The novelty of this work is the hyperparameter and architecture optimizations that reduce the computation time while obtaining significant training and testing accuracies. Sustainable Development Goals: 3 - Good Health and Well-being
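    Data augmentation, mentioned above as a remedy for the small dataset, typically generates label-preserving variants of each training image. The specific transforms below (flips and a 90-degree rotation) are illustrative assumptions, not necessarily those used in this work:

```python
# Hedged sketch of simple geometric augmentation for a small image
# dataset; the chosen transforms are assumptions for illustration.
import numpy as np

def augment(image):
    """Yield the image plus flipped and rotated variants."""
    yield image
    yield np.fliplr(image)   # horizontal flip
    yield np.flipud(image)   # vertical flip
    yield np.rot90(image)    # 90-degree counter-clockwise rotation

image = np.arange(16).reshape(4, 4)
variants = list(augment(image))  # 4x the original sample count
```

    For segmentation, the same transform must be applied to the image and its mask so the pixel-wise labels stay aligned.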