2 research outputs found

    EchoGLAD: Hierarchical Graph Neural Networks for Left Ventricle Landmark Detection on Echocardiograms

    The functional assessment of the left ventricle chamber of the heart requires detecting four landmark locations and measuring the internal dimension of the left ventricle and the approximate mass of the surrounding muscle. The key challenge in automating this task with machine learning is the sparsity of clinical labels: only a few landmark pixels in a high-dimensional image are annotated, leading many prior works to rely heavily on isotropic label smoothing. Such a label smoothing strategy, however, ignores the anatomical information of the image and introduces bias. To address this challenge, we introduce an echocardiogram-based hierarchical graph neural network (GNN) for left ventricle landmark detection (EchoGLAD). Our main contributions are: 1) a hierarchical graph representation learning framework for multi-resolution landmark detection via GNNs; 2) induced hierarchical supervision at different levels of granularity using a multi-level loss. We evaluate our model on a public and a private dataset under in-distribution (ID) and out-of-distribution (OOD) settings. In the ID setting, we achieve state-of-the-art mean absolute errors (MAEs) of 1.46 mm and 1.86 mm on the two datasets. Our model also shows better OOD generalization than prior works, with a test MAE of 4.3 mm.
    Comment: To be published in MICCAI 2023.
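
    The multi-level supervision described in contribution 2) can be illustrated with a minimal sketch (not the authors' code; the class and function names, grid resolutions, and loss weights below are assumptions): each resolution level of the hierarchy gets its own per-node landmark labels and a weighted binary cross-entropy term, so coarse levels provide denser supervision while the finest level localizes the landmark pixels.

```python
# Hypothetical sketch of hierarchical (multi-level) landmark supervision.
# In the actual model, per-level logits would come from GNN message passing
# over a hierarchical grid graph; only the loss side is sketched here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grid_landmark_labels(landmarks_xy, grid_size, image_size):
    """Binary per-node labels for one resolution level: a grid cell is
    positive if it contains a ground-truth landmark point."""
    labels = torch.zeros(grid_size * grid_size)
    cell = image_size / grid_size
    for x, y in landmarks_xy:
        gx, gy = int(x // cell), int(y // cell)
        labels[gy * grid_size + gx] = 1.0
    return labels

class MultiLevelLandmarkLoss(nn.Module):
    """Weighted sum of a BCE term per hierarchy level."""
    def __init__(self, level_weights):
        super().__init__()
        self.level_weights = level_weights

    def forward(self, logits_per_level, labels_per_level):
        loss = 0.0
        for w, logits, labels in zip(self.level_weights, logits_per_level, labels_per_level):
            # pos_weight counters the extreme sparsity of landmark nodes
            pos = labels.sum().clamp(min=1.0)
            pos_weight = (labels.numel() - labels.sum()).clamp(min=1.0) / pos
            loss = loss + w * F.binary_cross_entropy_with_logits(
                logits, labels, pos_weight=pos_weight)
        return loss

# Toy usage: three levels (4x4, 16x16, 64x64 grids) over a 224x224 echo frame
# with four (made-up) landmark coordinates.
landmarks = [(53.0, 120.0), (150.0, 118.0), (60.0, 40.0), (140.0, 38.0)]
levels = [4, 16, 64]
labels = [grid_landmark_labels(landmarks, g, 224) for g in levels]
logits = [torch.randn(g * g) for g in levels]
criterion = MultiLevelLandmarkLoss(level_weights=[0.2, 0.3, 0.5])
print(criterion(logits, labels))
```

    The per-level positive weighting is one plausible way to handle the label sparsity the abstract highlights; the paper's exact weighting and graph construction may differ.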

    Automated Atrial Fibrillation Diagnosis by Echocardiography without ECG: Accuracy and Applications of a New Deep Learning Approach

    Background: Automated rhythm detection on echocardiography through artificial intelligence (AI) has yet to be fully realized. We propose an AI model trained to identify atrial fibrillation (AF) using apical 4-chamber (AP4) cines without requiring electrocardiogram (ECG) data. Methods: Transthoracic echocardiography studies of consecutive patients ≥ 18 years old at our tertiary care centre were retrospectively reviewed for AF and sinus rhythm. Each study was first interpreted by level III-trained echocardiography cardiologists, whose rhythm diagnosis, based on the ECG rhythm strip and imaging assessment and verified against a 12-lead ECG obtained around the time of the study, served as the gold standard. AP4 cines spanning three cardiac cycles were then extracted from these studies, with the rhythm strip and Doppler information removed, and fed to the deep learning model ResNet(2+1)D using an 80:10:10 training–validation–test split. Results: 634 patient studies (1205 cines) were included. After training, the AI model achieved high accuracy on the validation set for detection of both AF and sinus rhythm (mean F1-score = 0.92; AUROC = 0.95). Performance was consistent on the test set (mean F1-score = 0.94; AUROC = 0.98) when judged against the assessment of the cardiologist, who had access to the full study and external ECG data that the AI model did not. Conclusions: AF detection by AI on echocardiography without ECG appears accurate when compared with an echocardiography cardiologist's assessment of the ECG rhythm strip as the gold standard. This has potential clinical implications for point-of-care ultrasound and stroke risk stratification.
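
    As a rough illustration of the model setup described above (a sketch under assumptions, not the study's code: the clip shape, preprocessing, and two-class head are guesses), torchvision's R(2+1)D-18 video backbone can be repurposed for binary AF-versus-sinus-rhythm classification of AP4 cines:

```python
# Hypothetical sketch: R(2+1)D video classifier for AF vs. sinus rhythm.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

class AFClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = r2plus1d_18()  # (2+1)D ResNet video backbone from torchvision
        # Replace the 400-class Kinetics head with a 2-class head (AF, sinus rhythm)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, clips):
        # clips: (batch, channels=3, frames, height, width)
        return self.backbone(clips)

model = AFClassifier()
# Toy batch: 2 cines, grayscale frames replicated to 3 channels, 32 frames of 112x112
clips = torch.rand(2, 3, 32, 112, 112)
logits = model(clips)                 # shape (2, 2)
print(logits.softmax(dim=-1))
```

    In practice the extracted three-cycle cines would need to be resampled to a fixed frame count and the model trained on the reported 80:10:10 split; none of that preprocessing is shown here.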