Electrocardiographic Deep Learning for Predicting Post-Procedural Mortality
Background. Pre-operative risk assessments used in clinical practice are
limited in their ability to identify risk for post-operative mortality. We
hypothesize that electrocardiograms contain hidden risk markers that can help
prognosticate post-operative mortality. Methods. In a derivation cohort of
45,969 pre-operative patients (age 59 ± 19 years, 55% women), a deep
learning algorithm was developed to leverage waveform signals from
pre-operative ECGs to discriminate post-operative mortality. Model performance
was assessed in a holdout internal test dataset and in two external hospital
cohorts and compared with the Revised Cardiac Risk Index (RCRI) score. Results.
In the derivation cohort, there were 1,452 deaths. The algorithm discriminates
mortality with an AUC of 0.83 (95% CI 0.79-0.87), surpassing the discrimination
of the RCRI score with an AUC of 0.67 (CI 0.61-0.72) in the held-out test
cohort. Patients determined to be high risk by the deep learning model's risk
prediction had an unadjusted odds ratio (OR) of 8.83 (5.57-13.20) for
post-operative mortality as compared to an unadjusted OR of 2.08 (CI 0.77-3.50)
for post-operative mortality for RCRI greater than 2. The deep learning
algorithm performed similarly for patients undergoing cardiac surgery with an
AUC of 0.85 (CI 0.77-0.92), non-cardiac surgery with an AUC of 0.83
(0.79-0.88), and catheterization or endoscopy suite procedures with an AUC of
0.76 (0.72-0.81). The algorithm similarly discriminated risk for mortality in
two separate external validation cohorts from independent healthcare systems
with AUCs of 0.79 (0.75-0.83) and 0.75 (0.74-0.76) respectively. Conclusion.
The findings demonstrate how a novel deep learning algorithm, applied to
pre-operative ECGs, can improve discrimination of post-operative mortality.
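The two headline metrics above, AUC and the unadjusted odds ratio, can be computed from predicted risk scores and a 2×2 risk/outcome table. The following is a minimal illustrative sketch (the data and function names are hypothetical, not from the study):

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U formulation: the probability that a
    randomly chosen positive case scores higher than a random negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise "wins"; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def odds_ratio(a, b, c, d):
    """Unadjusted OR from a 2x2 table:
    a = high-risk & died, b = high-risk & survived,
    c = low-risk  & died, d = low-risk  & survived."""
    return (a * d) / (b * c)
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, and `odds_ratio(20, 80, 5, 95)` returns 4.75. The study's bootstrap confidence intervals would be obtained by repeating such calculations over resampled cohorts.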
Deep learning-based electrocardiographic screening for chronic kidney disease
Background. Undiagnosed chronic kidney disease (CKD) is a common and usually asymptomatic disorder that causes a high burden of morbidity and early mortality worldwide. We developed a deep learning model for CKD screening from routinely acquired ECGs. Methods. We collected data from a primary cohort of 111,370 patients with 247,655 ECGs between 2005 and 2019. Using these data, we developed, trained, validated, and tested a deep learning model to predict whether an ECG was taken within one year of the patient receiving a CKD diagnosis. The model was additionally validated using an external cohort from another healthcare system with 312,145 patients and 896,620 ECGs between 2005 and 2018. Results. Using 12-lead ECG waveforms, our deep learning algorithm achieves discrimination for CKD of any stage with an AUC of 0.767 (95% CI 0.760–0.773) in a held-out test set and an AUC of 0.709 (0.708–0.710) in the external cohort. Our 12-lead ECG-based model performance is consistent across the severity of CKD, with an AUC of 0.753 (0.735–0.770) for mild CKD, an AUC of 0.759 (0.750–0.767) for moderate-severe CKD, and an AUC of 0.783 (0.773–0.793) for ESRD. In patients under 60 years old, our model achieves high performance in detecting any stage of CKD with both 12-lead (AUC 0.843 [0.836–0.852]) and 1-lead ECG waveforms (AUC 0.824 [0.815–0.832]). Conclusions. Our deep learning algorithm is able to detect CKD using ECG waveforms, with stronger performance in younger patients and more severe CKD stages. This ECG algorithm has the potential to augment screening for CKD.
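The training target described above is whether an ECG falls within one year of a CKD diagnosis. A minimal sketch of such labeling logic, assuming a symmetric one-year window around the diagnosis date (the study's exact windowing convention is not specified in the abstract):

```python
from datetime import date

def label_ecg(ecg_date, ckd_dx_date, window_days=365):
    """Return 1 if the ECG was taken within `window_days` of the CKD
    diagnosis (either before or after), else 0.
    Patients with no CKD diagnosis (ckd_dx_date=None) are labeled 0."""
    if ckd_dx_date is None:
        return 0
    return int(abs((ecg_date - ckd_dx_date).days) <= window_days)
```

For example, an ECG from 2010-01-01 paired with a diagnosis on 2010-06-01 would be labeled 1, while one paired with a diagnosis on 2012-01-01 would be labeled 0.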
Blinded, randomized trial of sonographer versus AI cardiac function assessment
Artificial intelligence (AI) has been developed for echocardiography [1-3], although it has not yet been tested with blinding and randomization. Here we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov ID: NCT05140642; no outside funding) of AI versus sonographer initial assessment of left ventricular ejection fraction (LVEF) to evaluate the impact of AI in the interpretation workflow. The primary end point was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (more than 5% change). From 3,769 echocardiographic studies screened, 274 studies were excluded owing to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference of -10.4%, 95% confidence interval: -13.2% to -7.7%, P < 0.001 for non-inferiority, P < 0.001 for superiority). The mean absolute difference between final cardiologist assessment and independent previous cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference of -0.96%, 95% confidence interval: -1.34% to -0.54%, P < 0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between the initial assessments by AI versus the sonographer (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was non-inferior to assessment by sonographers.
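The primary endpoint above is the proportion of studies whose LVEF was substantially changed (by more than 5 percentage points) between the initial assessment and the final cardiologist read. A hedged sketch of that endpoint calculation, with made-up numbers (the function name and threshold convention are illustrative, not from the trial protocol):

```python
def substantial_change_rate(initial_lvef, final_lvef, threshold=5.0):
    """Fraction of studies where the final cardiologist LVEF differs from
    the initial (AI or sonographer) LVEF by more than `threshold`
    percentage points."""
    changed = [abs(f - i) > threshold
               for i, f in zip(initial_lvef, final_lvef)]
    return sum(changed) / len(changed)
```

For example, with initial reads `[50, 60, 40]` and final reads `[54, 70, 40]`, only the second study (a 10-point change) counts as substantially changed, giving a rate of 1/3. Comparing this rate between the AI and sonographer arms yields the reported -10.4% difference.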