
    Interpretability of time-series deep learning models: A study in cardiovascular patients admitted to Intensive care unit

    Interpretability is fundamental in healthcare problems, and its lack in deep learning models is currently the major barrier to the use of such powerful algorithms in the field. This study describes the implementation of an attention layer for a Long Short-Term Memory (LSTM) neural network that provides a useful picture of the influence of the several input variables included in the model. A cohort of 10,616 patients with cardiovascular diseases was selected from the MIMIC III dataset, an openly available database of electronic health records (EHRs) including all patients admitted to an ICU at Boston's Medical Centre. For each patient, we consider a sequence of ten 1-hour windows in which 48 clinical parameters are extracted to predict the occurrence of death in the next 7 days. Inspired by recent developments in attention mechanisms for sequential data, we implement a recurrent neural network with LSTM cells incorporating an attention mechanism to identify the features driving the model's decisions over time. The performance of the LSTM model, measured in terms of AUC, is 0.790 (SD = 0.015). Regarding our primary objective, i.e. model interpretability, we investigate the role of attention weights. We find good correspondence with the driving predictors of a transparent model (r = 0.611, 95% CI [0.395, 0.763]). Moreover, the most influential features identified at the cohort level emerge as known risk factors in the clinical context. Despite the limitations of the study dataset, this work brings further evidence of the potential of attention mechanisms in making deep learning models more interpretable and suggests the application of this strategy to the sequential analysis of EHRs.
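    The kind of attention layer the abstract describes can be illustrated with a minimal numpy sketch (this is not the study's implementation; the scoring vector `v`, the dimensions, and the random states are invented for the example). Attention assigns each time window a positive weight, and the weights sum to one across the sequence:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # numerically stabilized softmax
    return e / e.sum()

def attention_pool(hidden, v):
    """hidden: (T, d) sequence of LSTM hidden states; v: (d,) scoring vector.
    Returns the attention-weighted context vector and per-timestep weights."""
    scores = hidden @ v              # one relevance score per time window
    alpha = softmax(scores)          # weights are positive and sum to 1
    context = alpha @ hidden         # weighted average of hidden states
    return context, alpha

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 8))         # e.g. ten 1-hour windows, 8 hidden units
v = rng.normal(size=8)
context, alpha = attention_pool(H, v)
```

    Inspecting `alpha` across the ten windows is what lets one read off which time steps drove a prediction — the interpretability signal the study correlates with the predictors of a transparent model.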

    Comparison of discrimination and calibration performance of ECG-based machine learning models for prediction of new-onset atrial fibrillation

    Abstract Background Machine learning (ML) methods that build prediction models starting from electrocardiogram (ECG) signals are an emerging research field. The aim of the present study is to investigate the performance of two ML approaches based on ECGs for the prediction of new-onset atrial fibrillation (AF), in terms of discrimination, calibration and sample size dependence. Methods We trained two models to predict new-onset AF: a convolutional neural network (CNN), which takes the raw ECG signals as input, and an eXtreme Gradient Boosting model (XGB), which uses features extracted from the signal. A penalized logistic regression model (LR) was used as a benchmark. Discrimination was evaluated with the area under the ROC curve, and calibration with the integrated calibration index. We investigated the dependence of the models' performance on the sample size and on class imbalance corrections introduced with random under-sampling. Results The CNN's discrimination was the most affected by the sample size, outperforming XGB and LR only around n = 10,000 observations. Calibration showed only a small dependence on the sample size for all the models considered. Balancing the training set with random under-sampling did not improve discrimination in any of the models. Instead, the main effect of imbalance corrections was to worsen the models' calibration (for the CNN, integrated calibration index from 0.014 [0.01, 0.018] to 0.17 [0.16, 0.19]). The sample size emerged as a fundamental factor in developing the CNN model, especially in terms of discrimination (AUC = 0.75 [0.73, 0.77] when n = 10,000, AUC = 0.80 [0.79, 0.81] when n = 150,000). The effect of the sample size on the other two models was weaker. Imbalance corrections led to poorly calibrated models for all the approaches considered, reducing their clinical utility.
Conclusions Our results suggest that the choice of approach for the analysis of ECGs should be based on the amount of data available, preferring more standard models for small datasets. Moreover, imbalance correction methods should be avoided when developing clinical prediction models, where calibration is crucial.
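    The integrated calibration index reported above summarizes the gap between predicted probabilities and observed event rates. A minimal sketch of the idea (binned observed rates as a crude stand-in for the loess-smoothed calibration curve normally used; not the paper's code, and the toy data are invented):

```python
import numpy as np

def ici_binned(y_true, y_prob, n_bins=10):
    """Crude integrated calibration index: mean absolute gap between each
    predicted probability and the observed event rate in its bin."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    observed = np.array([y_true[bins == b].mean() if (bins == b).any() else np.nan
                         for b in range(n_bins)])
    return float(np.nanmean(np.abs(y_prob - observed[bins])))

y = np.array([0, 1] * 50)                     # 50% event rate
ici_good = ici_binned(y, np.full(100, 0.5))   # predictions match the rate
ici_bad = ici_binned(y, np.full(100, 0.9))    # systematically overconfident
```

    This makes the paper's point concrete: under-sampling shifts predicted probabilities away from the observed event rates, inflating exactly this index even when discrimination is unchanged.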

    Multi-state modelling of heart failure care path: A population-based investigation from Italy.

    How different risk profiles of heart failure (HF) patients influence multiple readmissions and outpatient management is largely unknown. We propose the application of two multi-state models in a real-world setting to jointly evaluate the impact of different risk factors on multiple hospital admissions, Integrated Home Care (IHC) activations, Intermediate Care Unit (ICU) admissions and death. The first model (model 1) considers only hospitalizations as possible events and aims at detecting the determinants of repeated hospitalizations. The second model (model 2) considers both hospitalizations and ICU/IHC events and aims at evaluating which profiles are associated with transitions to intermediate care with respect to repeated hospitalizations or death. Both are characterized by transition-specific covariates, adjusting for risk factors. We identified 4,904 patients (4,129 de novo and 775 worsening heart failure, WHF) hospitalized for HF from 2009 to 2014; 2,714 (55%) patients died. Advanced age and a higher morbidity load increased the rate of dying and of being rehospitalized (model 1), decreased the rate of being discharged from hospital (models 1 and 2) and increased the rate of inactivation of IHC (model 2). WHF was an important risk factor associated with hospital readmission. Multi-state models enable a better identification of two patterns of HF patients. Once adjusted for age and comorbidity load, the WHF condition identifies patients who are more likely to be readmitted to hospital, but it does not represent an increasing risk factor for activating ICU/IHC. This highlights different ways to manage specific patterns of patient care. These results provide useful healthcare support to patients' management in a real-world context. Our study suggests that the epidemiology of the considered clinical characteristics is more nuanced than traditionally presented through a single event.
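    To make the multi-state setup concrete, the transition structure of a model like model 1 can be encoded as a directed graph over states (a schematic sketch with invented state names, truncated to two hospitalizations; the actual models additionally carry transition-specific covariates and hazards):

```python
# Allowed transitions: "In"/"Out" index repeated hospital stays, "D" is death.
TRANSITIONS = {
    "In1": ["Out1", "D"],    # first admission: discharge or in-hospital death
    "Out1": ["In2", "D"],    # after discharge: readmission or death at home
    "In2": ["Out2", "D"],
    "Out2": ["D"],
    "D": [],                 # death is absorbing: no outgoing transitions
}

def is_absorbing(state):
    return not TRANSITIONS[state]

def reachable(start):
    """All states reachable from `start` by following allowed transitions."""
    seen, stack = set(), [start]
    while stack:
        for nxt in TRANSITIONS[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

    Fitting then amounts to estimating a hazard for each arrow in this graph, which is what lets the models separate, e.g., the rate of being rehospitalized from the rate of being discharged.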

    Deep-learning-based prognostic modeling for incident heart failure in patients with diabetes using electronic health records: A retrospective cohort study.

    Patients with type 2 diabetes mellitus (T2DM) have more than twice the risk of developing heart failure (HF) compared to patients without diabetes. The present study aims to build an artificial intelligence (AI) prognostic model that takes into account a large and heterogeneous set of clinical factors and investigates the risk of developing HF in diabetic patients. We carried out an electronic health records (EHR)-based retrospective cohort study that included patients with a cardiological clinical evaluation and no previous diagnosis of HF. The information consists of features extracted from clinical and administrative data obtained as part of routine medical care. The primary endpoint was a diagnosis of HF (during an out-of-hospital clinical examination or a hospitalization). We developed two prognostic models using (1) elastic net regularization for the Cox proportional hazards model (COX) and (2) a deep neural network survival method (PHNN), in which a neural network is used to represent a non-linear hazard function and explainability strategies are applied to estimate the influence of predictors on the risk function. Over a median follow-up of 65 months, 17.3% of the 10,614 patients developed HF. The PHNN model outperformed COX both in terms of discrimination (c-index 0.768 vs 0.734) and calibration (2-year integrated calibration index 0.008 vs 0.018). The AI approach led to the identification of 20 predictors from different domains (age, body mass index, echocardiographic and electrocardiographic features, laboratory measurements, comorbidities, therapies) whose relationship with the predicted risk corresponds to known trends in clinical practice. Our results suggest that prognostic models for HF in diabetic patients may be improved by using EHRs in combination with AI techniques for survival analysis, which provide high flexibility and better performance than standard approaches.
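    The PHNN idea — replacing the Cox model's linear predictor with a neural network while keeping the proportional-hazards form — can be sketched as a forward pass (a minimal numpy illustration with invented weights and a toy baseline hazard; the study's architecture, training procedure and explainability steps are not reproduced here):

```python
import numpy as np

def risk_score(x, W1, b1, w2):
    """Non-linear risk g(x): one hidden ReLU layer standing in for the
    Cox linear predictor beta @ x."""
    return np.maximum(0.0, x @ W1 + b1) @ w2

def survival(t, H0, g):
    """Proportional hazards: S(t | x) = exp(-H0(t) * exp(g(x))),
    where H0 is the baseline cumulative hazard."""
    return np.exp(-H0(t) * np.exp(g))

rng = np.random.default_rng(1)
x = rng.normal(size=5)                          # 5 predictors for one patient
W1, b1, w2 = rng.normal(size=(5, 4)), rng.normal(size=4), rng.normal(size=4)
g = risk_score(x, W1, b1, w2)
t = np.array([0.0, 12.0, 24.0])                 # months of follow-up
S = survival(t, lambda u: 0.01 * u, g)          # toy baseline cumulative hazard
```

    Because only g(x) changes across patients, attribution methods applied to g directly rank the predictors by their influence on risk, which is how the 20 predictors above can be surfaced.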


    HF progression among outpatients with HF in a community setting

    Background: The incidence and prognostic impact of heart failure (HF) progression have not been well addressed. Methods: From 2009 until 2015, consecutive ambulatory HF patients were recruited. HF progression was defined by the presence of at least two of the following criteria: step up of ≥1 New York Heart Association (NYHA) class; decrease in LVEF of ≥10 points; addition of diuretics or increase of ≥50% in furosemide dosage; or HF hospitalization. Results: 2,528 patients met the study criteria (mean age 76; 42% women). Of these, 48% had ischemic heart disease and 18% had LVEF ≤35%. During a median follow-up of 2.4 years, overall mortality was 31% (95% CI: 29%–33%), whereas the rate of HF progression or death was 57% (95% CI: 55%–59%). The 4-year incidence of HF progression was 39% (95% CI: 37%–41%), whereas the competing mortality rate was 18% (95% CI: 16%–19%). Rates of HF progression and death were higher in HF patients with LVEF ≤35% vs >35% (HF progression: 42% vs 38%, p = 0.012; death as a competing risk: 22% vs 17%, p = 0.002). HF progression identified HF patients with a worse survival (HR = 3.16, 95% CI: 2.75–3.72). In cause-specific Cox models, age, previous HF hospitalization, chronic obstructive pulmonary disease, chronic kidney disease, anemia, sex, and LVEF ≤35% emerged as prognostic factors of HF progression. Conclusions: Among outpatients with HF, at 4 years 39% presented HF progression, while 18% died before any sign of HF progression. This trend was stronger in patients with LVEF ≤35%. These findings may have implications for healthcare planning and resource allocation.
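    The competing-risks quantities reported above (incidence of progression vs. death before progression) can be illustrated with a minimal cumulative incidence computation (toy data, no censoring; real analyses of this kind use Aalen–Johansen-type estimators that handle censoring):

```python
import numpy as np

def cumulative_incidence(times, events, cause, t_grid):
    """Cumulative incidence of one cause in the presence of a competing
    cause: the fraction of subjects who experienced that cause by time t.
    Valid as written only when there is no censoring."""
    times, events = np.asarray(times), np.asarray(events)
    return np.array([np.mean((times <= t) & (events == cause)) for t in t_grid])

# toy follow-up: event 1 = HF progression, event 2 = death before progression
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 2, 1, 2])
grid = np.array([2.5, 5.0])
cif_prog = cumulative_incidence(times, events, 1, grid)
cif_death = cumulative_incidence(times, events, 2, grid)
```

    Because death competes with progression, the two incidences partition the events: reporting 39% progression alongside an 18% competing mortality, as the abstract does, keeps the two causes from being double-counted.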

    Diagram of model 1.

    The first five hospitalizations are considered. “In” stands for admission to hospital, “Out” for discharge from hospital and “D” for death. Patients with a first admission for HF are considered. No distinction was made between rehospitalization for heart failure and rehospitalization for any cause.

    Diagram of model 2.

    The state space consists of all the possible events described in the dataset: admission to hospital (H), admission to ICU or IHC, discharge from any state (OUT), and death.