
    Prediction of delayed graft function after kidney transplantation: comparison between logistic regression and machine learning methods

    Background: Predictive models for delayed graft function (DGF) after kidney transplantation are usually developed using logistic regression. We aimed to evaluate the value of machine learning methods in the prediction of DGF.

    Methods: 497 kidney transplantations from deceased donors at the Ghent University Hospital between 2005 and 2011 were included. A feature elimination procedure was applied to determine the optimal number of features, resulting in 20 selected parameters (24 parameters after conversion to indicator parameters) out of 55 retrospectively collected parameters. Subsequently, nine distinct types of predictive models were fitted using the reduced data set: logistic regression (LR), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), support vector machines (SVMs, using linear, radial basis function, and polynomial kernels), decision tree (DT), random forest (RF), and stochastic gradient boosting (SGB). Performance of the models was assessed by computing sensitivity, positive predictive value, and area under the receiver operating characteristic curve (AUROC) after 10-fold stratified cross-validation. AUROCs of the models were compared pairwise using the Wilcoxon signed-rank test.

    Results: The observed incidence of DGF was 12.5%. DT was not able to discriminate between recipients with and without DGF (AUROC of 52.5%) and was inferior to the other methods. SGB, RF, and polynomial SVM were mainly able to identify recipients without DGF (AUROCs of 77.2%, 73.9%, and 79.8%, respectively) and only outperformed DT. LDA, QDA, radial SVM, and LR were also able to identify recipients with DGF, resulting in higher discriminative capacity (AUROCs of 82.2%, 79.6%, 83.3%, and 81.7%, respectively), which outperformed DT and RF. Linear SVM had the highest discriminative capacity (AUROC of 84.3%), outperforming every method except radial SVM, polynomial SVM, and LDA. However, it was the only method superior to LR.

    Conclusions: The discriminative capacities of LDA, linear SVM, radial SVM, and LR were the only ones above 80%. None of the pairwise AUROC comparisons between these models was statistically significant, except linear SVM outperforming LR. Additionally, the sensitivity of linear SVM in identifying recipients with DGF was among the three highest of all models. For both reasons, the authors believe that linear SVM is the most appropriate method to predict DGF.
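    A minimal sketch of the evaluation protocol described above: stratified 10-fold cross-validation, per-fold AUROCs, and a pairwise Wilcoxon signed-rank test on the fold scores. The synthetic data, the reduced three-model subset, and all hyperparameters below are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: compare classifiers by per-fold AUROC under stratified 10-fold CV,
# then test one pair with the Wilcoxon signed-rank test, as in the abstract.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 497 transplants x 24 indicator features (~12.5% DGF).
X, y = make_classification(n_samples=497, n_features=24,
                           weights=[0.875], random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    # roc_auc scoring uses the SVM's decision scores, so no probabilities needed.
    "linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
}

# Fixed random_state => identical folds for every model (paired comparison).
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_aurocs = {name: cross_val_score(m, X, y, cv=cv, scoring="roc_auc")
               for name, m in models.items()}

for name, scores in fold_aurocs.items():
    print(f"{name}: mean AUROC = {scores.mean():.3f}")

# Pairwise comparison of the per-fold AUROCs.
stat, p = wilcoxon(fold_aurocs["linear SVM"], fold_aurocs["LR"])
print(f"linear SVM vs LR: Wilcoxon p = {p:.3f}")
```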

    Autonomic care platform for optimizing query performance

    Background: As the amount of information in electronic health care systems increases, data operations become more complicated and time-consuming. Intensive care platforms require timely processing of data retrievals to guarantee the continuous display of recent patient data, on which physicians and nurses rely for their decision making. Manual optimization of query executions has become difficult to handle due to the increased number of queries across multiple sources. Hence, more automated management is necessary to increase the performance of database queries. The autonomic computing paradigm promises an approach in which the system adapts itself and acts as a self-managing entity, taking action while limiting the need for human intervention. Despite the use of autonomic control loops in network and software systems, this approach has not so far been applied to health information systems.

    Methods: We extended the COSARA architecture, an infection surveillance and antibiotic management service platform for the intensive care unit (ICU), with self-managed components to increase the performance of data retrievals. We used real-life ICU COSARA queries to analyse slow performance and measure the impact of optimizations; more than 2 million COSARA queries are executed each day. Three control loops, which monitor the executions and take action, were proposed: reactive, deliberative, and reflective. We focused on improving the execution time of microbiology queries directly related to the visual displays of patients' data on the bedside screens.

    Results: The results show that autonomic control loops are beneficial for optimizing data executions in the ICU. Applying the reactive control loop reduced the average execution time of microbiology queries by 8.61%; combining the reactive and deliberative control loops reduced the average query time by 10.92%; and combining the reactive, deliberative, and reflective control loops achieved a reduction of 13.04%.

    Conclusions: We found that controlled reduction of query executions improves performance for the end user. The implementation of autonomic control loops in an existing health platform, COSARA, has a positive effect on timely data visualization for physicians and nurses.
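    As a rough illustration of the reactive loop described above, the sketch below monitors query execution times and takes an immediate action when a query exceeds a latency budget. COSARA's internals are not public, so db_execute, schedule_deferred_refresh, and the threshold are hypothetical stand-ins.

```python
# Sketch: a reactive control loop over query executions (monitor -> act).
import time
from collections import deque

SLOW_THRESHOLD_MS = 500          # assumed latency budget
window = deque(maxlen=100)       # recent timings; a deliberative loop
                                 # could analyse this window periodically

def db_execute(sql):
    """Stub standing in for the real data source."""
    time.sleep(0.01)
    return []

def schedule_deferred_refresh(sql):
    """Stub action: queue the query for a later, off-peak refresh."""
    print(f"deferring refresh of: {sql}")

def execute_query(sql):
    """Monitor phase: run the query and record its execution time."""
    start = time.perf_counter()
    rows = db_execute(sql)
    elapsed_ms = (time.perf_counter() - start) * 1000
    window.append((sql, elapsed_ms))
    react(sql, elapsed_ms)
    return rows

def react(sql, elapsed_ms):
    """Reactive phase: act immediately when a query is slow."""
    if elapsed_ms > SLOW_THRESHOLD_MS:
        # e.g. serve the bedside display from cache and defer the refresh
        schedule_deferred_refresh(sql)

execute_query("SELECT * FROM microbiology_results WHERE patient_id = 42")
```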

    The obesity paradox in critically ill patients: a causal learning approach to a casual finding

    Background: While obesity confers an increased risk of death in the general population, numerous studies have reported an association between obesity and improved survival among critically ill patients. This contrary finding has been referred to as the obesity paradox. In this retrospective study, two causal inference approaches were used to address whether the survival of non-obese critically ill patients would have been improved had they been obese.

    Methods: The study cohort comprised 6557 adult critically ill patients hospitalized at the intensive care unit of the Ghent University Hospital between 2015 and 2017. Obesity was defined as a body mass index ≥ 30 kg/m². Two causal inference approaches were used to estimate the average effect of obesity in the non-obese (AON): a traditional approach that used regression adjustment for confounding and assumed data were missing completely at random, and a robust approach that used machine learning within the targeted maximum likelihood estimation framework along with multiple imputation of missing values under the assumption of missingness at random. 1754 (26.8%) patients were discarded in the traditional approach because of at least one missing value for obesity status or confounders.

    Results: Obesity was present in 18.9% of patients. In-hospital mortality was 14.6% in non-obese patients and 13.5% in obese patients. The raw marginal risk difference for in-hospital mortality between obese and non-obese patients was −1.06% (95% confidence interval (CI) −3.23 to 1.11%, P = 0.337). The traditional approach resulted in an AON of −2.48% (95% CI −4.80 to −0.15%, P = 0.037), whereas the robust approach yielded an AON of −0.59% (95% CI −2.77 to 1.60%, P = 0.599).

    Conclusions: A causal inference approach that is robust to residual confounding bias due to model misspecification, and to selection bias due to missing (at random) data, mitigates the obesity paradox observed in critically ill patients, whereas a traditional approach produces even more paradoxical findings. The robust approach provides no evidence that the survival of non-obese critically ill patients would have been improved had they been obese.
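    The "traditional" approach can be illustrated with a short g-computation sketch: fit an outcome model, then contrast each non-obese patient's predicted risk under both exposure levels. The simulated data and confounder names (age, apache) are placeholders, and the robust TMLE-plus-multiple-imputation analysis is not reproduced here.

```python
# Sketch: regression adjustment (g-computation) for the AON, i.e. the
# average effect of obesity among the non-obese, on simulated data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 6557
df = pd.DataFrame({
    "age": rng.normal(60, 15, n),        # hypothetical confounder
    "apache": rng.normal(20, 8, n),      # hypothetical confounder
})
df["obese"] = rng.binomial(1, 0.19, n)
logit = -2 + 0.02 * df.age + 0.05 * df.apache - 0.1 * df.obese
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Outcome model: in-hospital death given obesity and confounders.
cols = ["obese", "age", "apache"]
model = LogisticRegression(max_iter=1000).fit(df[cols], df.died)

# For each non-obese patient, predict risk under both exposure levels.
non_obese = df[df.obese == 0]
risk_if_obese = model.predict_proba(non_obese.assign(obese=1)[cols])[:, 1]
risk_if_not = model.predict_proba(non_obese[cols])[:, 1]

# AON: mean risk difference in the non-obese subgroup.
aon = (risk_if_obese - risk_if_not).mean()
print(f"AON (risk difference): {aon:+.4f}")
```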

    Urinary chitinase 3-like protein 1 for early diagnosis of acute kidney injury: a prospective cohort study in adult critically ill patients

    Background: Acute kidney injury (AKI) occurs frequently and adversely affects patient and kidney outcomes, especially when its severity increases from stage 1 to stage 2 or 3. Early interventions may counteract such deterioration, but this requires early detection. Our aim was to evaluate whether the novel renal damage biomarker urinary chitinase 3-like protein 1 (UCHI3L1) can detect AKI stage ≥ 2 earlier than serum creatinine and urine output, using the respective Kidney Disease: Improving Global Outcomes (KDIGO) criteria for the definition and classification of AKI, and to compare it with urinary neutrophil gelatinase-associated lipocalin (UNGAL).

    Methods: This was a translational single-center, prospective cohort study at the 22-bed surgical and 14-bed medical intensive care units (ICUs) of Ghent University Hospital. We enrolled 181 severely ill adult patients who did not yet have AKI stage ≥ 2 based on the KDIGO criteria at the time of enrollment. The concentration of creatinine (serum, urine) and CHI3L1 (serum, urine) was measured at least daily, and urine output hourly, from enrollment until ICU discharge, for a maximum of 7 ICU days. The concentration of UNGAL was measured at enrollment. The primary endpoint was the development of AKI stage ≥ 2 within 12 h after enrollment.

    Results: After enrollment, 21 (12%) patients developed AKI stage ≥ 2 within the next 7 days, with 6 (3%) of them reaching this condition within the first 12 h. The enrollment concentration of UCHI3L1 predicted the occurrence of AKI stage ≥ 2 within the next 12 h with a good AUC-ROC of 0.792 (95% CI: 0.726-0.849). This performance was similar to that of UNGAL (AUC-ROC of 0.748; 95% CI: 0.678-0.810). In addition, samples collected in the 24-h time frame preceding diagnosis of the first episode of AKI stage ≥ 2 had a 2.0 times higher (95% CI: 1.3-3.1) estimated marginal mean of UCHI3L1 than controls. We further found that increasing UCHI3L1 concentrations were associated with increasing AKI severity.

    Conclusions: In this pilot study, UCHI3L1 was a good biomarker for the prediction of AKI stage ≥ 2 in adult ICU patients.
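    For illustration, the sketch below computes the discrimination metric reported above, an AUROC with a bootstrap 95% CI, for a biomarker against a binary endpoint. The simulated UCHI3L1 values and event rate are placeholders, not study data.

```python
# Sketch: AUROC of a biomarker with a percentile-bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 181
aki = rng.binomial(1, 6 / 181, n)                   # ~3% early events
uchi3l1 = rng.lognormal(mean=aki * 0.8, sigma=1.0)  # higher if AKI

auc = roc_auc_score(aki, uchi3l1)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample patients with replacement
    if aki[idx].min() == aki[idx].max():  # skip resamples with one class only
        continue
    boot.append(roc_auc_score(aki[idx], uchi3l1[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUROC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```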