
    A predictive model for kidney transplant graft survival using machine learning

    Kidney transplantation is the best treatment for patients with end-stage renal failure. The predominant method for assessing kidney quality is the Cox regression-based kidney donor risk index. Machine learning methods may provide improved prediction of transplant outcomes and help decision-making. A popular tree-based machine learning method, the random forest, was trained and evaluated on the same data originally used to develop the risk index (70,242 observations from 1995-2005). At an equal type II error rate of 10%, the random forest successfully predicted 2,148 more transplants than the risk index. Predictions were analyzed against follow-up survival outcomes up to 240 months after transplant using Kaplan-Meier analysis, which confirmed that the random forest performed significantly better than the risk index (p < 0.05), predicting significantly more successful and longer-surviving transplants. Random forests and other machine learning models may improve transplant decisions.

    Comment: This work has been published: Pahl ES, Street WN, Johnson HJ, Reed AI. "A Predictive Model for Kidney Transplant Graft Survival Using Machine Learning." 4th International Conference on Computer Science and Information Technology (COMIT 2020), November 28-29, 2020, Dubai, UAE. ISBN: 978-1-925953-30-5. Volume 10, Number 16. DOI: 10.5121/csit.2020.10160
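    As a rough illustration of the approach described above, the sketch below trains a random forest on donor/recipient features, splits held-out transplants by predicted risk, and compares the two groups with Kaplan-Meier fits and a log-rank test. The feature names and synthetic data are illustrative assumptions, not the study's actual variables.

```python
# Hedged sketch: random forest risk prediction followed by Kaplan-Meier comparison
# of predicted-good vs predicted-poor groups. All data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(45, 15, n),    # donor_age (hypothetical feature)
    rng.normal(50, 12, n),    # recipient_age
    rng.normal(1.0, 0.4, n),  # donor_creatinine
])
# Synthetic outcome: older donors / higher creatinine -> more graft failures
p_fail = 1 / (1 + np.exp(-(0.03 * (X[:, 0] - 45) + 0.8 * (X[:, 2] - 1.0))))
graft_failed = rng.random(n) < p_fail
follow_up_months = rng.exponential(120, n).clip(1, 240)

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
    X, graft_failed, follow_up_months, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

# Split the held-out set by predicted risk and compare survival experience
poor = rf.predict_proba(X_te)[:, 1] > 0.5
kmf_good, kmf_poor = KaplanMeierFitter(), KaplanMeierFitter()
kmf_good.fit(t_te[~poor], event_observed=y_te[~poor], label="predicted good")
kmf_poor.fit(t_te[poor], event_observed=y_te[poor], label="predicted poor")
# kmf_good.plot(); kmf_poor.plot()  # curves would typically be plotted
res = logrank_test(t_te[poor], t_te[~poor],
                   event_observed_A=y_te[poor],
                   event_observed_B=y_te[~poor])
print(f"log-rank p-value: {res.p_value:.4f}")
```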

    Machine learning techniques for arrhythmic risk stratification: a review of the literature

    Ventricular arrhythmias (VAs) and sudden cardiac death (SCD) are significant adverse events that affect the morbidity and mortality of both the general population and patients with predisposing cardiovascular risk factors. Currently, conventional disease-specific scores are used for risk stratification. However, these risk scores have several limitations, including variation among validation cohorts, the inclusion of a limited number of predictors while omitting important variables, and hidden relationships between predictors. Machine learning (ML) techniques are based on algorithms that describe intervariable relationships. Recent studies have implemented ML techniques to construct models for the prediction of fatal VAs. However, the application of ML study findings is limited by the absence of established frameworks for their implementation, in addition to clinicians' unfamiliarity with ML techniques. This review therefore aims to provide an accessible and easy-to-understand summary of the existing evidence on the use of ML techniques in the prediction of VAs. Our findings suggest that ML algorithms improve arrhythmic prediction performance in different clinical settings. However, prospective studies comparing ML algorithms to conventional risk models are needed, and a regulatory framework is required before implementation in clinical practice.
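    To make the review's central point concrete, here is a minimal, hedged sketch: a conventional additive score cannot capture an interaction between two predictors, while a tree-based ML model can. The predictors, thresholds, and synthetic interaction are illustrative assumptions, not any published risk score.

```python
# Hedged sketch: additive point score vs ML model on data with an interaction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 4000
ef = rng.normal(35, 10, n)    # ejection fraction (hypothetical predictor)
qrs = rng.normal(110, 25, n)  # QRS duration (hypothetical predictor)
# Synthetic ground truth: risk rises sharply only when low EF and wide QRS co-occur
logit = -3 + 2.0 * ((ef < 30) & (qrs > 120))
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Conventional score": one point per abnormal predictor, summed additively
score = (ef < 30).astype(int) + (qrs > 120).astype(int)

X = np.column_stack([ef, qrs])
ml = GradientBoostingClassifier(random_state=1).fit(X[:3000], y[:3000])

print("additive score AUROC:", round(roc_auc_score(y[3000:], score[3000:]), 3))
print("ML model AUROC:", round(roc_auc_score(y[3000:], ml.predict_proba(X[3000:])[:, 1]), 3))
```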

    The current and future role of artificial intelligence in optimizing donor organ utilization and recipient outcomes in heart transplantation

    Heart failure (HF) is a leading cause of morbidity and mortality in the United States. While medical management and mechanical circulatory support have advanced significantly in recent years, orthotopic heart transplantation (OHT) remains the most definitive therapy for refractory HF. OHT has seen steady improvement in patient survival and quality of life (QoL) since its inception, with one-year mortality now under 8%. However, a significant number of HF patients are unable to receive OHT due to the scarcity of donor hearts. The United Network for Organ Sharing has recently revised its organ allocation criteria in an effort to provide more equitable access to OHT. Despite these changes, many potential donor hearts are still rejected. Arbitrary regulations from the Centers for Medicare and Medicaid Services, and fear of repercussions if one-year survival falls below established benchmarks, have led to excessive risk aversion in deciding which organs are accepted for OHT. Furthermore, non-standardized utilization of extended-criteria donors and donation after circulatory death exacerbates the organ shortage. Data-driven systems can improve donor-recipient matching, better predict patient QoL post-OHT, and decrease needless organ waste through more uniform application of acceptance criteria. We therefore propose a data-driven future for OHT and a move to patient-centric and holistic transplantation care processes.

    CODUSA - Customize Optimal Donor Using Simulated Annealing In Heart Transplantation.

    In heart transplantation, selection of an optimal recipient-donor match has been constrained by the lack of individualized prediction models. Here we developed a customized donor-matching model (CODUSA) for patients requiring heart transplantation by combining simulated annealing and artificial neural networks. Using this approach to analyze 59,698 adult heart transplant patients, we found that donor age matching was the variable most strongly associated with long-term survival. Female hearts were given to 21% of the women and 0% of the men, and recipients with blood group B received an identically matched blood group in only 18% of best-case matches, compared with 73% of original matches. By optimizing the donor profile, survival could be improved by 33 months. These findings strongly suggest that the CODUSA model can improve the ability to select an optimal match and avoid a worst-case match in the clinical setting. This is an important step towards personalized medicine.
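    A minimal sketch of the general idea, under stated assumptions: a small neural network is trained to predict survival from recipient and donor features, and simulated annealing then searches the donor feature space for the profile that maximizes predicted survival for a fixed recipient. The model, the single donor variable, and the annealing schedule are stand-ins, not the published CODUSA implementation.

```python
# Hedged sketch: simulated annealing over donor features against an ANN predictor.
import math
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 3000
recip_age = rng.uniform(18, 70, n)
donor_age = rng.uniform(18, 65, n)
# Synthetic training signal: survival shrinks with donor age and with age mismatch
surv_months = (180 - 1.2 * donor_age - 0.8 * np.abs(donor_age - recip_age)
               + rng.normal(0, 10, n))

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=2)
ann.fit(np.column_stack([recip_age, donor_age]), surv_months)

def anneal_donor_age(recipient_age, steps=500, temp0=10.0):
    """Simulated annealing over a single donor variable (donor age)."""
    x = 40.0  # initial donor age (arbitrary starting point)
    cur = best = ann.predict([[recipient_age, x]])[0]
    best_x = x
    for i in range(steps):
        temp = temp0 * (1 - i / steps) + 1e-6  # linear cooling schedule
        cand = float(np.clip(x + rng.normal(0, 3), 18, 65))
        val = ann.predict([[recipient_age, cand]])[0]
        # Always accept improvements; accept worse moves with Boltzmann probability
        if val > cur or rng.random() < math.exp((val - cur) / temp):
            x, cur = cand, val
            if val > best:
                best_x, best = cand, val
    return best_x, best

age, months = anneal_donor_age(55.0)
print(f"best donor age ~{age:.1f}, predicted survival ~{months:.0f} months")
```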

    Using Machine Learning to Improve Personalised Prediction: A Data-Driven Approach to Segment and Stratify Populations for Healthcare

    Population Health Management typically relies on subjective decisions to segment and stratify populations. This study combines unsupervised clustering for segmentation with supervised classification, personalised to clusters, for stratification. An increase in cluster homogeneity, sensitivity, and positive predictive value was observed compared with an unlinked approach. This analysis demonstrates the potential of a cluster-then-predict methodology to improve and personalise decisions in healthcare systems.
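    A minimal sketch of the cluster-then-predict approach, assuming k-means for segmentation and per-cluster logistic regression for stratification; the study's actual models and features are not specified here, and the data below are synthetic.

```python
# Hedged sketch: segment with unsupervised clustering, then fit one supervised
# classifier per segment and stratify new individuals via their segment's model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 6000
X = rng.normal(size=(n, 2))
# Synthetic outcome with a circular boundary, so every k-means segment
# contains both classes and a per-segment linear model is a sensible fit
y = (X[:, 0] ** 2 + X[:, 1] ** 2 + rng.normal(0, 0.5, n)) > 1.4

# Step 1: unsupervised segmentation of the population
kmeans = KMeans(n_clusters=4, n_init=10, random_state=3).fit(X)
segments = kmeans.labels_

# Step 2: a supervised classifier per segment ("personalised to clusters")
models = {c: LogisticRegression().fit(X[segments == c], y[segments == c])
          for c in range(4)}

# Stratify a new individual: assign a segment, then apply that segment's model
x_new = np.array([[0.5, -1.2]])
c = int(kmeans.predict(x_new)[0])
print(f"segment {c}, predicted risk {models[c].predict_proba(x_new)[0, 1]:.2f}")
```

    One appeal of this design is that simple per-segment models can jointly approximate a decision boundary that no single linear model fits well.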

    Artificial Intelligence and Liver Transplant: Predicting Survival of Individual Grafts

    The demand for liver transplantation far outstrips the supply of deceased donor organs, so listing and allocation decisions aim to maximize utility. Most existing methods for predicting transplant outcomes use basic approaches such as regression modeling, but newer artificial intelligence (AI) techniques have the potential to improve predictive accuracy. The aim was to perform a systematic review of studies predicting graft outcomes following deceased donor liver transplantation using AI techniques, and to compare these findings with linear regression and standard predictive models: the donor risk index (DRI), the Model for End-Stage Liver Disease (MELD), and Survival Outcomes Following Liver Transplantation (SOFT). After reviewing available article databases, a total of 52 articles were assessed for inclusion. Of these, 9 met the inclusion criteria, reporting outcomes from 18,771 liver transplants. Artificial neural networks (ANNs) were the most commonly used methodology, reported in 7 studies. Only 2 studies directly compared machine learning (ML) techniques with liver scoring modalities (DRI, SOFT, and balance of risk [BAR]). Both studies showed better prediction of individual organ survival with the optimal ANN model: one reported an area under the receiver operating characteristic curve (AUROC) of 0.82, compared with BAR (0.62) and SOFT (0.57); the other reported an AUROC of 0.84, compared with DRI (0.68) and SOFT (0.64). AI techniques can provide high accuracy in predicting graft survival from donor and recipient variables. Compared with standard techniques, AI methods are dynamic and can be retrained and validated within each population. However, the high accuracy of AI may come at the cost of explainability to patients and clinicians on how the technology works.
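    As an illustration of the comparison the review describes, the sketch below scores a small artificial neural network against a regression-style risk index by AUROC on the same held-out transplants. The features and data are synthetic assumptions; DRI, SOFT, and BAR are published scores not reproduced here.

```python
# Hedged sketch: ANN vs linear "risk index" baseline, both evaluated by AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 6))  # hypothetical donor and recipient variables
# Synthetic failure risk with a nonlinearity the linear index will miss
logit = X[:, 0] + 0.8 * X[:, 1] * X[:, 2] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)

index = LogisticRegression().fit(X_tr, y_tr)        # linear "risk index" stand-in
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=4).fit(X_tr, y_tr)  # small ANN

for name, model in [("linear index", index), ("ANN", ann)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.2f}")
```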