Artificial intelligence for predicting survival following deceased donor liver transplantation: Retrospective multi-center study
Authors
Bong-Wan Kim
Dong-Sik Kim
Jae-Geun Lee
Kwang-Sig Lee
Kwang-Woong Lee
Jong Man Kim
Je Ho Ryu
Young-Dong Yu
+6 more
Publication date
1 September 2022
Publisher
Turkish Surgical Association
Abstract
© 2022 IJS Publishing Group Ltd

Background: Previous studies have indicated that the model for end-stage liver disease (MELD) score may fail to predict post-transplantation patient survival. Similarly, other scores developed to predict transplant outcomes (the donor MELD score and the balance of risk score) have not gained widespread use. These scores are typically derived using linear statistical models. This study aimed to compare the performance of traditional statistical models with machine learning approaches for predicting survival following liver transplantation.

Materials and methods: Data were obtained from 785 deceased donor liver transplant recipients enrolled in the Korean Organ Transplant Registry (2014–2019). Five machine learning methods (random forest, artificial neural networks, decision tree, naïve Bayes, and support vector machine) and four traditional statistical models (Cox regression, MELD score, donor MELD score, and balance of risk score) were compared for predicting survival.

Results: Among the machine learning methods, the random forest yielded the highest area under the receiver operating characteristic curve (AUC-ROC) values for predicting survival (1-month = 0.80; 3-month = 0.85; 12-month = 0.81). The AUC-ROC values of the Cox regression analysis were 0.75, 0.86, and 0.77 for 1-month, 3-month, and 12-month post-transplant survival, respectively, whereas the AUC-ROC values of the MELD, donor MELD, and balance of risk scores were all below 0.70. Based on variable importance in the random forest analysis, the major predictors of survival were cold ischemia time, donor ICU stay, recipient weight, recipient BMI, recipient age, recipient INR, and recipient albumin level. Consistent with the Cox regression analysis, donor ICU stay, donor bilirubin level, balance of risk (BAR) score, and recipient albumin level were also important factors associated with post-transplant survival in the random forest model. The coefficients of these variables were also statistically significant in the Cox model (p < 0.05). The SHAP ranges of selected predictors for 12-month survival were (−0.02, 0.10) for recipient albumin, (−0.05, 0.07) for donor bilirubin, and (−0.02, 0.25) for recipient height. Surprisingly, although not statistically significant in the Cox model, recipient weight, recipient BMI, recipient age, and recipient INR were important factors in our random forest model for predicting post-transplantation survival.

Conclusion: Machine learning algorithms such as the random forest were superior to conventional Cox regression and previously reported survival scores for predicting 1-month, 3-month, and 12-month survival following liver transplantation. Artificial intelligence may therefore have significant potential in aiding clinical decision-making during liver transplantation, including matching donors and recipients.
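The model comparison in the abstract rests on the AUC-ROC metric. As a minimal, self-contained sketch (the labels and scores below are made-up illustrative values, not the study's data), AUC-ROC can be computed directly as the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case:

```python
def auc_roc(labels, scores):
    """AUC-ROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive is scored higher
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: hypothetical predicted 12-month mortality risks
# against observed outcomes (1 = died, 0 = survived).
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.3, 0.9]
print(round(auc_roc(y_true, y_score), 3))  # → 0.938
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is why scores below 0.70 (MELD, donor MELD, BAR) compare unfavorably with the random forest's 0.80–0.85.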
Available Versions
SNU Open Repository and Archive
oai:s-space.snu.ac.kr:10371/18...
Last updated on 29/10/2022