
    Use of Machine Learning Consensus Clustering to Identify Distinct Subtypes of Black Kidney Transplant Recipients and Associated Outcomes

    Importance: Among kidney transplant recipients, Black patients continue to have worse graft function and reduced patient and graft survival. Better understanding of different phenotypes and subgroups of Black kidney transplant recipients may help the transplant community to identify individualized strategies to improve outcomes among these vulnerable groups. Objective: To cluster Black kidney transplant recipients in the US using an unsupervised machine learning approach. Design, Setting, and Participants: This cohort study performed consensus cluster analysis based on recipient-, donor-, and transplant-related characteristics in Black kidney transplant recipients in the US from January 1, 2015, to December 31, 2019, in the Organ Procurement and Transplantation Network/United Network for Organ Sharing database. Each cluster's key characteristics were identified using the standardized mean difference, and subsequently the posttransplant outcomes were compared among the clusters. Data were analyzed from June 9 to July 17, 2021. Exposure: Machine learning consensus clustering approach. Main Outcomes and Measures: Death-censored graft failure, patient death within 3 years after kidney transplant, and allograft rejection within 1 year after kidney transplant. Results: Consensus cluster analysis was performed for 22 687 Black kidney transplant recipients (mean [SD] age, 51.4 [12.6] years; 13 635 men [60%]), and 4 distinct clusters that best represented their clinical characteristics were identified. Cluster 1 was characterized by highly sensitized recipients of deceased donor kidney retransplants; cluster 2, by recipients of living donor kidney transplants with no or short prior dialysis; cluster 3, by young recipients with hypertension and without diabetes who received young deceased donor transplants with low kidney donor profile index scores; and cluster 4, by older recipients with diabetes who received kidneys from older donors with high kidney donor profile index scores and extended criteria donors. Cluster 2 had the most favorable outcomes in terms of death-censored graft failure, patient death, and allograft rejection. Compared with cluster 2, all other clusters had a higher risk of death-censored graft failure and death. Higher risk for rejection was found in clusters 1 and 3, but not cluster 4. Conclusions and Relevance: In this cohort study using an unsupervised machine learning approach, the identification of clinically distinct clusters among Black kidney transplant recipients underscores the need for individualized care strategies to improve outcomes among vulnerable patient groups.
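    As a rough, hedged illustration of the consensus clustering workflow described in this abstract (not the authors' code), the sketch below uses scikit-learn's KMeans on synthetic data to build a consensus matrix from repeated subsampled clusterings and then profiles each cluster with the standardized mean difference; the data, cluster count, and resampling settings are all illustrative assumptions.

```python
# Minimal sketch of consensus clustering plus standardized mean difference (SMD)
# profiling, assuming scikit-learn and a preprocessed numeric feature matrix X
# (synthetic here); the study itself used recipient-, donor-, and
# transplant-related characteristics from the OPTN/UNOS registry.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # placeholder for recipient/donor features
n_clusters, n_resamples, frac = 4, 50, 0.8

# Build a consensus matrix: how often two patients co-cluster across resamples.
n = X.shape[0]
hits = np.zeros((n, n))
both = np.zeros((n, n))
for _ in range(n_resamples):
    idx = rng.choice(n, size=int(frac * n), replace=False)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[idx])
    same = (labels[:, None] == labels[None, :]).astype(float)
    both[np.ix_(idx, idx)] += 1.0
    hits[np.ix_(idx, idx)] += same
consensus = np.divide(hits, both, out=np.zeros_like(hits), where=both > 0)

# Final cluster assignment from the consensus matrix (one common choice is
# k-means on the consensus rows; hierarchical clustering is also used).
final_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(consensus)

# Profile each cluster with the standardized mean difference versus the rest.
for k in range(n_clusters):
    in_k = final_labels == k
    diff = X[in_k].mean(axis=0) - X[~in_k].mean(axis=0)
    pooled_sd = np.sqrt((X[in_k].var(axis=0) + X[~in_k].var(axis=0)) / 2)
    smd = diff / pooled_sd
    print(f"cluster {k}: top features by |SMD| ->", np.argsort(-np.abs(smd))[:3])
```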

    Pharmacological Strategies to Prevent Contrast-Induced Acute Kidney Injury

    Contrast-induced acute kidney injury (CI-AKI) is the most common iatrogenic cause of acute kidney injury following intravenous contrast media administration. In general, the incidence of CI-AKI is low in patients with normal renal function. However, the rate is markedly elevated in patients with preexisting chronic kidney disease, diabetes mellitus, old age, a high volume of contrast agent, congestive heart failure, hypotension, anemia, use of nephrotoxic drugs, or volume depletion. Consequently, CI-AKI, particularly in high-risk patients, contributes to extended hospitalizations and increases long-term morbidity and mortality. The pathogenesis of CI-AKI involves at least three mechanisms: contrast agents induce renal vasoconstriction, increase oxygen free radicals through oxidative stress, and exert direct tubular toxicity. Several strategies to prevent CI-AKI have been evaluated in experimental studies and clinical trials. At present, intravascular volume expansion with either isotonic saline or sodium bicarbonate solutions has provided the most consistent positive results and is recommended for the prevention of CI-AKI. Nevertheless, a proportion of at-risk patients still develop CI-AKI. This review critically evaluates the current evidence for pharmacological strategies to prevent CI-AKI in patients at risk of developing it.

    Progress and Recent Advances in Solid Organ Transplantation

    Over the past decade, the number of organ transplants performed worldwide has significantly increased for patients with advanced organ failure [...]

    Risk Factors and Predictive Model for Mortality of Hospitalized COVID-19 Elderly Patients from a Tertiary Care Hospital in Thailand

    Background: Early detection of elderly patients with COVID-19 who are at high risk of mortality is vital for appropriate clinical decisions. We aimed to evaluate the risk factors associated with all-cause in-hospital mortality among elderly patients with COVID-19. Methods: In this retrospective study, the medical records of elderly patients aged over 60 years who were hospitalized with COVID-19 at Thammasat University Hospital from 1 July to 30 September 2021 were reviewed. Multivariate logistic regression was used to identify independent predictors of mortality. The sum of weighted integers was used as the total risk score for each patient. Results: In total, 138 medical records were reviewed. Four variables identified on the basis of their odds ratios (age, respiratory rate, glomerular filtration rate, and history of stroke) were each assigned a weighted integer and used to predict mortality risk in hospitalized elderly patients. The AUROC of the scoring system was 0.9415 (95% confidence interval, 0.9033–0.9716). The optimized scoring system was developed, and a risk score over 213 was considered the cut-off point for high mortality risk. Conclusions: A simple predictive risk score provides an initial assessment of mortality risk at the time of admission with a high degree of accuracy among hospitalized elderly patients with COVID-19.
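    A minimal sketch, assuming scikit-learn and synthetic data (not the study's actual cohort, coefficients, or cut-off), of how an integer-weighted risk score can be derived from a multivariable logistic regression and checked for discrimination with AUROC, in the spirit of the scoring system described above:

```python
# Build an integer-weighted risk score from logistic regression coefficients and
# evaluate it with AUROC. Variable names (age, rr, gfr, stroke) mirror the four
# predictors in the abstract, but the data and weights here are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(60, 95, 500),
    "rr": rng.integers(14, 32, 500),       # respiratory rate
    "gfr": rng.integers(10, 90, 500),      # glomerular filtration rate
    "stroke": rng.integers(0, 2, 500),     # history of stroke
})
logit = 0.08 * df.age + 0.15 * df.rr - 0.04 * df.gfr + 1.2 * df.stroke - 8
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(df, y)

# One common scheme: scale each coefficient by the smallest absolute coefficient
# and round to the nearest integer to obtain per-unit points.
weights = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
score = df.to_numpy() @ weights

print("integer weights:", dict(zip(df.columns, weights)))
print("AUROC of the point score:", round(roc_auc_score(y, score), 3))
# A cut-off on the summed score (e.g., via the Youden index) can then be chosen
# to flag high mortality risk, analogous to the threshold reported above.
```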

    Recent Advances in Understanding of Cardiovascular Diseases in Patients with Chronic Kidney Disease

    Chronic kidney disease (CKD) is a major public health problem, affecting between 8% and 16% of the population worldwide [...]

    The Association of Beta-Blocker Use and Bone Mineral Density Level in Hemodialysis Patients: A Cross-Sectional Study

    Background and Objectives: Osteoporosis increases morbidity and mortality in hemodialysis patients, and treatment options are limited. There is evidence that beta-blockers can increase bone mineral density (BMD) and reduce fracture risk in non-dialysis patients; however, no study has been conducted in hemodialysis patients. This study aimed to determine the association between beta-blocker use and BMD level in hemodialysis patients. Materials and Methods: We conducted a cross-sectional study of hemodialysis patients at Thammasat University Hospital from January 2018 to December 2020. A patient receiving a beta-blocker for ≥ 20 weeks was defined as a beta-blocker user. The association between beta-blocker use and BMD levels was determined by univariate and multivariate linear regression analysis. Results: Of the 128 patients receiving hemodialysis, 71 were beta-blocker users and 57 were non-users (control group). The incidence of osteoporosis in hemodialysis patients was 50%. There was no significant difference in median BMD between the control and beta-blocker groups at the lumbar spine (0.93 vs. 0.91, p = 0.88), femoral neck (0.59 vs. 0.57, p = 0.21), total hip (0.73 vs. 0.70, p = 0.38), or 1/3 radius (0.68 vs. 0.64, p = 0.40). Univariate and multivariate linear regression analyses showed that beta-blocker use was not associated with BMD. In the subgroup analysis, beta-1 selective blocker use was associated with lower BMD at the femoral neck but not at the total spine, total hip, or 1/3 radius. Multivariate logistic regression showed that age ≥ 65 years (aOR 3.31 (1.25–8.80), p = 0.02), female sex (aOR 4.13 (1.68–10.14), p = 0.002), lower BMI (aOR 0.89 (0.81–0.98), p = 0.02), and ALP > 120 U/L (aOR 3.88 (1.33–11.32), p = 0.01) were independently associated with osteoporosis in hemodialysis patients. Conclusions: In hemodialysis patients, beta-blocker use was not associated with BMD levels; however, beta-1 selective blocker use was associated with lower BMD at the femoral neck.
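    As a hedged sketch of the regression approach described above (synthetic data, an illustrative and deliberately reduced covariate set, assuming statsmodels is available):

```python
# Univariate and multivariable linear regression of BMD on beta-blocker use,
# using synthetic data; the covariates here (age, sex, BMI) are illustrative,
# not the study's full adjustment set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 128
df = pd.DataFrame({
    "bmd_femoral_neck": rng.normal(0.6, 0.1, n),
    "beta_blocker": rng.integers(0, 2, n),
    "age": rng.integers(30, 85, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(23, 4, n),
})

# Univariate model: BMD ~ beta-blocker use only.
uni = smf.ols("bmd_femoral_neck ~ beta_blocker", data=df).fit()
# Multivariable model: adjust for candidate confounders.
multi = smf.ols("bmd_femoral_neck ~ beta_blocker + age + female + bmi", data=df).fit()

print("univariate coef, p:", uni.params["beta_blocker"], uni.pvalues["beta_blocker"])
print("adjusted coef, p:  ", multi.params["beta_blocker"], multi.pvalues["beta_blocker"])
```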

    Predictors for Unsuccessful Reductions in Hemodialysis Frequency during the Pandemic

    Background and Objectives: Patients receiving in-center hemodialysis are at high risk of coronavirus disease 2019 (COVID-19) infection. Reducing hemodialysis frequency is one of the proposed measures for preventing COVID-19 infection; however, predictors of an unsuccessful reduction in hemodialysis frequency are still lacking. Materials and Methods: This retrospective observational study enrolled patients who were receiving long-term thrice-weekly hemodialysis at Thammasat University Hospital in 2021 and who decreased their dialysis frequency to twice weekly during the COVID-19 outbreak. The outcomes were the predictors of, and a prediction model for, an unsuccessful reduction in dialysis frequency at 4 weeks. Bootstrapping was performed for internal validation. Results: Of the 161 patients, 83 achieved a reduction in dialysis frequency; 33% and 82% of the patients failed to reduce their dialysis frequency at 4 and 8 weeks, respectively. The predictors of an unsuccessful reduction were diabetes, congestive heart failure (CHF), pre-dialysis overhydration, set dry weight (DW), DW from bioelectrical impedance analysis, and the mean pre- and post-dialysis body weight. The final model including these predictors demonstrated an AUROC of 0.763 (95% CI 0.654–0.866) for the prediction of an unsuccessful reduction. Conclusions: The prediction score involving diabetes, CHF, pre-dialysis overhydration, DW difference, and net ultrafiltration demonstrated good performance in predicting an unsuccessful reduction in hemodialysis frequency at 4 weeks.
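    A minimal sketch, assuming scikit-learn and synthetic data, of bootstrap internal validation of a prediction model's AUROC; this simple percentile bootstrap is only an approximation of the internal validation approach an abstract like this may describe, and the predictors are placeholders:

```python
# Bootstrapped internal validation of a logistic prediction model's AUROC.
# X stands in for the clinical predictors (diabetes, CHF, overhydration, etc.).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(161, 6))              # placeholder for the six predictors
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] - 0.5 * X[:, 1])))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap: refit on resampled data, evaluate on the resample, and collect
# AUROCs to obtain a percentile confidence interval.
boot_aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:
        continue                           # skip degenerate resamples
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_aucs.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"apparent AUROC {apparent_auc:.3f}, bootstrap 95% CI {lo:.3f}-{hi:.3f}")
```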

    Feature Importance of Acute Rejection among Black Kidney Transplant Recipients by Utilizing Random Forest Analysis: An Analysis of the UNOS Database

    Background: Black kidney transplant recipients have worse allograft outcomes than White recipients. The feature importance and feature interaction network analysis framework of machine learning random forest (RF) analysis may provide an understanding of RF structures that can be used to design strategies to prevent acute rejection among Black recipients. Methods: We conducted tree-based RF feature importance analysis of Black kidney transplant recipients in the United States from 2015 to 2019 in the UNOS database, using the number of nodes, accuracy decrease, Gini decrease, times_a_root, p value, and mean minimal depth. Feature interaction analysis was also performed to evaluate which variable pairs, correlated and uncorrelated, occurred together most frequently in the RF classification runs. Results: A total of 22,687 Black kidney transplant recipients were eligible for analysis. Of these, 1330 (6%) had acute rejection within 1 year after kidney transplant. Important variables in the RF models for acute rejection among Black kidney transplant recipients included recipient age, ESKD etiology, PRA, cold ischemia time, donor age, HLA DR mismatch, BMI, serum albumin, degree of HLA mismatch, education level, and dialysis duration. The three most frequent interactions each consisted of two numerical variables: recipient age:donor age, recipient age:serum albumin, and recipient age:BMI. Conclusions: The application of the tree-based RF feature importance and feature interaction network analysis framework identified recipient age, ESKD etiology, PRA, cold ischemia time, donor age, HLA DR mismatch, BMI, serum albumin, degree of HLA mismatch, education level, and dialysis duration as important variables in the RF models for acute rejection among Black kidney transplant recipients in the United States.
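    The importance metrics named above (times_a_root, mean minimal depth, etc.) come from a random-forest explainer framework; the hedged Python sketch below shows the closely related Gini (impurity) decrease and permutation importance rankings available in scikit-learn, on synthetic data with illustrative feature names:

```python
# Tree-based feature importance for an acute-rejection-style binary outcome.
# The data are synthetic and the feature names merely echo the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=11, n_informative=5,
                           weights=[0.94, 0.06], random_state=0)
feature_names = ["recipient_age", "eskd_etiology", "pra", "cold_ischemia_time",
                 "donor_age", "hla_dr_mismatch", "bmi", "serum_albumin",
                 "hla_mismatch", "education", "dialysis_duration"]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Mean decrease in Gini impurity across the forest.
gini = sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1])
# Permutation importance as a complementary, less biased ranking.
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)

print("top by Gini decrease:", gini[:5])
print("top by permutation importance:",
      sorted(zip(feature_names, perm.importances_mean), key=lambda t: -t[1])[:5])
```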

    Impact of Palliative Care Services on Treatment and Resource Utilization for Hepatorenal Syndrome in the United States

    Background: This study aimed to determine the rates of inpatient palliative care service use and to assess the impact of palliative care services on in-hospital treatments and resource utilization in hospital admissions for hepatorenal syndrome. Methods: Using the National Inpatient Sample, hospital admissions with a primary diagnosis of hepatorenal syndrome were identified from 2003 through 2014. The primary outcome of interest was the temporal trend in, and predictors of, inpatient palliative care service use. Logistic and linear regression were performed to assess the impact of inpatient palliative care services on in-hospital treatments and resource use. Results: Of 5571 hospital admissions for hepatorenal syndrome, palliative care services were used in 748 (13.4%). There was an increasing trend in the rate of palliative care service use, from 3.3% in 2003 to 21.1% in 2014 (p < 0.001). Older age, more recent year of hospitalization, acute liver failure, alcoholic cirrhosis, and hepatocellular carcinoma were predictive of increased palliative care service use, whereas race other than Caucasian, African American, or Hispanic, and chronic kidney disease were predictive of decreased use. Although admissions with palliative care service use had higher mortality, palliative care was associated with lower use of invasive mechanical ventilation, blood product transfusion, paracentesis, renal replacement therapy, and vasopressors, but a higher rate of DNR status. Palliative care services were also associated with reduced mean length of hospital stay and hospitalization cost. Conclusion: Although there was a substantial increase in the use of palliative care services in hospitalizations for hepatorenal syndrome, inpatient palliative care remained underutilized. The use of palliative care services was associated with reduced resource use.

    Explainable Preoperative Automated Machine Learning Prediction Model for Cardiac Surgery-Associated Acute Kidney Injury

    Background: We aimed to develop and validate an automated machine learning (autoML) prediction model for cardiac surgery-associated acute kidney injury (CSA-AKI). Methods: Using 69 preoperative variables, we developed several models to predict post-operative AKI in adult patients undergoing cardiac surgery. Models included autoML and non-autoML types, including decision tree (DT), random forest (RF), extreme gradient boosting (XGBoost), and artificial neural network (ANN), as well as a logistic regression prediction model. We then compared model performance using the area under the receiver operating characteristic curve (AUROC) and assessed model calibration using the Brier score on the independent testing dataset. Results: The incidence of CSA-AKI was 36%. The stacked ensemble autoML had the highest predictive performance among the autoML models and was chosen for comparison with the other non-autoML and multivariable logistic regression models. The autoML model had the highest AUROC (0.79), followed by RF (0.78), XGBoost (0.77), multivariable logistic regression (0.77), ANN (0.75), and DT (0.64); its AUROC was comparable to that of RF, and it outperformed the other models. The autoML model was well calibrated: the Brier scores for autoML, RF, DT, XGBoost, ANN, and multivariable logistic regression were 0.18, 0.18, 0.21, 0.19, 0.19, and 0.18, respectively. We applied SHAP and LIME algorithms to our autoML prediction model to extract an explanation of the variables that drive patient-specific predictions of CSA-AKI. Conclusion: We present a preoperative autoML prediction model for CSA-AKI with high predictive performance, comparable to RF and superior to the other ML and multivariable logistic regression models. The proposed explainable preoperative autoML prediction model for CSA-AKI may guide clinicians in advancing individualized medicine plans for patients undergoing cardiac surgery.
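    As a hedged illustration (not the study's autoML pipeline), the sketch below compares two candidate models on a held-out test set by AUROC and Brier score and attaches a SHAP explainer to the tree-based model; the data are synthetic, and the availability of scikit-learn and the shap package is an assumption. The stacked ensemble autoML and LIME components of the study are not reproduced here.

```python
# Compare candidate CSA-AKI-style models by discrimination (AUROC) and
# calibration (Brier score) on a held-out test set, then compute SHAP values
# for patient-specific explanations of the tree-based model.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.64, 0.36],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, m in models.items():
    p = m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(name, "AUROC", round(roc_auc_score(y_te, p), 3),
          "Brier", round(brier_score_loss(y_te, p), 3))

# SHAP values give per-patient contributions of each preoperative variable,
# which is how patient-specific predictions can be explained to clinicians.
explainer = shap.TreeExplainer(models["random_forest"])
shap_values = explainer.shap_values(X_te)
print("SHAP values computed for", len(X_te), "test patients")
```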