
    Artificial Intelligence and Liver Transplant: Predicting Survival of Individual Grafts

    The demand for liver transplantation far outstrips the supply of deceased donor organs, so listing and allocation decisions aim to maximize utility. Most existing methods for predicting transplant outcomes use basic approaches, such as regression modeling, but newer artificial intelligence (AI) techniques have the potential to improve predictive accuracy. The aim was to perform a systematic review of studies predicting graft outcomes following deceased donor liver transplantation using AI techniques and to compare these findings with linear regression and standard predictive models: the donor risk index (DRI), the Model for End-Stage Liver Disease (MELD), and the Survival Outcomes Following Liver Transplantation (SOFT) score. After reviewing available article databases, a total of 52 articles were reviewed for inclusion. Of these, 9 met the inclusion criteria, reporting outcomes from 18,771 liver transplants. Artificial neural networks (ANNs) were the most commonly used methodology, reported in 7 studies. Only 2 studies directly compared machine learning (ML) techniques with liver scoring modalities (i.e., DRI, SOFT, and balance of risk [BAR]). Both studies showed better prediction of individual organ survival with the optimal ANN model: one reported an area under the receiver operating characteristic curve (AUROC) of 0.82 compared with BAR (0.62) and SOFT (0.57), and the other reported an AUROC of 0.84 compared with DRI (0.68) and SOFT (0.64). AI techniques can provide high accuracy in predicting graft survival based on donor and recipient variables. Compared with the standard techniques, AI methods are dynamic and can be trained and validated within each population. However, the high accuracy of AI may come at the cost of losing explainability (to patients and clinicians) of how the technology works.
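
    The AUROC comparison described above is straightforward to reproduce on any labelled transplant dataset. The sketch below is a minimal illustration only, not the reviewed studies' code: it assumes a hypothetical CSV with donor/recipient covariates, a one-year graft-failure label, and a pre-computed DRI column, trains a small neural network, and compares the two AUROCs on a held-out split.

        # Illustrative only: compare an ANN's graft-failure probabilities with an
        # existing risk score (DRI) by AUROC. File and column names are hypothetical.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("transplants.csv")           # hypothetical donor/recipient dataset
        features = ["donor_age", "cold_ischemia_hr", "recipient_meld", "donor_dcd"]
        X, y = df[features], df["graft_failure_1yr"]  # 1 = graft lost within one year

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        # A small feed-forward network, i.e. an "ANN" in the review's terminology.
        ann = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
        ann.fit(X_tr, y_tr)

        auc_ann = roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1])
        auc_dri = roc_auc_score(y_te, df.loc[X_te.index, "dri"])  # pre-computed donor risk index
        print(f"ANN AUROC: {auc_ann:.2f}   DRI AUROC: {auc_dri:.2f}")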

    Long-Term Outcomes of Patients Undergoing Liver Transplantation for Acute-on-Chronic Liver Failure

    AIMS: Recent data have demonstrated a greater than 80% one-year survival probability after liver transplantation (LT) for patients with severe acute-on-chronic liver failure (ACLF). However, long-term outcomes and complications are still unknown for this population. Our aim was to compare long-term patient and graft survival among patients transplanted across all grades of ACLF. METHODS: We analyzed the UNOS database for the years 2004-2017. Patients with ACLF were identified using the EASL-CLIF criteria. Kaplan-Meier and Cox regression methods were used to determine patient and graft survival and associated predictors of mortality in adjusted models. RESULTS: A total of 75,844 patients were transplanted, of which 48,854 (64.4%) had no ACLF, 9,337 (12.3%) had ACLF-1, 9,386 (12.4%) had ACLF-2, and 8,267 (10.9%) had ACLF-3. Patients transplanted without ACLF had a greater proportion of hepatocellular carcinoma within (23.8%) and outside (12.7%) the Milan criteria. Five-year patient survival after LT was lower in ACLF-3 patients than in the other groups (67.7%, p<0.001), although after year 1 the percentage decrease in survival was similar among all groups. Infection was the primary cause of death in all patient groups in the first year. After the first year, infection remained the main cause of death in patients transplanted with ACLF-1 (31.1%), ACLF-2 (33.3%), and ACLF-3 (36.7%), whereas malignancy was the predominant cause of death in those transplanted with no ACLF (38.5%). Graft survival probability at 5 years was above 90% in all patient groups. CONCLUSION: Patients transplanted with ACLF-3 have lower 5-year survival than those with no ACLF or ACLF-1/2, but mortality rates were not significantly different after the first year following LT. Graft survival was excellent across all ACLF groups.
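
    The survival comparisons reported here rest on standard Kaplan-Meier curves and adjusted Cox models. A minimal sketch of that workflow with the lifelines library is shown below; the data frame, column names, and covariate set are hypothetical and are not the UNOS extract or model specification used in the study.

        # Minimal Kaplan-Meier / Cox sketch with lifelines; columns are hypothetical.
        import pandas as pd
        from lifelines import KaplanMeierFitter, CoxPHFitter

        df = pd.read_csv("lt_recipients.csv")  # years_followup, died, aclf_grade, age, meld

        # Kaplan-Meier survival per ACLF grade, reporting 5-year survival
        kmf = KaplanMeierFitter()
        for grade, grp in df.groupby("aclf_grade"):
            kmf.fit(grp["years_followup"], event_observed=grp["died"], label=f"ACLF-{grade}")
            print(f"ACLF-{grade}: 5-year survival {kmf.survival_function_at_times(5.0).iloc[0]:.2f}")

        # Adjusted Cox model for post-transplant mortality
        # (ACLF grade treated as a numeric covariate here purely for brevity)
        cph = CoxPHFitter()
        cph.fit(df[["years_followup", "died", "aclf_grade", "age", "meld"]],
                duration_col="years_followup", event_col="died")
        cph.print_summary()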

    Racial and ethnic disparities in access to liver transplantation

    Access to liver transplantation is reportedly inequitable for racial/ethnic minorities, but inadequate adjustments for geography and disease progression preclude any meaningful conclusions. We aimed to evaluate the association between candidate race/ethnicity and liver transplant rates after thorough adjustments for these factors and to determine how uniform racial/ethnic disparities were across Model for End-Stage Liver Disease (MELD) scores. Chronic end-stage liver disease candidates initially wait-listed between February 28, 2002 and February 27, 2007 were identified from Scientific Registry of Transplant Recipients data. The primary outcome was deceased donor liver transplantation (DDLT); the primary exposure covariate was race/ethnicity (white, African American, Hispanic, Asian, and other). Cox regression was used to estimate covariate-adjusted DDLT rates by race/ethnicity, stratified by donation service area and MELD score. Averaging across all MELD scores, African Americans, Asians, and others had adjusted DDLT rates similar to those of whites. However, Hispanics had an 8% lower DDLT rate than whites [hazard ratio (HR) = 0.92, P = 0.011]. The disparity among Hispanics was concentrated among patients with MELD scores < 20, with HR = 0.84 (P = 0.021) for MELD scores of 6 to 14 and HR = 0.85 (P = 0.009) for MELD scores of 15 to 19. Asians with MELD scores < 15 had a 24% higher DDLT rate than whites (HR = 1.24, P = 0.024). However, Asians with MELD scores of 30 to 40 had a 46% lower DDLT rate (HR = 0.54, P = 0.004). In conclusion, although African Americans did not have significantly different DDLT rates from similar white candidates, race/ethnicity-based disparities were prominent among subgroups of Hispanic and Asian candidates. By precluding the survival benefit of liver transplantation, this inequity may lead to excess mortality for minority candidates. Liver Transpl 16:1033–1040, 2010. © 2010 AASLD.
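
    The covariate-adjusted, stratified transplant rates reported above come from Cox regression in which the transplant itself is the event of interest. The sketch below only illustrates that general kind of model (hypothetical file and column names, not the SRTR data or the authors' exact specification, and stratifying on donation service area only) using lifelines.

        # Sketch of a Cox model for DDLT rates by race/ethnicity, stratified by
        # donation service area (DSA); data layout and names are hypothetical.
        import pandas as pd
        from lifelines import CoxPHFitter

        wl = pd.read_csv("waitlist.csv")  # days_on_list, got_ddlt, race, meld, dsa, ...

        # One-hot encode race/ethnicity with white as the reference group.
        race = pd.get_dummies(wl["race"], prefix="race").drop(columns="race_white").astype(int)
        X = pd.concat([wl[["days_on_list", "got_ddlt", "meld", "dsa"]], race], axis=1)

        cph = CoxPHFitter()
        cph.fit(X, duration_col="days_on_list", event_col="got_ddlt", strata=["dsa"])
        print(cph.hazard_ratios_)  # HR < 1 means a lower adjusted DDLT rate versus whites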

    Geographic Variation in End-Stage Renal Disease Incidence and Access to Deceased Donor Kidney Transplantation

    The effect of demand for kidney transplantation, measured by end-stage renal disease (ESRD) incidence, on access to transplantation is unknown. Using data from the U.S. Census Bureau, Centers for Medicare & Medicaid Services (CMS) and the Organ Procurement and Transplantation Network/Scientific Registry of Transplant Recipients (OPTN/SRTR) from 2000 to 2008, we performed donation service area (DSA) and patient-level regression analyses to assess the effect of ESRD incidence on access to the kidney waiting list and deceased donor kidney transplantation. In DSAs, ESRD incidence increased with greater density of high ESRD incidence racial groups (African Americans and Native Americans). Wait-list and transplant rates were relatively lower in high ESRD incidence DSAs, but wait-list rates were not drastically affected by ESRD incidence at the patient level. Compared with low ESRD areas, high ESRD areas were associated with lower adjusted transplant rates among all ESRD patients (RR 0.68, 95% CI 0.66–0.70). Patients living in medium and high ESRD areas had lower transplant rates from the waiting list compared with those in low ESRD areas (medium: RR 0.68, 95% CI 0.66–0.69; high: RR 0.63, 95% CI 0.61–0.65). Geographic variation in access to kidney transplantation is in part mediated by local ESRD incidence, which has implications for allocation policy development.
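
    The rate ratios (RR) above compare transplant rates across low, medium, and high ESRD-incidence areas. One conventional way to estimate such rate ratios is a Poisson model with an exposure offset; the sketch below is an illustration under stated assumptions (hypothetical file and column names, not the OPTN/SRTR analysis or its adjustment set).

        # Sketch of a rate-ratio model for deceased donor kidney transplants by
        # ESRD-incidence tertile; dataset and columns are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        dsa = pd.read_csv("dsa_years.csv")  # transplants, person_years, esrd_tertile ('low'/'medium'/'high')

        model = smf.glm(
            "transplants ~ C(esrd_tertile, Treatment(reference='low'))",
            data=dsa,
            family=sm.families.Poisson(),
            offset=np.log(dsa["person_years"]),  # wait-list exposure time as the rate denominator
        ).fit(cov_type="HC0")                    # robust standard errors

        print(np.exp(model.params))      # rate ratios versus the low-incidence tertile
        print(np.exp(model.conf_int()))  # 95% confidence intervals on the rate-ratio scale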

    Kidney Transplantation in Sensitized Recipients: A Single-Center Experience

    Successful transplantation across a positive crossmatch barrier is one of the most persistent, long-standing problems in the field of kidney transplant medicine. The aim of this study was to describe seven consecutive living donor renal transplantations in recipients with a positive crossmatch for their donors or positive donor-specific antibodies (DSAs). A preconditioning regimen including plasmapheresis and intravenous immunoglobulin was delivered three times a week until the crossmatch and/or DSAs became negative. Mycophenolate mofetil and tacrolimus were started two days before plasmapheresis. The protocol was modified to include administration of an anti-CD20 antibody (rituximab, 375 mg/m²) from patient 3 through patient 7. All seven patients achieved negative conversion of the crossmatch or DSAs, and kidney transplantation was successfully performed in all cases. Acute cellular rejection occurred in two patients; both episodes were subclinical and controlled with high-dose steroid treatment. Antibody-mediated rejection occurred in one patient and was easily reversed with plasmapheresis. All recipients attained normal graft function during 7-24 months of follow-up. Our study suggests that sensitized patients can be transplanted successfully with desensitization pretreatment.

    Urinary-Cell mRNA Profile and Acute Cellular Rejection in Kidney Allografts

    Background: The standard test for the diagnosis of acute rejection in kidney transplants is the renal biopsy. Noninvasive tests would be preferable. Methods: We prospectively collected 4,300 urine specimens from 485 kidney-graft recipients from day 3 through month 12 after transplantation. Messenger RNA (mRNA) levels were measured in urinary cells and correlated with allograft-rejection status with the use of logistic regression. Results: A three-gene signature of 18S ribosomal RNA (rRNA)-normalized measures of CD3ε mRNA and interferon-inducible protein 10 (IP-10) mRNA, together with 18S rRNA, discriminated between biopsy specimens showing acute cellular rejection and those not showing rejection (area under the curve [AUC], 0.85; 95% confidence interval [CI], 0.78 to 0.91; P<0.001 by receiver-operating-characteristic curve analysis). The cross-validation estimate of the AUC was 0.83 by bootstrap resampling, and the Hosmer-Lemeshow test indicated good fit (P = 0.77). In an external-validation data set, the AUC was 0.74 (95% CI, 0.61 to 0.86; P<0.001) and did not differ significantly from the AUC in our primary data set (P = 0.13). The signature distinguished acute cellular rejection from acute antibody-mediated rejection and borderline rejection (AUC, 0.78; 95% CI, 0.68 to 0.89; P<0.001). It also distinguished patients who received anti-interleukin-2 receptor antibodies from those who received T-cell-depleting antibodies (P<0.001) and was diagnostic of acute cellular rejection in both groups. Urinary tract infection did not affect the signature (P = 0.69). The average trajectory of the signature in repeated urine samples remained below the diagnostic threshold for acute cellular rejection in the group of patients with no rejection, but in the group with rejection there was a sharp rise during the weeks before the biopsy showing rejection (P<0.001). Conclusions: A molecular signature of CD3ε mRNA, IP-10 mRNA, and 18S rRNA levels in urinary cells appears to be diagnostic and prognostic of acute cellular rejection in kidney allografts.
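
    The signature is essentially a logistic-regression composite of three log-transformed, 18S-normalized measurements, evaluated by ROC analysis. The sketch below shows only that general pattern, with hypothetical column names, positive measurement values assumed, and a simple percentile bootstrap for the AUC rather than the paper's cross-validation scheme.

        # Sketch of a three-gene urinary-cell signature as a logistic regression and
        # its ROC AUC with a bootstrap interval; column names are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.metrics import roc_auc_score

        d = pd.read_csv("urine_mrna.csv")  # cd3e_18s, ip10_18s, rrna_18s, acr (1 = acute cellular rejection)

        fit = smf.logit("acr ~ np.log10(cd3e_18s) + np.log10(ip10_18s) + np.log10(rrna_18s)", data=d).fit()
        score = fit.predict(d)  # composite diagnostic score (predicted probability)
        print("AUC:", roc_auc_score(d["acr"], score))

        # Simple nonparametric bootstrap for the AUC
        rng = np.random.default_rng(0)
        boots = []
        for _ in range(1000):
            idx = rng.integers(0, len(d), len(d))
            if d["acr"].iloc[idx].nunique() == 2:  # need both classes in the resample
                boots.append(roc_auc_score(d["acr"].iloc[idx], score.iloc[idx]))
        print("95% CI:", np.percentile(boots, [2.5, 97.5]))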

    Evaluating compulsory minimum volume standards in Germany: how many hospitals were compliant in 2004?

    Background: Minimum hospital procedure volumes are discussed as an instrument for quality assurance. In 2004, Germany introduced such annual minimum volumes nationwide for five surgical procedures: kidney, liver, and stem cell transplantation, and complex oesophageal and pancreatic interventions. The present investigation is the first part of a study evaluating the effects of these minimum volumes on health care provision. The research questions address how many hospitals and cases were affected by the minimum volume regulations in 2004, how the affected hospitals were distributed according to minimum volumes, and how many hospitals within the 16 German states complied with the standards set for 2004. Methods: The evaluation is based on the mandatory hospital quality reports for 2004. In the reports, all hospitals are statutorily obliged to state the number of procedures performed for each minimum volume. The data were analyzed descriptively. Results: In 2004, 485 of 1,710 German hospitals providing acute care and approximately 0.14% of all hospital cases were affected by minimum volume regulations. Liver, kidney, and stem cell transplantation affected from 23 to hospitals; complex oesophageal and pancreatic interventions affected from 297 to 455 hospitals. The inter-state comparison of the average hospital care area demonstrates large differences between the city states and the large-area states and between the eastern and western German states, ranging from a minimum of 51 km² up to a maximum of 23,200 km², varying by procedure. Between 9% and 16% of the transplantation hospitals did not comply with the standards, affecting 1%-2% of the patients, whereas 29% and 18% of the hospitals performing complex oesophageal and pancreatic interventions, respectively, failed the standards, affecting 2%-5% of the prevailing cases. Conclusion: In 2004, the newly introduced minimum volume regulations affected only up to a quarter of German acute care hospitals and few cases. However, excluding the hospitals that do not meet the minimum volume standards from providing the respective procedures requires consideration of two aspects: the hospital health care provision concepts of the German states, which bear responsibility, and, from the patient perspective, geographically equal access to hospital care.

    Effect of surgically placed abdominal wall catheters on postoperative analgesia and outcomes after living liver donation

    Living donor liver resections are associated with significant postoperative pain. Epidural analgesia is the gold standard for postoperative pain management, although it is often refused or contraindicated. Surgically placed abdominal wall catheters (AWCs) are a novel pain modality that can potentially provide pain relief for those patients who are unable to receive an epidural. A retrospective review was performed at a single center. Patients were categorized according to their postoperative pain modality: intravenous (IV) patient-controlled analgesia (PCA), AWCs with IV PCA, or patient-controlled epidural analgesia (PCEA). Pain scores, opioid consumption, and outcomes were compared for the first 3 postoperative days. Propensity score-matched (PSM) analyses were performed to adjust for covariates and to confirm the primary analysis. The AWC group had significantly lower mean morphine-equivalent consumption on postoperative day 3 [18.1 mg, standard error (SE) = 3.1, versus 28.2 mg, SE = 3.0; P = 0.02] and mean cumulative morphine-equivalent consumption (97.2 mg, SE = 7.2, versus 121.0 mg, SE = 9.1; P = 0.04) in comparison with the IV PCA group; the difference in cumulative morphine-equivalent consumption remained significant in the PSM analyses. AWC pain scores were higher than those in the PCEA group and similar to those in the IV PCA group. The AWC group had a lower incidence of pruritus and a shorter hospital stay than the PCEA group and a lower incidence of sedation than both other groups. Time to ambulation, nausea, and vomiting were comparable among all 3 groups. The PSM analyses confirmed all results except for the decrease in length of stay in comparison with PCEA. AWCs may be an alternative to epidural analgesia after living donor liver resections. Randomized trials are needed to verify the benefits of AWCs, including safety and adverse effects. James Khan is supported by a master's award from the Canadian Institutes of Health Research and a fellowship grant from the Michael G. DeGroote Institute of Pain Research and Care. Hance Clarke is supported by a merit award from the Department of Anesthesia at the University of Toronto and by the Strategic Training for Advanced Genetic Epidemiology Program at the Canadian Institutes of Health Research.
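
    Propensity score matching of the kind used to confirm the primary analysis can be sketched as a logistic propensity model followed by nearest-neighbor matching on the score. The example below is illustrative only: file and column names are hypothetical, covariates are assumed to be numeric, and matching is 1:1 with replacement and no caliper for brevity.

        # Sketch of 1:1 nearest-neighbor propensity score matching between the AWC
        # and IV PCA groups; dataset and column names are hypothetical.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        d = pd.read_csv("donors.csv")  # group ('AWC' or 'IVPCA'), age, sex, bmi, morphine_eq_total, ...
        covs = ["age", "sex", "bmi"]   # covariates assumed already numeric (e.g., sex coded 0/1)

        d = d[d["group"].isin(["AWC", "IVPCA"])].copy()
        d["treated"] = (d["group"] == "AWC").astype(int)

        # Propensity score: probability of being in the AWC group given the covariates
        ps_model = LogisticRegression(max_iter=1000).fit(d[covs], d["treated"])
        d["ps"] = ps_model.predict_proba(d[covs])[:, 1]

        treated, control = d[d["treated"] == 1], d[d["treated"] == 0]
        nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
        dist, idx = nn.kneighbors(treated[["ps"]])
        matched = pd.concat([treated, control.iloc[idx.ravel()]])

        # Compare cumulative morphine-equivalent consumption in the matched sample
        print(matched.groupby("treated")["morphine_eq_total"].mean())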

    Public appraisal of government efforts and participation intent in medico-ethical policymaking in Japan: a large-scale national survey concerning brain death and organ transplant

    BACKGROUND: Public satisfaction with the policy process influences the legitimacy and acceptance of policies and conditions the future political process, especially when contending ethical value judgments are involved. On the other hand, public involvement is required if effective policy is to be developed and accepted. METHODS: Using data from a large-scale national opinion survey, this study evaluates public appraisal of past government efforts to legalize organ transplant from brain-dead bodies in Japan and examines the public's intent to participate in future policymaking. RESULTS: A relatively large percentage of people became aware of the issue when government actions were initiated, and many increasingly formed their own opinions on the policy in question. However, a significant number (43.3%) remained unaware of any legislative efforts, and only 26.3% of those who were aware provided positive appraisals of the policymaking process. Furthermore, a majority of respondents (61.8%) indicated unwillingness to participate in future policy discussions of bioethical issues. Multivariate analysis revealed that the following factors are associated with positive appraisals of policy development: greater age, earlier opinion formation, and familiarity with donor cards. Factors associated with likelihood of future participation in policy discussion include younger age, earlier attention to the issue, and knowledge of past government efforts. Those unwilling to participate cited as their reasons that experts are more knowledgeable and that the issues are too complex. CONCLUSIONS: The results of an opinion survey in Japan were presented, and the factors statistically associated with appraisal of past policymaking and with intent to participate in future policymaking were discussed. Further efforts to improve the policymaking process on bioethical issues are desirable.