
    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder characterised by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (n = 483). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) for CFS 4 vs 1–3; OR 12.4 (6.2–24.5) for CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) for CFS 4 vs 0.2 (0.1–0.7) for CFS 8]. Both associations were independent of age and dementia. Conclusion: We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
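    As a rough illustration of how the odds ratios above are read, the sketch below computes an OR from a 2x2 table of delirium by frailty stratum. The counts are invented for demonstration only and are not the study's data.

```python
# Hypothetical 2x2-table illustration of an odds ratio (OR); the counts
# below are invented, chosen only so the result mirrors the reported
# OR of ~2.9 for CFS 4 vs CFS 1-3.

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """OR = (a/b) / (c/d) for a 2x2 table of cases (delirium) vs controls."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Delirium vs no delirium among CFS 4 patients, compared with CFS 1-3 patients:
print(odds_ratio(29, 100, 10, 100))  # 2.9
```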

    MELD Score Is an Important Predictor of Pretransplantation Mortality in HIV-Infected Liver Transplant Candidates

    Human immunodeficiency virus (HIV) infection accelerates liver disease progression in patients with hepatitis C virus (HCV) and could shorten survival of those awaiting liver transplants. The Model for End-Stage Liver Disease (MELD) score predicts mortality in HIV-negative transplant candidates, but its reliability has not been established in HIV-positive candidates. We evaluated predictors of pretransplantation mortality in HIV-positive liver transplant candidates enrolled in the Solid Organ Transplantation in HIV: Multi-Site Study (HIVTR) matched 1:5 by age, sex, race, and HCV infection with HIV-negative controls from the United Network for Organ Sharing. Of 167 HIVTR candidates, 24 died (14.4%); this mortality rate was similar to that of controls (88/792, 11.1%, P = .30) with no significant difference in causes of mortality. A significantly lower proportion of HIVTR candidates (34.7%) underwent liver transplantation, compared with controls (47.6%, P = .003). In the combined cohort, baseline MELD score predicted pretransplantation mortality (hazard ratio [HR], 1.27; P < .0001), whereas HIV infection did not (HR, 1.69; P = .20). After controlling for pretransplantation CD4+ cell count and HIV RNA levels, the only significant predictor of mortality in the HIV-infected subjects was pretransplantation MELD score (HR, 1.2; P < .0001). Pretransplantation mortality characteristics are similar between HIV-positive and HIV-negative candidates. Although lower CD4+ cell counts and detectable levels of HIV RNA might be associated with a higher rate of pretransplantation mortality, baseline MELD score was the only significant independent predictor of pretransplantation mortality in HIV-infected liver transplant candidates.
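    Under the usual Cox proportional-hazards reading, a per-point hazard ratio compounds multiplicatively across MELD points. The sketch below applies that arithmetic to the reported HR of 1.27; it illustrates the interpretation and is not code from the study.

```python
# How a per-MELD-point hazard ratio compounds (standard Cox interpretation).
hr_per_meld_point = 1.27  # reported HR for pretransplantation mortality

def relative_hazard(meld_difference, hr_per_point=hr_per_meld_point):
    """Relative hazard for a candidate whose MELD is `meld_difference` points higher."""
    return hr_per_point ** meld_difference

print(relative_hazard(5))   # ~3.3x the hazard for a 5-point higher MELD
print(relative_hazard(10))  # ~10.9x for a 10-point difference
```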

    Center-Related Bias in MELD Scores Within a Liver Transplant UNOS Region: A Call for Standardization

    BACKGROUND: MELD score-based liver transplant allocation was implemented as a fair and objective measure to prioritize patients based upon disease severity. Accuracy and reproducibility of MELD is an essential assumption to ensure fairness in organ access. We hypothesized that variability or bias in laboratory methodology between centers could alter allocation scores for individuals on the transplant waiting list. METHODS: Aliquots of 30 patient serum samples were analyzed for creatinine, bilirubin, and sodium in all transplant centers within United Network for Organ Sharing (UNOS) region 9. Descriptive statistics, intraclass correlation coefficients (ICC), and linear mixed effects regression were used to determine the relationship between center, bilirubin, and calculated MELD-Na. RESULTS: The mean MELD-Na score per sample ranged from 14 to 38. The mean range in MELD-Na per sample was 3 points, but 30% of samples had a range of 4–6 points. Correlation plots and intraclass correlation coefficient analysis confirmed that bilirubin interfered with creatinine measurement, with worsening agreement in creatinine at high bilirubin levels. Center and bilirubin were independently associated with reported creatinine in mixed effects models. Unbiased hierarchical clustering suggested that samples from specific centers had consistently higher creatinine and MELD-Na values. CONCLUSIONS: Despite implementation of creatinine standardization, centers within one UNOS region report clinically significant differences in MELD-Na on an identical sample, with differences of up to 6 points in high MELD-Na patients. The bias in MELD-Na scores based upon center choice within a region should be addressed in the current efforts to eliminate disparities in liver transplant access.
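    To see why small between-laboratory shifts in creatinine can move an allocation score by whole points, the sketch below implements the standard UNOS MELD-Na arithmetic (lab values floored at 1.0, creatinine capped at 4.0, sodium bounded to 125-137 mEq/L, score capped at 40). The sample lab values are hypothetical, not the study's aliquots.

```python
import math

def meld_na(creatinine, bilirubin, inr, sodium, on_dialysis=False):
    """Hedged sketch of the standard UNOS MELD-Na calculation."""
    cr = 4.0 if on_dialysis else min(max(creatinine, 1.0), 4.0)
    bili = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    meld = round(10 * (0.957 * math.log(cr) + 0.378 * math.log(bili)
                       + 1.120 * math.log(inr) + 0.643))
    if meld > 11:  # the sodium adjustment applies above MELD 11
        na = min(max(sodium, 125), 137)
        meld = round(meld + 1.32 * (137 - na) - 0.033 * meld * (137 - na))
    return min(meld, 40)

# The same hypothetical sample, with creatinine reported slightly differently:
print(meld_na(2.0, 10.0, 2.0, 130))  # 32 at center A
print(meld_na(2.4, 10.0, 2.0, 130))  # 33 at center B: a 0.4 mg/dL shift costs a point
```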

    Phase I Trial of Sorafenib Following Liver Transplantation in Patients with High-Risk Hepatocellular Carcinoma

    Liver transplantation offers excellent long-term survival for hepatocellular carcinoma (HCC) patients who fall within established criteria. For those outside such criteria, or with high-risk pathologic features in the explant, HCC recurrence rates are higher. We conducted a multicenter phase I trial of sorafenib in liver transplantation patients with high-risk HCC. Subjects had HCC outside the Milan criteria (pre- or post-transplant), poorly differentiated tumors, or vascular invasion. We used a standard 3+3 phase I design with a planned duration of treatment of 24 weeks. Correlative studies included the number of circulating endothelial cells (CECs), plasma biomarkers, and tumor expression of p-Erk, p-Akt, and c-Met in tissue microarrays. We enrolled 14 patients with a median age of 63 years. Of these, 93% were men, 71% had underlying hepatitis C virus (HCV), and 21% had hepatitis B virus (HBV). The maximum tolerated dose of sorafenib was 200 mg BID. Grade 3–4 toxicities seen in >10% of subjects included leukopenia (21%), elevated gamma-glutamyl transferase (21%), hypertension (14%), hand-foot syndrome (14%), and diarrhea (14%). Over a median follow-up of 953 days, one patient died and four had recurrence. The mean CEC number at baseline was 21 cells/4 ml for those with recurrence and 80 cells/4 ml for those without (p=0.10). Mean soluble vascular endothelial growth factor receptor-2 levels decreased after 1 month on sorafenib (p=0.09), but did not correlate with recurrence. There was a trend for tumor c-Met expression to correlate with increased risk of recurrence. Post-transplant sorafenib was found to be feasible and tolerable at 200 mg PO BID. The effect of post-transplant sorafenib on recurrence-free survival is potentially promising but needs further validation in a larger study.
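    For readers unfamiliar with the "standard 3+3" design named above, the sketch below encodes the textbook escalation rule; the trial's exact operating procedures may have differed in detail.

```python
# Textbook 3+3 dose-escalation decision rule (a sketch, not the trial protocol).

def three_plus_three(dlt_count, cohort_size):
    """Decision after observing dose-limiting toxicities (DLTs) in a cohort."""
    if cohort_size == 3:
        if dlt_count == 0:
            return "escalate to the next dose level"
        if dlt_count == 1:
            return "expand to 6 patients at the same dose"
        return "stop; the MTD is the previous dose level"
    if cohort_size == 6:
        if dlt_count <= 1:
            return "escalate to the next dose level"
        return "stop; the MTD is the previous dose level"
    raise ValueError("3+3 cohorts are evaluated at sizes 3 and 6")

print(three_plus_three(1, 3))  # expand to 6
print(three_plus_three(1, 6))  # escalate
```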

    Outcomes of liver transplant recipients with hepatitis C and human immunodeficiency virus coinfection

    Hepatitis C virus (HCV) is a controversial indication for liver transplantation (LT) in human immunodeficiency virus (HIV)-infected patients because of reportedly poor outcomes. This prospective, multicenter US cohort study compared patient and graft survival for 89 HCV/HIV-coinfected patients and 2 control groups: 235 HCV-monoinfected LT controls and all US transplant recipients who were 65 years old or older. The 3-year patient and graft survival rates were 60% [95% confidence interval (CI) = 47%-71%] and 53% (95% CI = 40%-64%) for the HCV/HIV patients and 79% (95% CI = 72%-84%) and 74% (95% CI = 66%-79%) for the HCV-infected recipients (P < 0.001 for both), and HIV infection was the only factor significantly associated with reduced patient and graft survival. Among the HCV/HIV patients, older donor age [hazard ratio (HR) = 1.3 per decade], combined kidney-liver transplantation (HR = 3.8), an anti-HCV-positive donor (HR = 2.5), and a body mass index < 21 kg/m² (HR = 3.2) were independent predictors of graft loss. For the patients without the last 3 factors, the patient and graft survival rates were similar to those for US LT recipients. The 3-year incidence of treated acute rejection was 1.6-fold higher for the HCV/HIV patients versus the HCV patients (39% versus 24%, log rank P = 0.02), but the cumulative rates of severe HCV disease at 3 years were not significantly different (29% versus 23%, P = 0.21). In conclusion, patient and graft survival rates are lower for HCV/HIV-coinfected LT patients versus HCV-monoinfected LT patients. Importantly, the rates of treated acute rejection (but not the rates of HCV disease severity) are significantly higher for HCV/HIV-coinfected recipients versus HCV-infected recipients. Our results indicate that HCV per se is not a contraindication to LT in HIV patients, but recipient and donor selection and the management of acute rejection strongly influence outcomes. © 2012 American Association for the Study of Liver Diseases
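    Survival rates and confidence intervals like those above are typically derived with the Kaplan-Meier product-limit estimator. The minimal sketch below evaluates the estimate at a 3-year horizon on invented follow-up data; it is not the study cohort and omits the confidence-interval calculation.

```python
# Minimal Kaplan-Meier product-limit estimate (invented data, no CIs).

def kaplan_meier(times, events, horizon):
    """Survival probability at `horizon` from (time, event) pairs (event=1: death)."""
    at_risk = len(times)
    survival = 1.0
    # Process deaths before censorings at tied times, per the usual convention.
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if t > horizon:
            break
        if e == 1:
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1
    return survival

# Hypothetical follow-up for ten recipients (years; 1 = death):
t = [0.5, 1.1, 1.4, 2.0, 2.5, 2.8, 3.0, 3.0, 3.0, 3.0]
e = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
print(f"{kaplan_meier(t, e, 3.0):.0%}")  # ~68% 3-year survival for this toy cohort
```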