4 research outputs found

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder delineated by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how the severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (483 patients). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) for CFS 4 vs 1–3; OR 12.4 (6.2–24.5) for CFS 8 vs 1–3]. Higher CFS was also associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) for CFS 4 compared with 0.2 (0.1–0.7) for CFS 8]. Both associations were independent of age and dementia. Conclusion: We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
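
    The odds ratios above come from models adjusted for age and dementia; as a rough illustration of the underlying arithmetic only, the following minimal Python sketch computes an unadjusted odds ratio with a Wald (Woolf) 95% confidence interval from a 2x2 table. The counts are hypothetical, not the study's data.

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            # 2x2 table: a = delirium, b = no delirium in the frail (exposed) group;
            # c = delirium, d = no delirium in the reference group.
            or_ = (a * d) / (b * c)
            se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's SE of ln(OR)
            lo = math.exp(math.log(or_) - z * se_log_or)
            hi = math.exp(math.log(or_) + z * se_log_or)
            return or_, (lo, hi)

        # Hypothetical counts for illustration only:
        # 60 of 240 CFS-8 patients delirious vs 30 of 570 CFS 1-3 patients.
        print(odds_ratio_ci(60, 180, 30, 540))  # OR = 6.0, 95% CI approx. (3.8, 9.6)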

    Center-Related Bias in MELD Scores Within a Liver Transplant UNOS Region: A Call for Standardization

    BACKGROUND: MELD score-based liver transplant allocation was implemented as a fair and objective measure to prioritize patients based upon disease severity. The accuracy and reproducibility of MELD is an essential assumption to ensure fairness in organ access. We hypothesized that variability or bias in laboratory methodology between centers could alter allocation scores for individuals on the transplant waiting list. METHODS: Aliquots of 30 patient serum samples were analyzed for creatinine, bilirubin, and sodium at all transplant centers within United Network for Organ Sharing (UNOS) region 9. Descriptive statistics, intraclass correlation coefficients (ICC), and linear mixed-effects regression were used to determine the relationship between center, bilirubin, and calculated MELD-Na. RESULTS: The mean MELD-Na score per sample ranged from 14 to 38. The mean range in MELD-Na per sample was 3 points, but 30% of samples had a range of 4-6 points. Correlation plots and ICC analysis confirmed that bilirubin interfered with creatinine measurement, with worsening agreement in creatinine at high bilirubin levels. Center and bilirubin were independently associated with the reported creatinine in mixed-effects models. Unbiased hierarchical clustering suggested that samples from specific centers yield consistently higher creatinine and MELD-Na values. CONCLUSIONS: Despite the implementation of creatinine standardization, centers within one UNOS region report clinically significant differences in MELD-Na for an identical sample, with differences of up to 6 points in high-MELD-Na patients. This center-dependent bias in MELD-Na scores within a region should be addressed in the current efforts to eliminate disparities in liver transplant access.
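
    To see why small laboratory differences matter, the sketch below implements the commonly published MELD-Na formula (the classic UNOS variant; OPTN policy adds further rules, e.g. for dialysis, that are omitted here) and shows how a hypothetical 0.4 mg/dL creatinine discrepancy at high bilirubin can shift an allocation score by 2 points.

        import math

        def meld_na(bilirubin_mg_dl, inr, creatinine_mg_dl, sodium_meq_l):
            # Sketch of the classic UNOS MELD-Na; consult OPTN policy for the
            # authoritative rules (dialysis handling, rounding conventions, etc.).
            bili = max(bilirubin_mg_dl, 1.0)               # values < 1 floored at 1
            r = max(inr, 1.0)
            cr = min(max(creatinine_mg_dl, 1.0), 4.0)      # creatinine bounded to [1.0, 4.0]
            meld = round(10 * (0.957 * math.log(cr)
                               + 0.378 * math.log(bili)
                               + 1.120 * math.log(r)
                               + 0.643))
            na = min(max(sodium_meq_l, 125.0), 137.0)      # sodium bounded to [125, 137]
            if meld > 11:                                  # sodium adjustment applies above 11
                meld = meld + 1.32 * (137 - na) - 0.033 * meld * (137 - na)
            return min(round(meld), 40)                    # scores capped at 40

        # Hypothetical labs for the same sample reported by two centers:
        print(meld_na(20.0, 2.0, 1.8, 130))  # center A reports creatinine 1.8 -> MELD-Na 33
        print(meld_na(20.0, 2.0, 2.2, 130))  # center B reports creatinine 2.2 -> MELD-Na 35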

    Outcomes of liver transplant recipients with hepatitis C and human immunodeficiency virus coinfection

    Hepatitis C virus (HCV) is a controversial indication for liver transplantation (LT) in human immunodeficiency virus (HIV)-infected patients because of reportedly poor outcomes. This prospective, multicenter US cohort study compared patient and graft survival for 89 HCV/HIV-coinfected patients and 2 control groups: 235 HCV-monoinfected LT controls and all US transplant recipients who were 65 years old or older. The 3-year patient and graft survival rates were 60% [95% confidence interval (CI) = 47%-71%] and 53% (95% CI = 40%-64%) for the HCV/HIV patients and 79% (95% CI = 72%-84%) and 74% (95% CI = 66%-79%) for the HCV-infected recipients (P < 0.001 for both), and HIV infection was the only factor significantly associated with reduced patient and graft survival. Among the HCV/HIV patients, older donor age [hazard ratio (HR) = 1.3 per decade], combined kidney-liver transplantation (HR = 3.8), an anti-HCV-positive donor (HR = 2.5), and a body mass index < 21 kg/m² (HR = 3.2) were independent predictors of graft loss. For the patients without the last 3 factors, the patient and graft survival rates were similar to those for US LT recipients. The 3-year incidence of treated acute rejection was 1.6-fold higher for the HCV/HIV patients versus the HCV patients (39% versus 24%, log rank P = 0.02), but the cumulative rates of severe HCV disease at 3 years were not significantly different (29% versus 23%, P = 0.21). In conclusion, patient and graft survival rates are lower for HCV/HIV-coinfected LT patients versus HCV-monoinfected LT patients. Importantly, the rates of treated acute rejection (but not the rates of HCV disease severity) are significantly higher for HCV/HIV-coinfected recipients versus HCV-infected recipients. Our results indicate that HCV per se is not a contraindication to LT in HIV patients, but recipient and donor selection and the management of acute rejection strongly influence outcomes. © 2012 American Association for the Study of Liver Diseases
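
    As a sketch of how such survival comparisons are typically performed (Kaplan-Meier estimation plus a log-rank test), the Python example below uses the lifelines library on simulated data that merely echoes the reported 3-year graft survival proportions under a constant-hazard assumption; it is not the study's analysis or data.

        import numpy as np
        from lifelines import KaplanMeierFitter
        from lifelines.statistics import logrank_test

        rng = np.random.default_rng(0)

        def simulate(n, surv_36m):
            # Exponential event times calibrated so that survival at 36 months
            # equals surv_36m, with administrative censoring at 24-60 months.
            hazard = -np.log(surv_36m) / 36.0
            t = rng.exponential(1.0 / hazard, size=n)
            censor = rng.uniform(24.0, 60.0, size=n)
            observed = t <= censor
            return np.minimum(t, censor), observed

        # Cohort sizes and 3-year graft survival taken from the abstract:
        t_co, e_co = simulate(89, 0.53)       # HCV/HIV-coinfected arm
        t_mono, e_mono = simulate(235, 0.74)  # HCV-monoinfected arm

        kmf = KaplanMeierFitter()
        kmf.fit(t_co, event_observed=e_co, label="HCV/HIV")
        print(kmf.survival_function_at_times(36.0))  # estimated 3-year graft survival

        res = logrank_test(t_co, t_mono, event_observed_A=e_co, event_observed_B=e_mono)
        print(res.p_value)  # two-group log-rank comparison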