16 research outputs found

    The model of mortality with incident cirrhosis (MoMIC) and the model of long-term outlook of mortality in cirrhosis (LOMiC)

    The purpose of this study was to produce two statistical survival models in people with cirrhosis utilising only routine parameters, including non-liver-related clinical factors that influence survival. The first model identified and utilised factors affecting short-term survival to 90 days post incident diagnosis, and a further model characterised factors affecting survival beyond this acute phase. Data were from the Clinical Practice Research Datalink linked with Hospital Episode Statistics. Incident cases in patients ≥18 years were identified between 1998 and 2014. Patients with a prior history of cancer or a prior liver transplant were excluded. Model-1 used logistic regression to predict 90-day mortality. Model-2 used data from those patients who survived 90 days, and used an extension of the Cox regression model adjusting for time-dependent covariables. At 90 days, 23% of patients had died. Overall median survival was 3.7 years. Model-1 incorporated numerous predictors, including prior comorbidities and decompensating events. All comorbidities contributed to increased odds of death, with renal disease having the largest adjusted odds ratio (OR = 3.35, 95% CI 2.97–3.77). Model-2 covariables included cumulative admissions for liver disease-related events and admissions for infections. Significant covariates were renal disease (adjusted hazard ratio (aHR) = 2.89, 2.47–3.38), elevated bilirubin levels (aHR = 1.38, 1.26–1.51) and low sodium levels (aHR = 2.26, 1.84–2.78). An internal validation demonstrated the reliability of both models. In conclusion, two survival models that included parameters commonly recorded in routine clinical practice were generated that reliably forecast the risk of death in patients with cirrhosis, both in the acute post-diagnosis phase and beyond this critical 90-day phase. This has implications for practice, helping to forecast the risk of mortality from cirrhosis using routinely recorded parameters without input from specialists.
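The adjusted odds ratios reported for Model-1 come from exponentiating logistic-regression coefficients. A minimal sketch of that conversion, using a hypothetical coefficient and standard error (not the study's fitted values, though these particular numbers land close to the renal-disease estimate quoted above):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and standard error for a comorbidity indicator
or_, lo, hi = odds_ratio_ci(beta=1.209, se=0.061)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The same exponentiation applies to the Cox model's coefficients in Model-2, yielding hazard rather than odds ratios.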

    Healthcare resource utilization and related financial costs associated with glucose lowering with either exenatide or basal insulin: a retrospective cohort study

    Aims: Type 2 diabetes is a major health problem placing increasing demands on healthcare systems. Our objective was to estimate healthcare resource use and related financial costs following treatment with exenatide‐based regimens prescribed as once‐weekly (EQW) or twice‐daily (EBID) formulations, compared with regimens based on basal insulin (BI). Materials and methods: This retrospective cohort study used data from the UK Clinical Practice Research Datalink (CPRD) linked to Hospital Episode Statistics (HES). Patients with type 2 diabetes who received exenatide or BI between 2009 and 2014 as their first recorded exposure to injectable therapy were selected. Costs were attributed to primary care contacts, diabetes‐related prescriptions and inpatient admissions using standard UK healthcare costing methods (2014 prices). Frequency and costs were compared between cohorts before and after matching by propensity score using Poisson regression. Results: Groups of 8723, 218 and 2180 patients receiving BI, EQW and EBID, respectively, were identified; 188 and 1486 patients receiving EQW and EBID, respectively, were matched 1:1 to patients receiving BI by propensity score. Among unmatched cohorts, total crude mean costs per patient‐year were £2765 for EQW, £2549 for EBID and £4080 for BI. Compared with BI, the adjusted annual cost ratio (aACR) was 0.92 (95% CI, 0.91‐0.92) for EQW and 0.82 (95% CI, 0.82‐0.82) for EBID. Corresponding costs for the propensity‐matched subgroups were £2646 vs £3283 (aACR, 0.80, 0.80‐0.81) for EQW vs BI and £2532 vs £3070 (aACR, 0.84, 0.84‐0.84) for EBID vs BI. Conclusion: Overall, exenatide once‐weekly and twice‐daily‐based regimens were associated with reduced healthcare resource use and costs compared with basal‐insulin‐based regimens.
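The 1:1 propensity-score matching described above is commonly implemented as greedy nearest-neighbour matching within a caliper; the study does not specify its algorithm, so the following is a generic sketch with hypothetical patient identifiers and scores:

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on propensity score,
    without replacement. `treated` and `controls` are lists of
    (patient_id, propensity_score) pairs; returns matched id pairs."""
    available = sorted(controls, key=lambda c: c[1])
    pairs = []
    for tid, ps in sorted(treated, key=lambda t: t[1]):
        if not available:
            break
        # nearest remaining control by absolute score distance
        j = min(range(len(available)),
                key=lambda k: abs(available[k][1] - ps))
        cid, cps = available[j]
        if abs(cps - ps) <= caliper:
            pairs.append((tid, cid))
            available.pop(j)   # matching without replacement
    return pairs

# Toy example: t1 pairs with c1, t2 with c2; c3 is outside the caliper
pairs = greedy_match([("t1", 0.30), ("t2", 0.70)],
                     [("c1", 0.31), ("c2", 0.65), ("c3", 0.90)])
```

In practice the propensity score itself would first be estimated from baseline covariates, e.g. with a logistic regression of treatment assignment.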

    Real-world evidence from the first online healthcare analytics platform—Livingstone. Validation of its descriptive epidemiology module

    Incidence and prevalence are key epidemiological determinants characterizing the quantum of a disease. We compared incidence and prevalence estimates derived automatically from the first ever online, essentially real-time, healthcare analytics platform, Livingstone, against findings from comparable peer-reviewed studies in order to validate its descriptive epidemiology module. The source of routine NHS data for Livingstone was the Clinical Practice Research Datalink (CPRD). After applying a general search strategy looking for any disease or condition, 76 relevant studies were first retrieved, of which 10 met pre-specified inclusion and exclusion criteria. Findings reported in these studies were compared with estimates produced automatically by Livingstone. The published reports described elements of the epidemiology of 14 diseases or conditions. Lin's concordance correlation coefficient (CCC) was used to evaluate the concordance between findings from Livingstone and those detailed in the published studies. The concordance of incidence values in the final year reported by each study versus Livingstone was 0.96 (95% CI: 0.89–0.98), whilst for all annual incidence values the concordance was 0.93 (0.91–0.94). For prevalence, concordance for the final annual prevalence reported in each study versus Livingstone was 1.00 (0.99–1.00), and for all reported annual prevalence values the concordance was 0.93 (0.90–0.95). The concordance between Livingstone and the latest published findings was near perfect for prevalence and substantial for incidence. For the first time, it is now possible to automatically generate reliable descriptive epidemiology from routine health records, in near-real time. Livingstone provides the first mechanism to rapidly generate standardised, descriptive epidemiology for all clinical events from real-world data.
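Lin's CCC, the agreement statistic used above, penalises both low correlation and systematic shifts between the two series. A minimal self-contained sketch (population definitions of variance and covariance, i.e. dividing by n):

```python
from statistics import fmean

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    series of estimates: 2*cov(x,y) / (var(x) + var(y) + (mean
    difference)^2). Equals 1 only for perfect agreement."""
    n = len(x)
    mx, my = fmean(x), fmean(y)
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Unlike Pearson's r, a constant offset between the two series lowers the CCC, which is why it suits validation of one estimator against another.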

    Evaluation of the healthcare resource use and the related financial costs of managing peanut allergy in the United Kingdom

    Aims: We aimed to estimate the resource use and associated costs for patients with peanut allergy (PA) compared to matched controls. Methods: This was a retrospective cohort study using data from the UK Clinical Practice Research Datalink and Hospital Episode Statistics. PA patients were matched to two control cohorts: the first (simple-matched) were matched 1:1 on year of birth, general practice, gender and registration year; the second (atopy-matched) were matched on the same characteristics plus presence/absence of an atopic condition. Prescriptions and primary and secondary care contacts were compared between cases and controls. Results: 15,483 peanut-allergic patients were identified: 13,609 (87.9%) were simple-matched and 9,320 (60.2%) atopy-matched. The total per-person annual incremental health-care costs associated with PA were £253 (atopy-matched) and £333 (simple-matched). For those with PA and a prior anaphylaxis, incremental costs were £662; for those prescribed an epinephrine autoinjector, £392. Extrapolated to the U.K. population, total excess costs of PA were between £33 million and £44 million in 2015. Conclusions: Patients with PA had increased health-care contacts and consequently increased associated costs compared to controls. Observation bias should be considered in interpretation, but this study suggests that PA presents a significant burden to health-care systems.

    Major adverse cardiovascular events in people with chronic kidney disease in relation to disease severity and diabetes status

    Diabetes plays an important role in the complex relationship between chronic kidney disease (CKD) and cardiovascular disease. This retrospective observational study compared the influence of estimated glomerular filtration rate (eGFR) and proteinuria on the risk of major adverse cardiovascular event (MACE; myocardial infarction or stroke) in CKD patients with and without diabetes. Data were from a linked database of UK electronic health records. Individuals with CKD and no prior MACE were classified as type 1 diabetes (T1DM; n = 164), type 2 diabetes (T2DM; n = 9,711), and non-diabetes (non-DM; n = 75,789). Monthly updated time-dependent Cox proportional hazard models were constructed to calculate adjusted hazard ratios (aHRs) for progression to MACE from first record of abnormal eGFR or proteinuria (index date). In non-DM, aHRs (95% CIs) by baseline eGFR category (referent G2) were G1: 0.70 (0.55-0.90), G3a: 1.28 (1.20-1.35), G3b: 1.64 (1.52-1.76), G4: 2.19 (1.98-2.43), and G5: 3.12 (2.44-3.99), and by proteinuria category (referent A1) were A2: 1.13 (1.00-1.28), A2/3 (severity indeterminable): 1.58 (1.28-1.95), and A3: 1.64 (1.38-1.95). In T2DM, aHRs were G1: 0.98 (0.72-1.32), G3a: 1.18 (1.03-1.34), G3b: 1.31 (1.12-1.54), G4: 1.87 (1.53-2.29), G5: 2.87 (1.82-4.52), A2: 1.22 (1.04-1.42), A2/3: 1.45 (1.17-1.79), and A3: 1.82 (1.53-2.16). Low numbers in T1DM precluded analysis. Modelling T2DM and non-DM together, aHRs were, respectively, G1: 3.23 (2.38-4.40) and 0.70 (0.55-0.89); G2: 3.18 (2.73-3.70) and 1.00 (referent); G3a: 3.65 (3.13-4.25) and 1.28 (1.21-1.36); G3b: 4.01 (3.40-4.74) and 1.65 (1.54-1.77); G4: 5.78 (4.70-7.10) and 2.21 (2.00-2.45); G5: 9.00 (5.71-14.18) and 3.14 (2.46-4.00). In conclusion, reduced eGFR and proteinuria were independently associated with increased risk of MACE regardless of diabetes status. However, within the same eGFR category, the risk of MACE was 2.4 to 4.6 times higher in T2DM than in non-DM.
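Monthly updated time-dependent Cox models require follow-up data in start-stop (counting-process) format, one row per interval with the covariate value current in that interval. A hedged sketch of that data preparation step, with hypothetical field names (the study's actual variable layout is not reported):

```python
def monthly_intervals(followup_months, egfr_by_month, event_month=None):
    """Expand one patient's follow-up into (start, stop, egfr, event)
    rows, one row per month, for a counting-process Cox analysis.
    `egfr_by_month` maps month index -> eGFR measured that month;
    the last observed value is carried forward between measurements."""
    rows = []
    current = None
    for m in range(followup_months):
        current = egfr_by_month.get(m, current)  # carry value forward
        event = 1 if event_month == m + 1 else 0
        rows.append((m, m + 1, current, event))
        if event:
            break  # follow-up ends at the MACE
    return rows

# Toy patient: eGFR measured at months 0 and 2, MACE at month 3
rows = monthly_intervals(3, {0: 90, 2: 55}, event_month=3)
```

Each row then enters the partial likelihood only for the risk set of its interval, which is what lets the hazard ratio reflect the covariate value in force at each event time.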

    Evaluation of the clinical effectiveness of fluocinolone acetonide 190 µg intravitreal implant in diabetic macular edema: a comparison between study and fellow eyes

    Objectives: To compare visual and anatomical outcomes between eyes treated with fluocinolone acetonide (FAc) 190 µg intravitreal implant for clinically significant chronic diabetic macular edema (DME) and fellow eyes not treated with FAc implant, using data from the Iluvien Clinical Evidence study in the UK (ICE-UK). Methods: In this retrospective cohort study, data on people attending hospital eye services and treated with the FAc implant between April 1, 2013 and April 15, 2015 were collected. Changes in visual acuity (VA), central foveal thickness (CFT) and intraocular pressure (IOP) were compared between study eyes (intervention) and fellow eyes. Results: A total of 208 people were selected. Mean age was 68.1 years and 62% were male. Mean change in VA was −0.09 LogMAR units for study eyes and 0.04 LogMAR units for fellow eyes at 12 months post-implant (p < .001). Over the same period, ≥5 letter, ≥10 letter and ≥15 letter improvements in Early Treatment Diabetic Retinopathy Study (ETDRS) score were achieved by more FAc-treated eyes than by fellow eyes (41% versus 23%, p < .001; 28% versus 11%, p < .001; and 18% versus 4%, p < .001 at 12 months, respectively). Differences in the mean change in CFT (−113 µm versus −13 µm, p < .001) and IOP (3.2 mmHg versus −0.2 mmHg, p < .001) were also observed between study and fellow eyes at 12 months. Conclusion: Visual acuity improved in study eyes over the 12 months following FAc implant and worsened in fellow eyes. Over the same period, study eyes showed a larger improvement in central foveal thickness. Intraocular pressure worsened in study eyes only. Change in visual acuity, central foveal thickness and intraocular pressure between FAc implant and the end of the 12-month follow-up period differed significantly between study and fellow eyes.
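The LogMAR changes above can be read in ETDRS letters via the widely used linear approximation (letters ≈ 85 − 50 × LogMAR). This conversion is a rule of thumb, not something the study itself reports:

```python
def logmar_to_etdrs_letters(logmar: float) -> float:
    """Approximate ETDRS letter score from a LogMAR value using the
    common linear conversion: letters ~= 85 - 50 * LogMAR."""
    return 85.0 - 50.0 * logmar

def delta_letters(delta_logmar: float) -> float:
    """Letter-score change implied by a LogMAR change; negative
    LogMAR change (improved acuity) yields letters gained."""
    return -50.0 * delta_logmar

# The study eyes' mean change of -0.09 LogMAR corresponds to roughly
# 4.5 ETDRS letters gained under this approximation
gain = delta_letters(-0.09)
```

Under the same approximation, the fellow eyes' +0.04 LogMAR change corresponds to about 2 letters lost, consistent with the direction of the reported ≥5/≥10/≥15 letter comparisons.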