
    Polycystic ovary syndrome is associated with adverse mental health and neurodevelopmental outcomes

    Context Polycystic ovary syndrome (PCOS) is characterized by hyperandrogenism and subfertility, but its effects on mental health and child neurodevelopment are unclear. Objectives To determine (1) whether there is an association between PCOS and psychiatric outcomes and (2) whether rates of autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) are higher in children of mothers with PCOS. Design Data were extracted from the Clinical Practice Research Datalink. Patients with PCOS were matched (1:1) to two control sets by age, body mass index, and primary care practice. Control set 2 was additionally matched on prior mental health status. Primary outcomes were the incidence of depression, anxiety, and bipolar disorder. Secondary outcomes were the prevalence of ADHD and ASD in the children. Results Eligible patients (n = 16,986) were identified; 16,938 and 16,355 were matched to control sets 1 and 2, respectively. Compared with control set 1, baseline prevalence was 23.1% vs 19.3% for depression, 11.5% vs 9.3% for anxiety, and 3.2% vs 1.5% for bipolar disorder (P < 0.001). The hazard ratios for time to each endpoint were 1.26 (95% confidence interval 1.19 to 1.32), 1.20 (1.11 to 1.29), and 1.21 (1.03 to 1.42) for set 1, and 1.38 (1.30 to 1.45), 1.39 (1.29 to 1.51), and 1.44 (1.21 to 1.71) for set 2. The odds ratios for ASD and ADHD in children were 1.54 (1.12 to 2.11) and 1.64 (1.16 to 2.33) for set 1, and 1.76 (1.27 to 2.46) and 1.34 (0.96 to 1.89) for set 2. Conclusions PCOS is associated with psychiatric morbidity and with an increased risk of ADHD and ASD in the children of affected mothers. Screening for mental health disorders should be considered during assessment.
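
    To make the matched time-to-event comparison concrete, the following is a minimal sketch, not the study's code, of estimating a hazard ratio for an incident psychiatric outcome with the Python lifelines library; the toy data and the column names (time_to_event, event, pcos) are assumptions for illustration.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy matched-cohort data: follow-up time in years, an indicator for
        # incident depression, and the PCOS exposure flag (illustrative only).
        df = pd.DataFrame({
            "time_to_event": [2.1, 5.0, 3.3, 7.2, 1.4, 6.8, 4.0, 2.9],
            "event":         [1,   0,   1,   0,   1,   0,   0,   1],
            "pcos":          [1,   1,   1,   1,   0,   0,   0,   0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time_to_event", event_col="event")
        # exp(coef) for `pcos` is the hazard ratio; the abstract reports
        # HR 1.26 (95% CI 1.19 to 1.32) for depression versus control set 1.
        print(cph.summary["exp(coef)"])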

    The model of mortality with incident cirrhosis (MoMIC) and the model of long-term outlook of mortality in cirrhosis (LOMiC)

    The purpose of this study was to produce two statistical survival models for people with cirrhosis utilising only routine parameters, including non-liver-related clinical factors that influence survival. The first model identified and utilised factors affecting short-term survival to 90 days after incident diagnosis; the second characterised factors affecting survival beyond this acute phase. Data were from the Clinical Practice Research Datalink linked with Hospital Episode Statistics. Incident cases in patients ≥18 years were identified between 1998 and 2014. Patients with a prior history of cancer or a previous liver transplant were excluded. Model-1 used logistic regression to predict mortality. Model-2 used data from patients who survived 90 days and applied an extension of the Cox regression model, adjusting for time-dependent covariables. At 90 days, 23% of patients had died. Overall median survival was 3.7 years. Model-1 incorporated numerous predictors, including prior comorbidities and decompensating events. All comorbidities contributed to increased odds of death, with renal disease having the largest adjusted odds ratio (OR = 3.35, 95%CI 2.97–3.77). Model-2 covariables included cumulative admissions for liver disease-related events and admissions for infections. Significant covariates were renal disease (adjusted hazard ratio (aHR) = 2.89, 2.47–3.38), elevated bilirubin levels (aHR = 1.38, 1.26–1.51) and low sodium levels (aHR = 2.26, 1.84–2.78). An internal validation demonstrated the reliability of both models. In conclusion, two survival models built from parameters commonly recorded in routine clinical practice reliably forecast the risk of death in patients with cirrhosis, both in the acute post-diagnosis phase and beyond this critical 90-day window. This has implications for practice, enabling the risk of mortality from cirrhosis to be forecast from routinely recorded parameters without specialist input.
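
    As a hedged illustration of the Model-2 approach, here is a minimal sketch of a Cox model with time-dependent covariables using lifelines' CoxTimeVaryingFitter. Follow-up is split into intervals in long format so covariables such as cumulative admissions can change over time; all column names and toy values are assumptions, not the published model.

        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        # One row per interval of follow-up per subject; `event` is 1 only
        # when death closes that interval (illustrative toy data).
        long_df = pd.DataFrame({
            "id":    [1, 1, 2, 2, 3, 3],
            "start": [0, 90, 0, 120, 0, 60],     # days after the acute phase
            "stop":  [90, 400, 120, 600, 60, 250],
            "cum_liver_admissions": [0, 1, 0, 1, 0, 2],
            "renal_disease":        [0, 0, 1, 1, 1, 1],
            "event": [0, 1, 0, 0, 0, 1],
        })

        ctv = CoxTimeVaryingFitter()
        ctv.fit(long_df, id_col="id", start_col="start",
                stop_col="stop", event_col="event")
        ctv.print_summary()  # exp(coef) column gives adjusted hazard ratios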

    Healthcare resource utilization and related financial costs associated with glucose lowering with either exenatide or basal insulin: a retrospective cohort study

    Aims Type 2 diabetes is a major health problem placing increasing demands on healthcare systems. Our objective was to estimate healthcare resource use and related financial costs following treatment with exenatide-based regimens prescribed as once-weekly (EQW) or twice-daily (EBID) formulations, compared with regimens based on basal insulin (BI). Materials and methods This retrospective cohort study used data from the UK Clinical Practice Research Datalink (CPRD) linked to Hospital Episode Statistics (HES). Patients with type 2 diabetes who received exenatide or BI between 2009 and 2014 as their first recorded exposure to injectable therapy were selected. Costs were attributed to primary care contacts, diabetes-related prescriptions and inpatient admissions using standard UK healthcare costing methods (2014 prices). Frequencies and costs were compared between cohorts, before and after matching by propensity score, using Poisson regression. Results Groups of 8723, 218 and 2180 patients receiving BI, EQW and EBID, respectively, were identified; 188 and 1486 patients receiving EQW and EBID, respectively, were matched 1:1 to patients receiving BI by propensity score. Among unmatched cohorts, total crude mean costs per patient-year were £2765 for EQW, £2549 for EBID and £4080 for BI. Compared with BI, the adjusted annual cost ratio (aACR) was 0.92 (95% CI, 0.91-0.92) for EQW and 0.82 (95% CI, 0.82-0.82) for EBID. Corresponding costs for the propensity-matched subgroups were £2646 vs £3283 (aACR, 0.80, 0.80-0.81) for EQW vs BI and £2532 vs £3070 (aACR, 0.84, 0.84-0.84) for EBID vs BI. Conclusion Overall, exenatide once-weekly and twice-daily regimens were associated with reduced healthcare resource use and costs compared with basal-insulin-based regimens.
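
    The two analytic steps named above, propensity-score matching followed by Poisson regression of costs, can be sketched as follows. This is an illustrative outline only: the data are synthetic, the matching is greedy nearest-neighbour (with possible reuse of controls, for brevity), and the column names (exenatide, annual_cost, etc.) are assumptions.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "exenatide": rng.integers(0, 2, n),   # 1 = exenatide, 0 = basal insulin
            "age":       rng.normal(60, 10, n),
            "hba1c":     rng.normal(8, 1, n),
        })
        df["annual_cost"] = rng.gamma(2.0, 1500.0, n).round()

        # Step 1: propensity of receiving exenatide, then nearest-neighbour
        # 1:1 matching on the estimated score.
        ps = sm.Logit(df["exenatide"], sm.add_constant(df[["age", "hba1c"]])).fit(disp=0)
        df["ps"] = ps.predict()
        treated  = df[df["exenatide"] == 1].sort_values("ps")
        controls = df[df["exenatide"] == 0].sort_values("ps")
        pairs = pd.merge_asof(treated, controls, on="ps",
                              direction="nearest", suffixes=("_t", "_c"))

        # Step 2: Poisson model of annual cost on treatment in the matched
        # sample; exp(coefficient) is the adjusted annual cost ratio (aACR).
        matched = pd.DataFrame({
            "cost": np.concatenate([pairs["annual_cost_t"], pairs["annual_cost_c"]]),
            "exenatide": np.repeat([1.0, 0.0], len(pairs)),
        })
        fit = sm.GLM(matched["cost"], sm.add_constant(matched["exenatide"]),
                     family=sm.families.Poisson()).fit()
        print(np.exp(fit.params["exenatide"]))  # cost ratio vs basal insulin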

    Non-Response to Antibiotic Treatment in Adolescents for Four Common Infections in UK Primary Care 1991-2012: A Retrospective, Longitudinal Study

    We studied non-response rates to antibiotics in the under-reported subgroup of adolescents aged 12 to 17 years, using standardised criteria representing antibiotic treatment failure. Routine primary care data from the UK Clinical Practice Research Datalink (CPRD) were used. Annual non-response rates by antibiotic and by indication were determined. We identified 824,651 monotherapy episodes in 415,468 adolescents: 368,900 (45%) were for upper respiratory tract infections (URTIs), 89,558 (11%) for lower respiratory tract infections (LRTIs), 286,969 (35%) for skin/soft tissue infections (SSTIs) and 79,224 (10%) for acute otitis media (AOM). The most frequently prescribed antibiotics were amoxicillin (27%), penicillin-V (24%), erythromycin (11%), flucloxacillin (11%) and oxytetracycline (6%). In 1991, the overall non-response rate was 9.3%: 11.9% for LRTIs, 9.5% for URTIs, 7.1% for SSTIs and 9.7% for AOM. In 2012, the overall non-response rate was 9.2%. Non-response rates were highest for AOM in 1991–1999 and for LRTIs in 2000–2012. Physicians generally prescribed antibiotics to adolescents according to recommendations. Evidence of antibiotic non-response was less common among adolescents during this 22-year study period than in an all-age population, where the overall non-response rate was 12%.
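
    A small sketch, under an assumed one-row-per-episode schema, of how annual non-response rates by indication could be tabulated; the non_response flag stands in for the standardised failure criteria the study applied.

        import pandas as pd

        # One row per antibiotic monotherapy episode (illustrative schema).
        episodes = pd.DataFrame({
            "year":         [1991, 1991, 1991, 2012, 2012, 2012],
            "indication":   ["URTI", "LRTI", "URTI", "SSTI", "AOM", "LRTI"],
            "non_response": [0, 1, 0, 0, 1, 0],   # 1 = met failure criteria
        })

        annual = (episodes
                  .groupby(["year", "indication"])["non_response"]
                  .mean()
                  .mul(100)
                  .round(1)
                  .rename("non_response_rate_%"))
        print(annual)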

    Risk of cardiovascular events, arrhythmia and all-cause mortality associated with clarithromycin versus alternative antibiotics prescribed for respiratory tract infections: a retrospective cohort study

    Objective: To determine whether treatment with clarithromycin for respiratory tract infections was associated with an increased risk of cardiovascular (CV) events, arrhythmias or all-cause mortality compared with other antibiotics. Design: Retrospective cohort design comparing clarithromycin monotherapy for lower (LRTI) or upper respiratory tract infection (URTI) with other antibiotic monotherapies for the same indication. Setting: Routine primary care data from the UK Clinical Practice Research Datalink and inpatient data from the Hospital Episode Statistics (HES). Participants: Patients aged ≥35 years prescribed antibiotic monotherapy for LRTI or URTI during 1998–2012 and eligible for data linkage to HES. Main outcome measures: The main outcome measure was the adjusted risk of a first-ever CV event within 37 days of initiation for commonly prescribed antibiotics compared with clarithromycin. Secondary measures were the adjusted 37-day risks of first-ever arrhythmia and all-cause mortality. Results: Of 700,689 treatments for LRTI eligible for the CV analysis, there were 2071 CV events (unadjusted event rate: 29.6 per 10,000 treatments). Of 691,998 eligible treatments for URTI, there were 688 CV events (9.9 per 10,000 treatments). In LRTI and URTI, there were no significant differences in CV risk between clarithromycin and all other antibiotics combined: OR = 1.00 (95% CI 0.82 to 1.22) and 0.82 (0.54 to 1.25), respectively. Adjusted CV risk in LRTI versus clarithromycin ranged from OR = 1.42 (cefalexin; 95% CI 1.08 to 1.86) to 0.92 (doxycycline; 0.64 to 1.32); in URTI, from 1.17 (co-amoxiclav; 0.68 to 2.01) to 0.67 (erythromycin; 0.40 to 1.11). Adjusted mortality risk versus clarithromycin in LRTI ranged from 0.42 to 1.32; in URTI, from 0.75 to 1.43. For arrhythmia, adjusted risks in LRTI ranged from 0.68 to 1.05; in URTI, from 0.70 to 1.22. Conclusions: CV events were more likely after LRTI than after URTI. When analysed by specific indication, the CV risk associated with clarithromycin was no different from that of other antibiotics.
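
    As a worked check of the crude rates quoted above (not the adjusted analysis), the events per 10,000 treatments follow directly from the reported counts:

        # Crude CV event rates per 10,000 treatments, from the reported counts.
        lrti_events, lrti_treatments = 2071, 700_689
        urti_events, urti_treatments = 688, 691_998

        print(round(lrti_events / lrti_treatments * 10_000, 1))  # -> 29.6
        print(round(urti_events / urti_treatments * 10_000, 1))  # -> 9.9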

    Effectiveness of the capsaicin 8% patch in the management of peripheral neuropathic pain in European clinical practice: the ASCEND study

    Background In randomised studies, the capsaicin 8% patch has demonstrated effective pain relief in patients with peripheral neuropathic pain (PNP) arising from different aetiologies. Methods ASCEND was an open-label, non-interventional study of patients with non-diabetes-related PNP who received capsaicin 8% patch treatment according to usual clinical practice and were followed for ≤52 weeks. Co-primary endpoints were the percentage change in the mean numeric pain rating scale (NPRS) ‘average daily pain’ score from baseline to the average of Weeks 2 and 8 following first treatment, and the median time from first to second treatment. The primary analysis was intended to assess analgesic equivalence between post-herpetic neuralgia (PHN) and other PNP aetiologies. Health-related quality of life (HRQoL, using EQ-5D), Patient Global Impression of Change (PGIC) and tolerability were also assessed. Results Following first application, patients experienced a 26.6% (95% CI: 23.6, 29.6; n = 412) reduction in mean NPRS score from baseline to Weeks 2 and 8. Equivalence was demonstrated between PHN and the neuropathic back pain, post-operative and post-traumatic neuropathic pain, and ‘other’ PNP aetiology subgroups. The median time from first to second treatment was 191 days (95% CI: 147, 235; n = 181). Forty-four percent of all patients were responders (≥30% reduction in NPRS score from baseline to Weeks 2 and 8) following first treatment, and 86.9% (n = 159/183) remained responders at Week 12. A sustained pain response was observed until Week 52, with a 37.0% (95% CI: 31.3, 42.7; n = 176) reduction in mean NPRS score from baseline. Patients with the shortest duration of pain (0–0.72 years) experienced the greatest pain response from baseline to Weeks 2 and 8. Mean EQ-5D index score improved by 0.199 utils (responders: 0.292 utils) from baseline to Week 2, and this improvement was maintained until Week 52. Most patients reported improvements in PGIC at Week 2 and at all follow-up assessments, regardless of the number of treatments received. Adverse events were primarily mild or moderate reversible application-site reactions. Conclusion In European clinical practice, the capsaicin 8% patch provided effective and sustained pain relief, substantially improved HRQoL and overall health status, and was generally well tolerated in a heterogeneous PNP population.
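
    The responder definition (≥30% reduction in mean NPRS score from baseline to the average of Weeks 2 and 8) can be made concrete with a short sketch; the scores below are invented for illustration.

        import pandas as pd

        # Illustrative NPRS 'average daily pain' scores (0-10 scale).
        scores = pd.DataFrame({
            "baseline": [8.0, 6.0, 7.0],
            "week2":    [5.0, 5.5, 3.0],
            "week8":    [4.0, 5.0, 3.5],
        })

        follow_up = scores[["week2", "week8"]].mean(axis=1)
        pct_reduction = (scores["baseline"] - follow_up) / scores["baseline"] * 100
        scores["responder"] = pct_reduction >= 30   # ASCEND responder criterion
        print(scores.assign(pct_reduction=pct_reduction.round(1)))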

    Evaluation of the cost-effectiveness of rifaximin-α for the management of patients with hepatic encephalopathy in the United Kingdom

    Objective: Rifaximin-α 550 mg twice daily plus lactulose has demonstrated efficacy in reducing the recurrence of episodes of overt hepatic encephalopathy (OHE) and the risk of HE-related hospitalisations compared with lactulose alone. This analysis estimated the cost-effectiveness of rifaximin-α 550 mg twice daily plus lactulose versus lactulose alone in UK cirrhotic patients with OHE. Method: A Markov model was built to estimate the incremental cost-effectiveness ratio (ICER). The perspective was that of the UK National Health Service (NHS). Clinical data were sourced from a randomised controlled trial (RCT) and an open-label maintenance (OLM) study in cirrhotic patients in remission from recurrent episodes of OHE. Health-related utility was estimated indirectly from disease-specific quality-of-life RCT data. Resource use data describing the impact of rifaximin-α on hospital admissions and length of stay for cirrhotic patients with OHE came from four single-centre UK audits. Costs (2012 prices) were derived from published sources; costs and benefits were discounted at 3.5%. The base-case time horizon was five years. Results: The average cost per patient was £22,971 in the rifaximin-α plus lactulose arm and £23,545 in the lactulose arm, a saving of £573. The corresponding benefits were 2.35 and 1.83 QALYs per person, a difference of 0.52 QALYs. This translated into a dominant base-case ICER. Key parameters that impacted the ICER included the number of hospital admissions and length of stay. Conclusion: Rifaximin-α 550 mg twice daily in patients with recurrent episodes of overt HE was estimated to generate cost savings and improved clinical outcomes compared with standard care over five years.
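
    Using only the figures reported above, the incremental comparison can be checked by hand: rifaximin-α plus lactulose costs less and yields more QALYs, so it dominates the comparator and no positive ICER is defined. A minimal sketch:

        # Worked check of the base-case result from the reported figures.
        def icer(cost_new, cost_comp, qaly_new, qaly_comp):
            """Incremental cost-effectiveness ratio: delta-cost / delta-QALY."""
            d_cost, d_qaly = cost_new - cost_comp, qaly_new - qaly_comp
            if d_cost < 0 and d_qaly > 0:
                return "dominant (less costly, more effective)"
            return d_cost / d_qaly

        # Rifaximin-a plus lactulose vs lactulose alone over five years:
        # GBP 22,971 vs 23,545 per patient, and 2.35 vs 1.83 QALYs (+0.52).
        print(icer(22_971, 23_545, 2.35, 1.83))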

    Characterization and Associated Costs of Constipation Relating to Exposure to Strong Opioids in England: An Observational Study

    Purpose: Opioid use is associated with gastrointestinal adverse events, including nausea and constipation. We used a real-world dataset to characterize the health care burden associated with opioid-induced constipation (OIC), with particular emphasis on strong opioids. Methods: This retrospective cohort study was conducted using the Clinical Practice Research Datalink, a large UK primary care dataset linked to hospital data. Patients prescribed opioids during 2016 were selected and episodes of opioid therapy constructed. Episodes with ≥84 days of exposure were classified as chronic, with the date of first prescription as the index date. The main analysis focused on patients prescribed strong opioids who were laxative naive. Constipation was defined by ≥2 laxative prescriptions during the opioid episode. Patients for whom initial laxative therapy was escalated by switch, augmentation, or dose were defined as OIC unstable, and the first 3 lines of OIC escalation were classified. Health care costs accrued in the first 12 months of the opioid episode were aggregated and compared. Findings: A total of 27,629 opioid episodes were identified; 5916 (21.4%) involved a strong opioid for patients who were previously laxative naive. Of these patients, 2886 (48.8%) were defined as the OIC population; 941 (32.6%) were classified as stable. Of the 1945 (67.4%) episodes classified as unstable, 849 (43.7%), 360 (18.5%), and 736 (37.8%) had 1, 2, and ≥3 changes of laxative prescription, respectively. Patients without OIC had lower costs per patient-year (£3822 [US$5160/€4242]) compared with OIC (£4786 [US$6461/€5312]). Costs increased as patients had multiple changes in therapy: £4696 (US$6340/€5213), £4749 (US$6411/€5271), and £4981 (US$6724/€5529) for 1, 2, and ≥3 changes, respectively. The adjusted cost ratio relative to non-OIC was 1.14 (95% CI, 1.09–1.32) for those classified as stable and 1.19 (95% CI, 1.09–1.32) for those with ≥3 laxative changes. Similar patterns were observed for patients taking any opioid, with costs increased for those classified as having OIC (£3727 [US$5031/€4137] vs £2379 [US$3212/€2641]) and for those classified as unstable versus stable (£3931 [US$5307/€4363] vs £3432 [US$4633/€3810]). Costs increased with each additional line of therapy, from £3701 (US$4996/€4108) to £3916 (US$5287/€4347) and £4318 (US$5829/€4793). Implications: OIC was a common adverse event of opioid treatment and was poorly controlled in a large number of patients. Poor control was associated with increased health care costs. The impact of OIC should be considered when prescribing opioids. These results should be interpreted with consideration of the caveats associated with the analysis of routine data.
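
    A sketch of the episode classification rules described above, under an assumed one-row-per-episode schema: ≥84 days of exposure marks a chronic episode and ≥2 laxative prescriptions during the episode marks OIC. The thresholds come from the abstract; everything else is illustrative.

        import pandas as pd

        # Illustrative opioid-therapy episodes.
        episodes = pd.DataFrame({
            "episode_id":    [1, 2, 3, 4],
            "exposure_days": [30, 84, 120, 200],
            "laxative_rx":   [0, 2, 1, 3],  # laxative scripts within the episode
        })

        episodes["chronic"] = episodes["exposure_days"] >= 84   # study threshold
        episodes["oic"]     = episodes["laxative_rx"] >= 2      # study definition
        print(episodes)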

    Hospital admissions for severe infections in people with chronic kidney disease in relation to renal disease severity and diabetes status

    Background: Immunosuppressive agents are being investigated for the treatment of chronic kidney disease (CKD) but may increase the risk of infection. This retrospective observational study evaluated the risk of hospitalized infection in patients with CKD by estimated glomerular filtration rate (eGFR) and proteinuria status, aiming to identify the most appropriate disease stage for immunosuppressive intervention. Methods: Routine UK primary-care and linked secondary-care data were extracted from the Clinical Practice Research Datalink. Patients with a record of CKD were identified and grouped into type 2 diabetes, type 1 diabetes and nondiabetes cohorts. Time-dependent Cox proportional hazards models were used to determine the likelihood of hospitalized infection. Results: We identified 97,839 patients with a record of CKD, of whom 11,719 (12%) had type 2 diabetes. In these patients, the adjusted hazard ratios (aHRs) were 1.00 (95% CI: 0.80-1.25), 1.00, 1.03 (95% CI: 0.92-1.15), 1.36 (95% CI: 1.20-1.54), 1.82 (95% CI: 1.54-2.15) and 2.41 (95% CI: 1.60-3.63) at eGFR stages G1, G2 (reference), G3a, G3b, G4 and G5, respectively; and 1.00, 1.45 (95% CI: 1.29-1.63) and 1.91 (95% CI: 1.67-2.20) at proteinuria stages A1 (reference), A2 and A3, respectively. All aHRs (except G1 and G3a) were significant, with similar patterns observed in the nondiabetes and overall cohorts. Conclusions: eGFR and degree of albuminuria were independent markers of hospitalized infection in patients both with and without diabetes. The same patterns of hazard ratios were seen in CKD patients regardless of diabetes status, with risk increasing as eGFR decreased and proteinuria increased. In type 2 diabetes, infection risk increased significantly from eGFR stage G3b and proteinuria stage A2. Immunosuppressive therapy may therefore offer a favourable risk-benefit ratio at earlier disease stages (G1-G3a in type 2 diabetes; G1-G2 in the nondiabetes and overall cohorts), although the degree of proteinuria needs to be considered.
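
    The stage-wise aHRs above come from treating eGFR stage as a categorical covariate with G2 as the reference. A hedged sketch of that coding, using synthetic data, lifelines and hand-built dummy variables rather than the study's actual model, is:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 300
        df = pd.DataFrame({
            "stage": rng.choice(["G1", "G2", "G3a", "G3b", "G4", "G5"], n),
            "time":  rng.exponential(3.0, n),   # years to infection/censoring
            "event": rng.integers(0, 2, n),     # 1 = hospitalized infection
        })

        # Dummy-code eGFR stage, dropping G2 so each exp(coef) is an
        # adjusted hazard ratio versus the G2 reference, as in the abstract.
        dummies = pd.get_dummies(df["stage"]).drop(columns="G2").astype(float)
        model_df = pd.concat([df[["time", "event"]], dummies], axis=1)

        cph = CoxPHFitter()
        cph.fit(model_df, duration_col="time", event_col="event")
        cph.print_summary()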
