
    Curator – a data curation tool for clinical real-world evidence

    Objective This research aims to establish an efficient, systematic, reproducible, and transparent solution for advanced curation of real-world data, which are highly complex and represent an invaluable source of information for academia and industry. Materials and methods We propose a novel software solution that splits the statistical analytical pipeline into two phases. The first phase is implemented through Curator, which performs data engineering and data modelling on deidentified real-world data to achieve advanced curation and provides selected information ready to be analysed in the second phase by statistical packages. Curator comprises a suite of Python programs and uses MySQL as its database management system. Curator has been utilised with several UK primary and secondary care data sources. Results Curator has been used in 25 completed clinical and health economics research studies. Their output has been published in 2 NIHR-funded reports and 33 prestigious international peer-reviewed journals and presented at 38 global conferences. Curator has consistently reduced research time and costs by over 36% and made research more reproducible and transparent. Discussion Curator fits in well with recent UK governmental guidelines that recognise health data curation as a complex standalone technical challenge. Curator has been used extensively on UK real-world data and can handle several linked datasets. However, for Curator to be accessed by a wider audience, it needs to become more user-friendly. Conclusion Curator has proven to be a cost-effective and trustworthy data curation tool, which should be developed further and made available to third parties.
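    The two-phase split described above can be sketched in miniature. The following is a hypothetical illustration only, using SQLite in place of Curator's MySQL back end; the schema, table names, and records are invented and do not reflect Curator's actual design:

```python
import sqlite3

# Phase 1 (curation): engineer deidentified raw records into an
# analysis-ready table. SQLite stands in for MySQL here; the schema
# and field names are illustrative, not Curator's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (patient_id TEXT, code TEXT, event_date TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("p1", "E11", "2020-01-05"), ("p1", "E11", "2020-03-02"), ("p2", "I10", "2021-06-10")],
)
conn.execute(
    """CREATE TABLE curated AS
       SELECT patient_id, code, MIN(event_date) AS first_event, COUNT(*) AS n_events
       FROM raw_events GROUP BY patient_id, code"""
)

# Phase 2 (analysis): export only the selected, curated information for
# statistical packages to consume.
rows = conn.execute("SELECT * FROM curated ORDER BY patient_id").fetchall()
print(rows)
```

    The point of the split is that the statistical phase never touches the raw records, which is what makes the pipeline reproducible and auditable.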

    Machine learning for risk factor identification and cardiovascular mortality prediction among patients with osteoporosis

    Risk prediction tools are increasingly popular aids in clinical decision-making. However, the underlying models are often trained on data from general patient cohorts and may not be representative of, or suitable for use with, targeted patient groups in actual clinical practice, such as osteoporosis patients, who may be at elevated risk of mortality. We developed and internally validated a cardiovascular mortality risk prediction model tailored to individuals with osteoporosis using a range of machine learning models. We compared the performance of machine learning models with existing expert-based models with respect to data-driven risk factor identification, discrimination, and calibration. The proposed models were found to outperform existing cardiovascular mortality risk prediction tools for the osteoporosis population. External validation of the model is recommended.
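    The two comparison criteria named above, discrimination and calibration, can be computed directly from predicted risks and observed outcomes. This is a generic sketch on toy data, not the study's models:

```python
def c_statistic(y_true, y_score):
    """Discrimination: probability that a randomly chosen case is ranked
    above a randomly chosen non-case (ties count half) - the usual AUC."""
    pairs = concordant = 0.0
    for yi, si in zip(y_true, y_score):
        for yj, sj in zip(y_true, y_score):
            if yi == 1 and yj == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

def calibration_in_the_large(y_true, y_score):
    """Calibration: mean observed risk minus mean predicted risk (0 is ideal)."""
    n = len(y_true)
    return sum(y_true) / n - sum(y_score) / n

y = [1, 1, 0, 0, 0]
p = [0.8, 0.6, 0.5, 0.2, 0.1]
print(c_statistic(y, p))              # 1.0 here: every case outranks every non-case
print(calibration_in_the_large(y, p))
```

    A model can discriminate well yet be miscalibrated (systematically over- or under-predicting risk), which is why the study reports both.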

    Applying Trial-Derived Treatment Effects to Real-World Populations: Generalizing Cost-Effectiveness Estimates When Modeling Complex Hazards

    Objectives: Generalizability of trial-based cost-effectiveness estimates to real-world target populations is important for decision making. In the context of independent aggregate time-to-event baseline and relative effects data, complex hazards can make modeling of data for use in economic evaluation challenging. Our article provides an overview of methods that can be used to apply trial-derived relative treatment effects to external real-world baselines when faced with complex hazards and follows with a motivating example. Methods: Approaches for applying trial-derived relative effects to real-world baselines are presented in the context of complex hazards. Appropriate methods are applied in a cost-effectiveness analysis using data from a previously published study assessing the real-world cost-effectiveness of a treatment for carcinoma of the head and neck as a motivating example. Results: Lack of common hazards between the trial and target real-world population, a complex baseline hazard function, and nonproportional relative effects made the use of flexible models necessary to adequately estimate survival. Assuming common distributions between trial and real-world reference survival substantially affected survival and cost-effectiveness estimates. Modeling time-dependent vs proportional relative effects affected estimates to a lesser extent, dependent on assumptions used in cost-effectiveness modeling. Conclusions: Appropriately capturing reference treatment survival when attempting to generalize trial-derived relative treatment effects to real-world target populations can have important impacts on cost-effectiveness estimates. A balance between model complexity and adequacy for decision making should be considered where multiple data sources with complex hazards are being evaluated.
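    The core operation the article examines, applying a trial-derived relative treatment effect to an external real-world baseline, can be sketched discretely. The baseline hazards and the time-varying hazard ratio below are invented for illustration:

```python
import math

# Real-world baseline (reference-arm) hazard per interval and a
# trial-derived, time-dependent hazard ratio (both hypothetical).
baseline_hazard = [0.10, 0.12, 0.15, 0.20]   # h0(t) per interval
hazard_ratio    = [0.50, 0.60, 0.80, 1.00]   # HR(t): waning, nonproportional

def survival(hazards):
    """S(t) = exp(-cumulative hazard), piecewise-constant intervals."""
    cum, out = 0.0, []
    for h in hazards:
        cum += h
        out.append(math.exp(-cum))
    return out

s_reference = survival(baseline_hazard)
s_treated = survival([h * r for h, r in zip(baseline_hazard, hazard_ratio)])

# The incremental survival feeds mean-survival / QALY calculations.
print([round(t - r, 3) for t, r in zip(s_treated, s_reference)])
```

    With nonproportional effects, as in the motivating example, HR(t) varies over time, so the proportional-hazards shortcut of raising reference survival to a constant power is not valid; accumulating the rescaled hazard interval by interval avoids that assumption.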

    The impact of the UK COVID-19 lockdown on the screening, diagnostics and incidence of breast, colorectal, lung and prostate cancer in the UK: a population-based cohort study

    Introduction: The COVID-19 pandemic had collateral effects on many health systems. Cancer screening and diagnostic tests were postponed, resulting in delays in diagnosis and treatment. This study assessed the impact of the pandemic on screening, diagnostics and incidence of breast, colorectal, lung, and prostate cancer; and whether rates returned to pre-pandemic levels by December, 2021. Methods: This is a cohort study of electronic health records from the United Kingdom (UK) primary care Clinical Practice Research Datalink (CPRD) GOLD database. The study included individuals registered with CPRD GOLD between January, 2017 and December, 2021, with at least 365 days of clinical history. The study focused on screening, diagnostic tests, referrals and diagnoses of first-ever breast, colorectal, lung, and prostate cancer. Incidence rates (IR) were stratified by age, sex, and region, and incidence rate ratios (IRR) were calculated to compare rates during and after lockdown with rates before lockdown. Forecasted rates were estimated using negative binomial regression models. Results: Among 5,191,650 eligible participants, the first lockdown resulted in reduced screening and diagnostic tests for all cancers, which remained dramatically reduced across the whole observation period for almost all tests investigated. There were significant IRR reductions in breast (0.69 [95% CI: 0.63-0.74]), colorectal (0.74 [95% CI: 0.67-0.81]), and prostate (0.71 [95% CI: 0.66-0.78]) cancer diagnoses. IRR reductions for lung cancer were non-significant (0.92 [95% CI: 0.84-1.01]). Extrapolating to the entire UK population, an estimated 18,000 breast, 13,000 colorectal, 10,000 lung, and 21,000 prostate cancer diagnoses were missed from March, 2020 to December, 2021. Discussion: The UK COVID-19 lockdown had a substantial impact on cancer screening, diagnostic tests, referrals, and diagnoses. 
Incidence rates remained significantly lower than pre-pandemic levels for breast and prostate cancers and associated tests by December, 2021. Delays in diagnosis are likely to have adverse consequences on cancer stage, treatment initiation, mortality rates, and years of life lost. Urgent strategies are needed to identify undiagnosed cases and address the long-term implications of delayed diagnoses.
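    The missed-diagnosis figures above come from comparing observed counts against counts forecast from pre-pandemic data, and the period comparisons are expressed as incidence rate ratios. The arithmetic can be sketched as follows; the counts are invented, and a simple pre-pandemic mean stands in for the study's negative binomial regression models:

```python
import math

# Hypothetical monthly diagnosis counts: a stable pre-pandemic history
# and reduced observed counts after a lockdown.
history = [100, 102, 98, 101, 99, 100]
observed = [60, 70, 80, 85]

# Forecast each pandemic month as the pre-pandemic mean - a stand-in
# for the negative binomial forecasting used in the study.
expected = sum(history) / len(history)
missed = sum(expected - o for o in observed)

def irr_ci(events_a, time_a, events_b, time_b, z=1.96):
    """Incidence rate ratio with a Wald CI on the log scale."""
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

print(round(missed))                              # diagnoses below expectation
print(irr_ci(sum(observed), 4, sum(history), 6))  # during vs before, per month
```

    Scaling the per-database shortfall by the ratio of the UK population to the database population is how the study extrapolates to national "missed diagnoses" estimates.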

    Long-term risk of psychiatric disorder and psychotropic prescription after SARS-CoV-2 infection among UK general population.

    Despite evidence indicating increased risk of psychiatric issues among COVID-19 survivors, questions persist about long-term mental health outcomes and the protective effect of vaccination. Using UK Biobank data, three cohorts were constructed: SARS-CoV-2 infection (n = 26,101), contemporary control with no evidence of infection (n = 380,337) and historical control predating the pandemic (n = 390,621). Compared with contemporary controls, infected participants had higher subsequent risks of incident mental health conditions at 1 year (hazard ratio (HR): 1.54, 95% CI 1.42-1.67; P = 1.70 × 10⁻²⁴; difference in incidence rate: 27.36, 95% CI 21.16-34.10 per 1,000 person-years), including psychotic, mood, anxiety, alcohol use and sleep disorders, and prescriptions for antipsychotics, antidepressants, benzodiazepines, mood stabilizers and opioids. Risks were higher for hospitalized individuals (2.17, 1.70-2.78; P = 5.80 × 10⁻¹⁰) than those not hospitalized (1.41, 1.30-1.53; P = 1.46 × 10⁻¹⁶), and were reduced in fully vaccinated people (0.97, 0.80-1.19; P = 0.799) compared with non-vaccinated or partially vaccinated individuals (1.64, 1.49-1.79; P = 4.95 × 10⁻²⁶). Breakthrough infections showed similar risk of psychiatric diagnosis (0.91, 0.78-1.07; P = 0.278) but increased prescription risk (1.42, 1.00-2.02; P = 0.053) compared with uninfected controls. Early identification and treatment of psychiatric disorders in COVID-19 survivors, especially those severely affected or unvaccinated, should be a priority in the management of long COVID. With the accumulation of breakthrough infections in the post-pandemic era, the findings highlight the need for continued optimization of strategies to foster resilience and prevent escalation of subclinical mental health symptoms to severe disorders.

    Risk of hip, subtrochanteric, and femoral shaft fractures among mid and long term users of alendronate: nationwide cohort and nested case-control study

    Objectives To determine the skeletal safety and efficacy of long term (≥10 years) alendronate use in patients with osteoporosis. Design Open register based cohort study containing two nested case control studies. Setting Nationwide study of population of Denmark. Participants 61 990 men and women aged 50-94 at the start of treatment, who had not previously taken alendronate, 1996-2007. Interventions Treatment with alendronate. Main outcome measures Incident fracture of the subtrochanteric femur or femoral shaft (ST/FS) or the hip. Non-fracture controls from the cohort were matched to fracture cases by sex, year of birth, and year of initiation of alendronate treatment. Conditional logistic regression models were fitted to calculate odds ratios with and without adjustment for comorbidity and comedications. Sensitivity analyses investigated subsequent treatment with other drugs for osteoporosis. Results 1428 participants sustained a ST/FS (incidence rate 3.4/1000 person years, 95% confidence interval 3.2 to 3.6), and 6784 sustained a hip fracture (16.2/1000 person years, 15.8 to 16.6). The risk of ST/FS was lower with high adherence to treatment with alendronate (medication possession ratio (MPR, a proxy for compliance) >80%) than with poor adherence. An MPR >80% was also associated with a decreased risk of hip fracture (0.73, 0.68 to 0.78; P<0.001), as was longer term cumulative use for 5-10 dose years (0.74, 0.67 to 0.83; P<0.001) or ≥10 dose years (0.74, 0.56 to 0.97; P=0.03). Conclusions These findings support an acceptable balance between benefit and risk with treatment with alendronate in terms of fracture outcomes, even for over 10 years of continuous use.
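    The medication possession ratio used above as the adherence proxy is simply the days of drug supplied divided by the days observed. A minimal sketch, with invented prescription records:

```python
from datetime import date

def mpr(prescriptions, start, end):
    """Medication possession ratio: days' supply dispensed within the
    observation window divided by the window length, capped at 1."""
    window = (end - start).days
    supplied = sum(days for issued, days in prescriptions if start <= issued <= end)
    return min(supplied / window, 1.0)

# Four hypothetical 90-day alendronate prescriptions over one year.
rx = [(date(2020, 1, 1), 90), (date(2020, 4, 1), 90),
      (date(2020, 7, 1), 90), (date(2020, 10, 1), 90)]
adherence = mpr(rx, date(2020, 1, 1), date(2020, 12, 31))
print(adherence > 0.8)  # above the >80% high-adherence threshold
```

    Conditional logistic regression then compares adherence categories between matched fracture cases and non-fracture controls, as in the nested case-control design described above.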

    Random effects modelling versus logistic regression for the inclusion of cluster-level covariates in propensity score estimation: a Monte Carlo simulation and registry cohort analysis

    Purpose: Surgeon and hospital-related features, such as volume, can be associated with treatment choices and outcomes. Accounting for these covariates with propensity score (PS) analysis can be challenging due to the clustered nature of the data. We studied six different PS estimation strategies for clustered data using random effects modelling (REM) compared with logistic regression. Methods: Monte Carlo simulations were used to generate variable cluster-level confounding intensity [odds ratio (OR) = 1.01–2.5] and cluster size (20–1,000 patients per cluster). The following PS estimation strategies were compared: i) logistic regression omitting cluster-level confounders; ii) logistic regression including cluster-level confounders; iii) the same as ii) but including cross-level interactions; iv), v), and vi), similar to i), ii), and iii), respectively, but using REM instead of logistic regression. The same strategies were tested in a trial emulation of partial versus total knee replacement (TKR) surgery, where observational versus trial-based estimates were compared as a proxy for bias. Performance metrics included bias and mean square error (MSE). Results: In most simulated scenarios, logistic regression including cluster-level confounders led to the lowest bias and MSE, for example, with 50 clusters × 200 individuals and confounding intensity OR = 1.5, a relative bias of 10%, and MSE of 0.003 for (i) compared to 32% and 0.010 for (iv). The trial emulation results showed similar trends. Conclusion: Logistic regression, including patient and surgeon-/hospital-level confounders, appears to be the preferred strategy for PS estimation.
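    Strategy ii) above, a single logistic regression whose covariates include the cluster-level confounder, can be sketched on toy clustered data. The data-generating values and the plain gradient-ascent fit below are illustrative stand-ins, not the study's simulation code:

```python
import math, random

random.seed(1)

# Toy clustered data: each patient has an individual covariate x and a
# cluster-level covariate v (e.g. hospital volume) shared within a cluster.
# Treatment assignment depends on both, so both confound the PS.
data = []
for cluster in range(10):
    v = random.gauss(0, 1)                      # cluster-level confounder
    for _ in range(50):
        x = random.gauss(0, 1)                  # patient-level covariate
        logit = 0.5 * x + 0.8 * v
        t = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
        data.append((x, v, t))

# Propensity model P(T=1 | x, v) fitted by gradient ascent on the
# log-likelihood, with the cluster-level covariate included (strategy ii).
b0 = bx = bv = 0.0
n = len(data)
for _ in range(1000):
    g0 = gx = gv = 0.0
    for x, v, t in data:
        p = 1 / (1 + math.exp(-(b0 + bx * x + bv * v)))
        r = t - p
        g0 += r; gx += r * x; gv += r * v
    b0 += 0.5 * g0 / n; bx += 0.5 * gx / n; bv += 0.5 * gv / n

ps = [1 / (1 + math.exp(-(b0 + bx * x + bv * v))) for x, v, _ in data]
print(bx, bv)  # fitted coefficients, roughly the true 0.5 and 0.8
```

    Dropping v from the model (strategy i) would leave the cluster-level confounding uncontrolled, which is the failure mode the simulations quantify.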

    Risk of adverse events following the initiation of antihypertensives in older people with complex health needs: a self-controlled case series in the United Kingdom

    BACKGROUND: We assessed the risk of adverse events (severe acute kidney injury (AKI), falls, and fractures) associated with use of antihypertensives in older patients with complex health needs (CHN). SETTING: UK primary care linked to inpatient and mortality records. METHODS: The source population comprised patients aged >65, with ≥1 year of registration and unexposed to antihypertensives in the year before study start. We identified three cohorts of patients with CHN, namely, unplanned hospitalisations, frailty (electronic frailty index deficit count ≥3) and polypharmacy (prescription of ≥10 medicines). Patients in any of these cohorts were included in the CHN cohort. We conducted self-controlled case series for each cohort and outcome (AKI, falls, fractures). Incidence rate ratios (IRRs) were estimated by dividing event rates (i) during overall antihypertensive exposed patient-time over unexposed patient-time; and (ii) in the first 30 days after treatment initiation over unexposed patient-time. RESULTS: Among 42,483 patients in the CHN cohort, 7,240, 5,164 and 450 individuals had falls, fractures or AKI, respectively. We observed an increased risk for AKI associated with exposure to antihypertensives across all cohorts (CHN: IRR 2.36 [95% CI: 1.68-3.31]). In the 30 days post-antihypertensive treatment initiation, a 35-50% increased risk for falls was found across all cohorts, and an increased fracture risk in the frailty cohort (IRR 1.38 [1.03-1.84]). No increased risk for falls/fractures was associated with continuation of antihypertensive treatment or overall use. CONCLUSION: Treatment with antihypertensives in older patients was associated with increased risk of AKI and transiently elevated risk of falls in the 30 days after starting antihypertensive therapy.
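    The self-controlled case series design above compares event rates across exposure windows within the same individuals. As a simplification (the study fits conditional models within person), the pooled rate comparison for a 30-day post-initiation risk window can be sketched with invented aggregate numbers:

```python
import math

# Aggregated person-time and events (hypothetical): event rates in the
# pooled 30-day post-initiation risk windows vs pooled unexposed time.
risk_events, risk_days = 13, 9_000
unexp_events, unexp_days = 100, 100_000

irr = (risk_events / risk_days) / (unexp_events / unexp_days)
se = math.sqrt(1 / risk_events + 1 / unexp_events)  # Wald SE on log(IRR)
lo, hi = irr * math.exp(-1.96 * se), irr * math.exp(1.96 * se)
print(round(irr, 2), round(lo, 2), round(hi, 2))
```

    Because each patient contributes both exposed and unexposed time, time-fixed confounders (frailty, comorbidity) cancel out within person, which is the design's appeal for these complex-needs cohorts.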