
    Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists' views

    Background: Evidence from previous studies is often used relatively informally in the design of clinical trials: for example, a systematic review to indicate whether a gap in the current evidence base justifies a new trial. External evidence can be used more formally in both trial design and analysis, by explicitly incorporating a synthesis of it in a Bayesian framework. However, it is unclear how common this is in practice or the extent to which it is considered controversial. In this qualitative study, we explored attitudes towards, and experiences of, trialists in incorporating synthesised external evidence through the Bayesian design or analysis of a trial. Methods: Semi-structured interviews were conducted with 16 trialists: 13 statisticians and three clinicians. Participants were recruited across several universities and trials units in the United Kingdom using snowball and purposeful sampling. Data were analysed using thematic analysis and techniques of constant comparison. Results: Trialists used existing evidence in many ways in trial design, for example, to justify a gap in the evidence base and inform parameters in sample size calculations. However, no one in our sample reported using such evidence in a Bayesian framework. Participants tended to equate Bayesian analysis with the incorporation of prior information on the intervention effect and were less aware of the potential to incorporate data on other parameters. When introduced to the concepts, many trialists felt they could be making more use of existing data to inform the design and analysis of a trial in particular scenarios. For example, some felt existing data could be used more formally to inform background adverse event rates, rather than relying on clinical opinion as to whether there are potential safety concerns. However, several barriers to implementing these methods in practice were identified, including concerns about the relevance of external data, acceptability of Bayesian methods, lack of confidence in Bayesian methods and software, and practical issues, such as difficulties accessing relevant data. Conclusions: Despite trialists recognising that more formal use of external evidence could be advantageous over current approaches in some areas and useful as sensitivity analyses, there are still barriers to such use in practice.
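
    To make concrete what "incorporating prior information on the intervention effect" can look like in practice, the sketch below (not taken from the study; all numbers are hypothetical) shows a conjugate normal–normal update in Python, in which a meta-analytic summary of external trials acts as the prior for a new trial's estimated treatment effect.

        # Minimal sketch: combining an informative prior from synthesised external
        # evidence with a new trial's estimate via a conjugate normal-normal update.
        def normal_posterior(prior_mean, prior_sd, trial_mean, trial_se):
            """Posterior mean and SD for a treatment effect, given a normal prior
            (e.g. from a meta-analysis of earlier trials) and a normal likelihood
            summarised by the new trial's estimate and standard error."""
            prior_prec = 1.0 / prior_sd ** 2
            trial_prec = 1.0 / trial_se ** 2
            post_var = 1.0 / (prior_prec + trial_prec)
            post_mean = post_var * (prior_prec * prior_mean + trial_prec * trial_mean)
            return post_mean, post_var ** 0.5

        # Hypothetical numbers: meta-analytic prior log hazard ratio -0.10 (SD 0.15),
        # new trial estimate -0.25 (SE 0.20).
        print(normal_posterior(-0.10, 0.15, -0.25, 0.20))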

    Meta-analysis of diagnostic accuracy studies with multiple thresholds – comparison of different approaches

    Methods for standard meta-analysis of diagnostic test accuracy studies are well established and understood. For the more complex case in which studies report test accuracy across multiple thresholds, several approaches have recently been proposed. These are based on similar ideas, but make different assumptions. In this article, we apply four different approaches to data from a recent systematic review in the area of nephrology and compare the results. The four approaches use: a linear mixed effects model, a Bayesian multinomial random effects model, a time-to-event model and a nonparametric model, respectively. In the case study data, the accuracy of neutrophil gelatinase-associated lipocalin for the diagnosis of acute kidney injury was assessed in different scenarios, with sensitivity and specificity estimates available for three thresholds in each primary study. All approaches led to plausible and mostly similar summary results. However, we found considerable differences in results for some scenarios, for example, differences in the area under the receiver operating characteristic curve (AUC) of up to 0.13. The Bayesian approach tended to lead to the highest values of the AUC, and the nonparametric approach tended to produce the lowest values across the different scenarios. Though we recommend using these approaches, our findings motivate the need for a simulation study to explore optimal choice of method in various scenarios.
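
    As a rough illustration of the summary quantity on which the four approaches were seen to differ (by up to 0.13), the sketch below joins per-threshold summary sensitivity/specificity pairs into an empirical ROC curve and computes its AUC with the trapezoidal rule. The numbers are hypothetical and the calculation is generic, not one of the four models compared in the article.

        # Illustrative only: empirical summary ROC curve and AUC from three hypothetical
        # threshold-specific (sensitivity, specificity) summaries.
        import numpy as np

        sens = np.array([0.92, 0.81, 0.65])   # hypothetical summary sensitivities
        spec = np.array([0.55, 0.74, 0.88])   # hypothetical summary specificities

        # ROC space: x = 1 - specificity, y = sensitivity, anchored at (0, 0) and (1, 1).
        # Sorting both ascending keeps the pairs aligned because sensitivity falls as
        # specificity rises across thresholds.
        fpr = np.concatenate(([0.0], np.sort(1.0 - spec), [1.0]))
        tpr = np.concatenate(([0.0], np.sort(sens), [1.0]))

        # Trapezoidal approximation to the area under the curve.
        auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
        print(f"Empirical AUC from the pooled threshold-specific points: {auc:.3f}")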

    Problem drug use prevalence estimation revisited: heterogeneity in capture-recapture and the role of external evidence

    BACKGROUND AND AIMS: Capture–recapture (CRC) analysis is recommended for estimating the prevalence of problem drug use or people who inject drugs (PWID). We aim to demonstrate how naive application of CRC can lead to highly misleading results, and to suggest how the problems might be overcome. METHODS: We present a case study of estimating the prevalence of PWID in Bristol, UK, applying CRC to lists in contact with three services. We assess: (i) sensitivity of results to different versions of the dominant (treatment) list: specifically, to inclusion of non-incident cases and of those who were referred directly from one of the other services; (ii) the impact of accounting for a novel covariate, housing instability; and (iii) consistency of CRC estimates with drug-related mortality data. We then incorporate formally the drug-related mortality data and lower bounds for prevalence alongside the CRC into a single coherent model. RESULTS: Five of 11 models fitted the full data equally well but generated widely varying prevalence estimates, from 2740 [95% confidence interval (CI) = 2670, 2840] to 6890 (95% CI = 3740, 17680). Results were highly sensitive to inclusion of non-incident cases, demonstrating the presence of considerable heterogeneity, and were sensitive to a lesser extent to inclusion of direct referrals. A reduced data set including only incident cases and excluding referrals could be fitted by simpler models, and led to much greater consistency in estimates. Accounting for housing stability improved model fit considerably more than did the standard covariates of age and gender. External data provided validation of results and aided model selection, generating a final estimate of the number of PWID in Bristol in 2011 of 2770 [95% credible interval (Cr-I) = 2570, 3110] or 0.9% (95% Cr-I = 0.9, 1.0%) of the population aged 15–64 years. CONCLUSIONS: Steps can be taken to reduce bias in capture–recapture analysis, including: careful consideration of data sources, reduction of lists to less heterogeneous subsamples, use of covariates and formal incorporation of external data.
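
    The role of drug-related mortality data as external validation can be illustrated with a back-of-envelope check (our hypothetical numbers, not the Bristol data): observed drug-related deaths divided by a plausible annual mortality risk among PWID imply a prevalence with which the capture–recapture estimates should be broadly consistent.

        # Hypothetical consistency check of a capture-recapture estimate against
        # external drug-related mortality data; all values are made up.
        observed_deaths = 28      # annual drug-related deaths in the area (hypothetical)
        mortality_risk = 0.01     # annual drug-related mortality risk per PWID (hypothetical)
        implied_prevalence = observed_deaths / mortality_risk
        print(f"Prevalence implied by the mortality data: about {implied_prevalence:.0f} PWID")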

    Recapture or precapture? Fallibility of standard capture-recapture methods in the presence of referrals between sources.

    Capture-recapture methods, largely developed in ecology, are now commonly used in epidemiology to adjust for incomplete registries and to estimate the size of difficult-to-reach populations such as problem drug users. Overlapping lists of individuals in the target population, taken from administrative data sources, are considered analogous to overlapping "captures" of animals. Log-linear models, incorporating interaction terms to account for dependencies between sources, are used to predict the number of unobserved individuals and, hence, the total population size. A standard assumption to ensure parameter identifiability is that the highest-order interaction term is 0. We demonstrate that, when individuals are referred directly between sources, this assumption will often be violated, and the standard modeling approach may lead to seriously biased estimates. We refer to such individuals as having been "precaptured," rather than truly recaptured. Although sometimes an alternative identifiable log-linear model could accommodate the referral structure, this will not always be the case. Further, multiple plausible models may fit the data equally well but provide widely varying estimates of the population size. We demonstrate an alternative modeling approach, based on an interpretable parameterization and driven by careful consideration of the relationships between the sources, and we make recommendations for capture-recapture in practice.
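
    For readers unfamiliar with the standard approach being critiqued, the sketch below fits a generic three-source log-linear (Poisson) model with all two-way interactions and the three-way interaction set to zero for identifiability, then predicts the unobserved cell. The counts are hypothetical and the code is a textbook-style illustration, not the authors' analysis.

        # Standard three-source log-linear capture-recapture model (generic sketch,
        # hypothetical counts): Poisson regression on the seven observable capture
        # histories, with the three-way interaction assumed to be zero.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "A": [1, 1, 1, 1, 0, 0, 0],            # present on list A?
            "B": [1, 1, 0, 0, 1, 1, 0],            # present on list B?
            "C": [1, 0, 1, 0, 1, 0, 1],            # present on list C?
            "n": [12, 45, 30, 200, 25, 110, 95],   # hypothetical cell counts
        })

        model = smf.glm("n ~ A + B + C + A:B + A:C + B:C",
                        data=df, family=sm.families.Poisson()).fit()

        # Extrapolate to the unobservable (0, 0, 0) cell and hence the total population size.
        unseen = pd.DataFrame({"A": [0], "B": [0], "C": [0]})
        n000 = float(np.asarray(model.predict(unseen))[0])
        print(f"Estimated unobserved: {n000:.0f}; total population: {df['n'].sum() + n000:.0f}")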

    Between-trial heterogeneity in meta-analyses may be partially explained by reported design characteristics.

    OBJECTIVE: We investigated the associations between risk of bias judgments from Cochrane reviews for sequence generation, allocation concealment and blinding, and between-trial heterogeneity. STUDY DESIGN AND SETTING: Bayesian hierarchical models were fitted to binary data from 117 meta-analyses, to estimate the ratio λ by which heterogeneity changes for trials at high/unclear risk of bias compared with trials at low risk of bias. We estimated the proportion of between-trial heterogeneity in each meta-analysis that could be explained by the bias associated with specific design characteristics. RESULTS: Univariable analyses showed that heterogeneity variances were, on average, increased among trials at high/unclear risk of bias for sequence generation (λ̂ = 1.14, 95% interval: 0.57-2.30) and blinding (λ̂ = 1.74, 95% interval: 0.85-3.47). Trials at high/unclear risk of bias for allocation concealment were on average less heterogeneous (λ̂ = 0.75, 95% interval: 0.35-1.61). Multivariable analyses showed that a median of 37% (95% interval: 0-71%) of heterogeneity variance could be explained by trials at high/unclear risk of bias for sequence generation, allocation concealment, and/or blinding. All 95% intervals for changes in heterogeneity were wide and included the null of no difference. CONCLUSION: Our interpretation of the results is limited by imprecise estimates. There is some indication that between-trial heterogeneity could be partially explained by reported design characteristics, and hence adjustment for bias could potentially improve accuracy of meta-analysis results.
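
    Schematically, and in our own notation rather than necessarily the authors' exact parameterisation, the kind of bias-adjustment model described can be written as

        \[
          y_{ij} \sim \mathrm{N}\left(\theta_{ij},\, s_{ij}^{2}\right), \qquad
          \theta_{ij} \sim \mathrm{N}\left(\mu_{j} + \beta\, x_{ij},\; \tau_{j}^{2}\,\lambda^{x_{ij}}\right),
        \]

    where y_ij is the estimate and s_ij the standard error from trial i of meta-analysis j, x_ij indicates high/unclear risk of bias (0 for low risk), mu_j is the meta-analysis mean, beta an average bias shift, tau_j^2 the between-trial heterogeneity among low-risk trials, and lambda the ratio by which that heterogeneity variance is multiplied for trials at high/unclear risk of bias.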

    Determining the presence of host specific toxin genes, ToxA and ToxB, in New Zealand Pyrenophora tritici-repentis isolates, and susceptibility of wheat cultivars

    Tan spot, caused by Pyrenophora tritici-repentis (Ptr), is an important disease of wheat worldwide, and an emerging issue in New Zealand. The pathogen produces host-specific toxins which interact with the wheat host sensitivity loci. Identification of the prevalence of the toxin-encoding genes in the local population, and of the susceptibility of commonly grown wheat cultivars to Ptr, will aid selection of wheat cultivars to reduce disease risk. Twelve single-spore isolates collected from wheat-growing areas of the South Island of New Zealand, representing the P. tritici-repentis population, were characterised for the Ptr ToxA and ToxB genes, ToxA and ToxB, respectively, using two gene-specific primers. The susceptibility of 10 wheat cultivars to P. tritici-repentis was determined in a glasshouse experiment by inoculating young plants with a mixed-isolate spore inoculum. All 12 New Zealand P. tritici-repentis isolates were positive for the ToxA gene but none were positive for the ToxB gene. Tan spot lesions developed on all 10 inoculated wheat cultivars, with cultivars 'Empress' and 'Duchess' being the least susceptible, and 'Discovery', 'Reliance' and 'Saracen' the most susceptible to infection by the mixed-isolate spore inoculum used. The results indicated that the cultivars 'Empress' and 'Duchess' may possess a level of tolerance to P. tritici-repentis and would, therefore, be recommended for cultivation in regions with high tan spot incidence.

    Long-term cost-effectiveness of interventions for obesity: A Mendelian randomisation study

    Background: The prevalence of obesity has increased in the United Kingdom, and reliably measuring the impact on quality of life and the total healthcare cost from obesity is key to informing the cost-effectiveness of interventions that target obesity, and determining healthcare funding. Current methods for estimating cost-effectiveness of interventions for obesity may be subject to confounding and reverse causation. The aim of this study is to apply a new approach using mendelian randomisation for estimating the cost-effectiveness of interventions that target body mass index (BMI), which may be less affected by confounding and reverse causation than previous approaches. Methods and findings: We estimated health-related quality-adjusted life years (QALYs) and both primary and secondary healthcare costs for 310,913 men and women of white British ancestry aged between 39 and 72 years in UK Biobank between recruitment (2006 to 2010) and 31 March 2017. We then estimated the causal effect of differences in BMI on QALYs and total healthcare costs using mendelian randomisation. For this, we used instrumental variable regression with a polygenic risk score (PRS) for BMI, derived using a genome-wide association study (GWAS) of BMI, with age, sex, recruitment centre, and 40 genetic principal components as covariables to estimate the effect of a unit increase in BMI on QALYs and total healthcare costs. Finally, we used simulations to estimate the likely effect on BMI of policy relevant interventions for BMI, then used the mendelian randomisation estimates to estimate the cost-effectiveness of these interventions. A unit increase in BMI decreased QALYs by 0.65% of a QALY (95% confidence interval [CI]: 0.49% to 0.81%) per year and increased annual total healthcare costs by £42.23 (95% CI: £32.95 to £51.51) per person. When considering only health conditions usually considered in previous cost-effectiveness modelling studies (cancer, cardiovascular disease, cerebrovascular disease, and type 2 diabetes), we estimated that a unit increase in BMI decreased QALYs by only 0.16% of a QALY (95% CI: 0.10% to 0.22%) per year. We estimated that both laparoscopic bariatric surgery among individuals with BMI greater than 35 kg/m², and restricting volume promotions for high fat, salt, and sugar products, would increase QALYs and decrease total healthcare costs, with net monetary benefits (at £20,000 per QALY) of £13,936 (95% CI: £8,112 to £20,658) per person over 20 years, and £546 million (95% CI: £435 million to £671 million) in total per year, respectively. The main limitations of this approach are that mendelian randomisation relies on assumptions that cannot be proven, including the absence of directional pleiotropy, and that genotypes are independent of confounders. Conclusions: Mendelian randomisation can be used to estimate the impact of interventions on quality of life and healthcare costs. We observed that the effect of increasing BMI on health-related quality of life is much larger when accounting for 240 chronic health conditions, compared with only a limited selection. This means that previous cost-effectiveness studies have likely underestimated the effect of BMI on quality of life and, therefore, the potential cost-effectiveness of interventions to reduce BMI.
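
    As a rough sketch of the instrumental variable idea described (simulated data; the paper's analysis additionally adjusts for age, sex, recruitment centre and 40 genetic principal components, and was run on UK Biobank), a two-stage least squares estimate using a polygenic risk score as the instrument for BMI might look like this:

        # Two-stage least squares sketch of Mendelian randomisation with a polygenic
        # risk score (PRS) as the instrument for BMI. Data are simulated; coefficients
        # and covariate adjustment do not reproduce the paper's analysis.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 10_000
        prs = rng.normal(size=n)                      # instrument: PRS for BMI
        u = rng.normal(size=n)                        # unmeasured confounding
        bmi = 27.0 + 0.8 * prs + u + rng.normal(size=n)
        qalys = 0.85 - 0.0065 * bmi - 0.01 * u + 0.05 * rng.normal(size=n)

        # Stage 1: regress the exposure (BMI) on the instrument.
        bmi_hat = sm.OLS(bmi, sm.add_constant(prs)).fit().fittedvalues
        # Stage 2: regress the outcome (annual QALYs) on the predicted exposure.
        # (Standard errors from this naive second stage are not valid 2SLS SEs.)
        iv_fit = sm.OLS(qalys, sm.add_constant(bmi_hat)).fit()
        print(f"IV estimate of the effect of +1 BMI unit on annual QALYs: {iv_fit.params[1]:.4f}")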