185 research outputs found

    Accounting for centre-effects in multicentre trials with a binary outcome – when, why, and how?

    Open Access research article. Brennan C Kahan (correspondence: [email protected]). Author affiliations: Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner Street, London E1 2AB, UK; MRC Clinical Trials Unit at UCL, 125 Kingsway, London WC2B 6NH, UK. BMC Medical Research Methodology 2014, 14:20, doi:10.1186/1471-2288-14-20. The electronic version of this article is the complete one and can be found online at http://www.biomedcentral.com/1471-2288/14/20. Received: 5 July 2013. Accepted: 3 February 2014. Published: 10 February 2014. © 2014 Kahan; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
    Background: It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome. Methods: We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study. Results: The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across the different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared to other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEEs with non-robust standard errors (SEs); using a robust 'sandwich' estimator led to inflated type I error rates across most scenarios. Conclusions: With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects and GEE with non-robust SEs should be used with a moderate or large number of centres.
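
    A minimal sketch of the Mantel-Haenszel approach described above: the pooled odds ratio sums a·d/n in the numerator and b·c/n in the denominator over each centre's 2x2 table. The centre counts below are hypothetical, not data from MIST2.

```python
# Hypothetical per-centre counts: (events_trt, n_trt, events_ctl, n_ctl)
centres = [
    (12, 40, 8, 38),   # centre 1
    (20, 55, 14, 52),  # centre 2
    (5, 25, 2, 24),    # centre 3
]

def mantel_haenszel_or(tables):
    """Pooled odds ratio: sum(a*d/n) / sum(b*c/n) over the 2x2 centre tables."""
    num = den = 0.0
    for e_t, n_t, e_c, n_c in tables:
        a, b = e_t, n_t - e_t          # treated: events, non-events
        c, d = e_c, n_c - e_c          # control: events, non-events
        n = n_t + n_c
        num += a * d / n
        den += b * c / n
    return num / den
```

    A centre in which neither arm has any events (or any non-events) adds zero to both sums, so its patients effectively drop out of the estimate; this is the mechanism behind the over-stratification problem the abstract describes.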

    Redressing the balance: Covariate adjustment in randomised trials

    Risk of selection bias in randomised trials

    Background: Selection bias occurs when recruiters selectively enrol patients into the trial based on what the next treatment allocation is likely to be. This can occur even when appropriate allocation concealment is used, if recruiters can guess the next treatment assignment with some degree of accuracy. This typically occurs in unblinded trials when restricted randomisation is implemented to force the number of patients in each arm or within each centre to be the same. Several methods to reduce the risk of selection bias have been suggested; however, it is unclear how often these techniques are used in practice. Methods: We performed a review of published trials which were not blinded to assess whether they utilised methods for reducing the risk of selection bias. We assessed the following techniques: (a) blinding of recruiters; (b) use of simple randomisation; (c) avoidance of stratification by site when restricted randomisation is used; (d) avoidance of permuted blocks if stratification by site is used; and (e) incorporation of prognostic covariates into the randomisation procedure when restricted randomisation is used. We included parallel-group, individually randomised phase III trials published in four general medical journals (BMJ, Journal of the American Medical Association, The Lancet, and New England Journal of Medicine) in 2010. Results: We identified 152 eligible trials. Most trials (98%) provided no information on whether recruiters were blind to previous treatment allocations. Only 3% of trials used simple randomisation; 63% used some form of restricted randomisation, and 35% did not state the method of randomisation. Overall, 44% of trials were stratified by site of recruitment; 27% were not, and 29% did not report this information. Most trials that did stratify by site of recruitment used permuted blocks (58%), and only 15% reported using random block sizes. Many trials that used restricted randomisation also included prognostic covariates in the randomisation procedure (56%). Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials which did provide details of the randomisation procedure were at risk of selection bias due to poorly chosen randomisation methods. Techniques to reduce the risk of selection bias should be more widely implemented.
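
    The predictability problem described above can be made concrete with a short simulation, assuming a recruiter in an unblinded trial who always guesses that the next allocation will be the arm seen less often so far in the current permuted block (the "convergence" strategy). All parameters below are illustrative.

```python
import random

def guess_rate_permuted_blocks(block_size=4, n_blocks=5000, seed=1):
    """Simulate a recruiter who guesses the arm with fewer allocations
    so far in the current block (ties broken by a random guess)."""
    rng = random.Random(seed)
    correct = total = 0
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        seen = {"A": 0, "B": 0}
        for alloc in block:
            if seen["A"] == seen["B"]:
                guess = rng.choice("AB")   # no information: coin flip
            else:
                guess = "A" if seen["A"] < seen["B"] else "B"
            correct += guess == alloc
            total += 1
            seen[alloc] += 1
    return correct / total
```

    With blocks of size 4, the expected correct-guess rate under this strategy is about 71%, compared with 50% under simple randomisation, which is why the review assesses avoidance of permuted blocks and use of random block sizes.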

    Adjusting for multiple prognostic factors in the analysis of randomised trials

    Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method. Methods: We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results: Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods led to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size.
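
    One simple fixed-effects-style stratified analysis, broadly in the spirit of the methods compared above, is inverse-variance pooling of stratum-specific estimates, with one stratum per combination of the prognostic factors. The stratum labels, log odds ratios, and standard errors below are hypothetical, chosen only to illustrate the mechanics.

```python
import math

# Two binary prognostic factors give 2 x 2 = 4 strata.
# Values are hypothetical: stratum -> (log odds ratio, standard error)
strata = {
    ("male", "<65"):    (0.40, 0.30),
    ("male", ">=65"):   (0.55, 0.35),
    ("female", "<65"):  (0.35, 0.28),
    ("female", ">=65"): (0.60, 0.40),
}

def pooled_log_or(strata):
    """Fixed-effect (inverse-variance) pooling of stratum-specific log ORs."""
    w_sum = wt_sum = 0.0
    for log_or, se in strata.values():
        w = 1.0 / se ** 2          # weight = inverse of the variance
        w_sum += w
        wt_sum += w * log_or
    return wt_sum / w_sum, math.sqrt(1.0 / w_sum)
```

    Note the abstract's finding: with small samples and binary or time-to-event outcomes, treating strata as random effects performed better than fixed-effects approaches like this one.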

    A re-randomisation design for clinical trials

    Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies. Methods: For many conditions, patients will require treatment on several occasions: for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used. Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations. Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations, it can increase the power compared to a parallel group design with an equivalent number of observations.
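
    A sketch of an analysis that accounts for correlation between observations from the same patient, as the abstract requires: ordinary least squares for the treatment effect with a cluster-robust (sandwich) variance, clustering on patient. The data-generating numbers are hypothetical, and this is only one of several valid ways to handle the within-patient correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical re-randomisation trial: 200 patients, 1-3 episodes each.
# Each episode is randomised independently; treatment adds a constant
# effect of 0.5; a patient-level random intercept induces correlation.
rows = []
for patient in range(200):
    u = rng.normal(0, 1)                    # patient effect (shared across episodes)
    for _ in range(rng.integers(1, 4)):     # 1-3 episodes
        t = rng.integers(0, 2)              # fresh 1:1 randomisation each episode
        y = 0.5 * t + u + rng.normal(0, 1)
        rows.append((patient, t, y))
pid, t, y = map(np.array, zip(*rows))

# OLS of outcome on treatment (with intercept)
X = np.column_stack([np.ones_like(t, dtype=float), t.astype(float)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Cluster-robust (sandwich) variance, clustering on patient
XtX_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in np.unique(pid):
    Xg, rg = X[pid == g], resid[pid == g]
    s = Xg.T @ rg                           # cluster score contribution
    meat += np.outer(s, s)
V = XtX_inv @ meat @ XtX_inv
effect, se = beta[1], np.sqrt(V[1, 1])
```

    Because each episode is randomised afresh, most patients contribute observations to both arms, which is part of why the design can outperform a parallel group trial with the same number of observations.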

    A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    BACKGROUND: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. METHODS: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. RESULTS: Methods that kept covariates as continuous typically had higher power than methods that used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. CONCLUSIONS: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
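
    A minimal sketch of an FP1 search as described above: each candidate power from the standard fractional-polynomial set is tried (with log(x) taking the place of power 0), and the transformation with the smallest residual sum of squares is kept. The simulated data, true model, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trial data with a non-linear covariate effect: y depends on log(x)
x = rng.uniform(0.5, 10, 300)
treat = rng.integers(0, 2, 300)
y = 1.0 * treat + 2.0 * np.log(x) + rng.normal(0, 1, 300)

# Standard FP1 candidate powers (0 denotes the log transformation)
powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp1_fit(x, treat, y):
    """Grid-search FP1: fit y ~ intercept + treat + x**p for each power p,
    return the power with the smallest residual sum of squares."""
    best = None
    for p in powers:
        xt = np.log(x) if p == 0 else x ** p
        X = np.column_stack([np.ones_like(x), treat, xt])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(rss[0])
        if best is None or rss < best[1]:
            best = (p, rss, beta)
    return best

p, rss, beta = fp1_fit(x, treat, y)   # beta[1] is the adjusted treatment effect
```

    An FP2 model extends this by searching over pairs of powers; the abstract's conclusion is that FP2 or restricted cubic splines are preferable when a linear association is in doubt.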

    Update on the transfusion in gastrointestinal bleeding (TRIGGER) trial: statistical analysis plan for a cluster-randomised feasibility trial

    BACKGROUND: Previous research has suggested an association between more liberal red blood cell (RBC) transfusion and greater risk of further bleeding and mortality following acute upper gastrointestinal bleeding (AUGIB). METHODS AND DESIGN: The Transfusion in Gastrointestinal Bleeding (TRIGGER) trial is a pragmatic cluster-randomised feasibility trial which aims to evaluate the feasibility of implementing a restrictive vs. liberal RBC transfusion policy for adult patients admitted to hospital with AUGIB in the UK. This trial will help to inform the design and methodology of a phase III trial. The protocol for TRIGGER has been published in Transfusion Medicine Reviews. Recruitment began in September 2012 and was completed in March 2013. This update presents the statistical analysis plan, detailing how the analysis of the TRIGGER trial will be performed. It is hoped that prospective publication of the full statistical analysis plan will increase transparency and give readers a clear overview of how TRIGGER will be analysed. TRIAL REGISTRATION: ISRCTN85757829.

    Evidence of unexplained discrepancies between planned and conducted statistical analyses: a review of randomized trials

    Public availability and adherence to prespecified statistical analysis approaches was low in published randomized trials

    BACKGROUND AND OBJECTIVE: Prespecification of statistical methods in clinical trial protocols and statistical analysis plans can help to deter bias from p-hacking, but is only effective if the prespecified approach is made available. STUDY DESIGN AND SETTING: For 100 randomized trials published in 2018 and indexed in PubMed, we evaluated how often a prespecified statistical analysis approach for the trial's primary outcome was publicly available. For each trial with an available prespecified analysis, we compared this with the trial publication to identify whether there were unexplained discrepancies. RESULTS: Only 12 of 100 trials (12%) had a publicly available prespecified analysis approach for their primary outcome; this document was dated before recruitment began for only two trials. Of the 12 trials with an available prespecified analysis approach, 11 (92%) had one or more unexplained discrepancies. Only 4 of 100 trials (4%) stated that the statistician was blinded until the statistical analysis plan (SAP) was signed off, and only 10 of 100 (10%) stated the statistician was blinded until the database was locked. CONCLUSION: For most published trials, there is insufficient information available to determine whether the results may be subject to p-hacking. Where information was available, there were often unexplained discrepancies between the prespecified and final analysis methods.