
    Balance algorithm for cluster randomized trials

    Abstract
    Background: Within cluster randomized trials, no algorithms exist to generate a full enumeration of a block randomization that balances covariates across treatment arms. Furthermore, multiple blocks are often required for practical reasons to fully randomize a study, and these may not be well balanced within blocks.
    Results: We present a convenient and easy-to-use randomization tool to undertake allocation-concealed block randomization. Our algorithm highlights allocations that minimize imbalance between treatment groups across multiple baseline covariates.
    We demonstrate the algorithm using a cluster randomized trial in primary care (the PRE-EMPT Study) and show that the software incorporates a trade-off between independent random allocations, which are likely to be imbalanced, and predictable deterministic approaches that would minimize imbalance. We extend the methodology from single-block randomization to allocation across multiple blocks, conditioning on previous allocations.
    Conclusion: The algorithm is included as Additional file 1, and we advocate its use for robust randomization within cluster randomized trials.
    Additional file 1 (1471-2288-8-65-S1.zip): Cluster randomization allocation algorithm version 1. Algorithms scripted in R to provide robust cluster randomization.
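    The abstract describes enumerating candidate block allocations and then choosing at random among the best-balanced ones. A minimal R sketch of that idea follows; it is not the authors' published script (Additional file 1), and the function name, imbalance score, and example data are illustrative assumptions.

    # Sketch only: enumerate every split of a block's clusters into two equal arms,
    # score each split by covariate imbalance, then pick one allocation at random
    # from the best-balanced few (the trade-off between balance and predictability).
    balance_block <- function(covariates, n_keep = 10) {
      n <- nrow(covariates)
      x <- scale(as.matrix(covariates))          # standardise covariates so they are comparable
      arms <- combn(n, n %/% 2)                  # full enumeration of one arm's membership
      imbalance <- apply(arms, 2, function(idx) {
        # sum of absolute differences in covariate means between the two arms
        sum(abs(colMeans(x[idx, , drop = FALSE]) - colMeans(x[-idx, , drop = FALSE])))
      })
      best <- order(imbalance)[seq_len(min(n_keep, ncol(arms)))]
      chosen <- arms[, best[sample.int(length(best), 1)]]   # random pick keeps the allocation unpredictable
      ifelse(seq_len(n) %in% chosen, "intervention", "control")
    }

    # Hypothetical block of 8 clusters with two baseline covariates
    set.seed(1)
    clusters <- data.frame(size = rpois(8, 50), baseline_rate = runif(8))
    balance_block(clusters)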

    Reporting on covariate adjustment in randomised controlled trials before and after revision of the 2001 CONSORT statement: a literature review

    Abstract
    Objectives: To evaluate the use and reporting of adjusted analyses in randomised controlled trials (RCTs) and to compare the quality of reporting before and after the revision of the CONSORT Statement in 2001.
    Design: Comparison of two cross-sectional samples of published articles.
    Data sources: Journal articles indexed on PubMed in December 2000 and December 2006.
    Study selection: Parallel-group RCTs carried out in humans, with a full publication, and published in English.
    Main outcome measures: Proportion of articles that reported an adjusted analysis; use of adjusted analysis; the reason for adjustment; the method of adjustment; and the reporting of adjusted analysis results in the main text and abstract.
    Results: In both cohorts, 25% of studies reported adjusted analyses (84/355 in 2000 vs 113/422 in 2006). Compared with articles reporting only unadjusted analyses, articles that reported adjusted analyses were more likely to specify primary outcomes, involve multiple centers, perform stratified randomization, be published in general medical journals, and recruit larger sample sizes. In both years only a minority of articles explained why and how covariates were selected for adjustment (20% to 30%). Almost all articles specified the statistical methods used for adjustment (99% in 2000 vs 100% in 2006), but only 5% and 10%, respectively, reported both adjusted and unadjusted results as recommended by the CONSORT guidelines.
    Conclusion: There was no evidence of change in the reporting of adjusted analyses five years after the revision of the CONSORT Statement, and only a few articles adhered fully to the CONSORT recommendations.
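    As a reminder of what the review is counting, a hedged R sketch of an unadjusted versus adjusted analysis follows; the data, variable names, and effect sizes are invented for illustration and are not drawn from the reviewed trials.

    # Illustrative only: a parallel-group RCT with a continuous outcome and one
    # pre-specified baseline covariate.
    set.seed(42)
    trial <- data.frame(
      treatment = rep(c(0, 1), each = 100),      # 0 = control, 1 = intervention
      baseline  = rnorm(200, mean = 50, sd = 10)
    )
    trial$outcome <- 5 + 0.8 * trial$baseline + 2 * trial$treatment + rnorm(200, sd = 5)

    unadjusted <- lm(outcome ~ treatment, data = trial)             # treatment effect alone
    adjusted   <- lm(outcome ~ treatment + baseline, data = trial)  # adjusted for the baseline covariate

    # CONSORT recommends reporting both results and stating which analysis was pre-specified
    summary(unadjusted)$coefficients["treatment", ]
    summary(adjusted)$coefficients["treatment", ]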

    Multiplicity: discussion points from the PSI multiplicity expert group

    In May 2012, the Committee for Medicinal Products for Human Use issued a concept paper on the need to review the points-to-consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry (PSI) held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints, given that these are increasingly important in differentiating products in the marketplace. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. One view was that unadjusted intervals should be employed, as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible.
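    The kind of adjustment under discussion can be made concrete with a small R sketch; the endpoints, p-values, and the choice of Bonferroni and Holm procedures are assumptions for illustration, not positions taken by the expert group.

    # Illustrative only: family-wise adjustment of p-values for three secondary endpoints.
    p_raw <- c(endpoint_a = 0.012, endpoint_b = 0.030, endpoint_c = 0.004)

    p.adjust(p_raw, method = "bonferroni")   # multiply each p-value by the number of tests
    p.adjust(p_raw, method = "holm")         # step-down procedure, also controls the family-wise error rate

    # The simultaneous-CI question from the meeting: a Bonferroni-style family of intervals
    # uses level 1 - 0.05 / k per endpoint, so each interval is wider than the unadjusted one
    # that some argued should be reported when the aim is estimation rather than testing.
    conf_level_adjusted <- 1 - 0.05 / length(p_raw)
    conf_level_adjusted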