
    A re-randomisation design for clinical trials

    Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies.

    Methods: For many conditions patients will require treatment on several occasions, for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used.

    Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations.

    Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations, it can increase the power compared to a parallel group design with an equivalent number of observations.
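    The unbiasedness claim above can be illustrated with a minimal simulation sketch (not from the paper itself; the effect size, patient-effect SD, noise SD, and episode counts are illustrative assumptions). Each simulated patient is randomised independently at every episode (condition b), with a constant treatment effect (condition c):

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.5   # assumed constant treatment effect (condition c)
N_PATIENTS = 500
MAX_EPISODES = 3    # each patient may be randomised up to 3 times

treat, ctrl = [], []
for _ in range(N_PATIENTS):
    patient_effect = random.gauss(0, 1.0)   # shared across a patient's episodes
    for _ in range(random.randint(1, MAX_EPISODES)):
        arm = random.choice([0, 1])         # independent re-randomisation (condition b)
        outcome = patient_effect + TRUE_EFFECT * arm + random.gauss(0, 1.0)
        (treat if arm else ctrl).append(outcome)

estimate = statistics.mean(treat) - statistics.mean(ctrl)
print(round(estimate, 2))  # close to the true effect of 0.5
```

    The simple difference in means is approximately unbiased even though observations from the same patient are correlated; a real analysis would additionally use cluster-robust standard errors or a mixed model, as the abstract notes, to obtain correct type I error rates.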

    A study of target effect sizes in randomised controlled trials published in the Health Technology Assessment journal

    BACKGROUND: When designing a randomised controlled trial (RCT), an important consideration is the sample size required. This is calculated from several components, one of which is the target difference. This study aims to review the currently reported methods of elicitation of the target difference and to quantify the target differences used in Health Technology Assessment (HTA)-funded trials.

    METHODS: Trials were identified from the National Institute of Health Research Health Technology Assessment journal. A total of 177 RCTs published between 2006 and 2016 were assessed for eligibility. Eligibility was established by the design of the trial and the quality of data available: parallel-group, superiority RCTs with a continuous primary endpoint. Data were extracted and the standardised anticipated and observed effect size estimates were calculated. Trials were excluded if they did not provide enough detail in the sample size calculation and results, or were not of parallel-group, superiority design.

    RESULTS: A total of 107 RCTs were included in the study from 102 reports. The most commonly reported method of effect size derivation was a review of evidence and use of previous research (52.3%); this was common across all clinical areas. The median standardised target effect size was 0.30 (interquartile range 0.20-0.38), and the median standardised observed effect size was 0.11 (IQR 0.05-0.29). The maximum anticipated and observed effect sizes were 0.76 and 1.18, respectively. Only two trials had anticipated target values above 0.60.

    CONCLUSION: The most commonly reported method of elicitation of the target effect size is previously published research, and the average target effect size was 0.3. A clear distinction between the target difference and the minimum clinically important difference is recommended when designing a trial. Transparent explanation of target difference elicitation is advised, with a combination of methods, including a review of evidence and opinion-seeking, recommended for quantifying the effect size.
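    The standardised effect sizes reported above feed directly into the usual two-sample sample-size formula. As a sketch using only the Python standard library (the significance level and power are illustrative assumptions, not values taken from the review):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means,
    given a standardised target difference d (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# Median standardised target effect from the review (0.30)
# versus the median standardised observed effect (0.11):
print(n_per_group(0.30))  # 175 per group
print(n_per_group(0.11))  # 1298 per group
```

    A target of 0.30 needs roughly 175 patients per group at 80% power, whereas detecting the typical observed effect of 0.11 would need roughly 1298 per group, which illustrates why the gap between target and observed effect sizes matters for trial planning.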

    Protocol for the development of a CONSORT extension for RCTs using cohorts and routinely collected health data.

    Background: Randomized controlled trials (RCTs) are often complex and expensive to perform. Less than one third achieve planned recruitment targets, follow-up can be labor-intensive, and many have limited real-world generalizability. Designs for RCTs conducted using cohorts and routinely collected health data, including registries, electronic health records, and administrative databases, have been proposed to address these challenges and are being rapidly adopted. These designs, however, are relatively recent innovations, and published RCT reports often do not describe important aspects of their methodology in a standardized way. Our objective is to extend the Consolidated Standards of Reporting Trials (CONSORT) statement with a consensus-driven reporting guideline for RCTs using cohorts and routinely collected health data.

    Methods: The development of this CONSORT extension will consist of five phases. Phase 1 (completed) consisted of the project launch, including fundraising, the establishment of a research team, and development of a conceptual framework. In phase 2, a systematic review will be performed to identify publications (1) that describe methods or reporting considerations for RCTs conducted using cohorts and routinely collected health data or (2) that are protocols or report results from such RCTs. An initial "long list" of possible modifications to CONSORT checklist items and possible new items for the reporting guideline will be generated based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statements. Additional possible modifications and new items will be identified based on the results of the systematic review. Phase 3 will consist of a three-round Delphi exercise with methods and content experts to evaluate the "long list" and generate a "short list" of key items. In phase 4, these items will serve as the basis for an in-person consensus meeting to finalize a core set of items to be included in the reporting guideline and checklist. Phase 5 will involve drafting the checklist and elaboration-explanation documents, and dissemination and implementation of the guideline.

    Discussion: Development of this CONSORT extension will contribute to more transparent reporting of RCTs conducted using cohorts and routinely collected health data.

    Reasons for non-recruitment of eligible patients to a randomised controlled trial of secondary prevention after intracerebral haemorrhage: observational study.

    Recruitment to randomised prevention trials is challenging, not least for intracerebral haemorrhage (ICH) associated with antithrombotic drug use. We investigated reasons for not recruiting apparently eligible patients at hospital sites that keep screening logs in the ongoing REstart or STop Antithrombotics Randomised Trial (RESTART), which seeks to determine whether to start antiplatelet drugs after ICH. EDGE project number 14013. British Heart Foundation Special Project (SP/12/2/29422) & Project (PG/14/50/30891) funding.

    Dynamic consent: a patient interface for twenty-first century research networks

    Biomedical research is being transformed through the application of information technologies that allow ever greater amounts of data to be shared on an unprecedented scale. However, the methods for involving participants have not kept pace with changes in research capability. In an era when information is shared digitally at the global level, mechanisms of informed consent remain static, paper-based and organised around national boundaries and legal frameworks. Dynamic consent (DC) is both a specific project and a wider concept that offers a new approach to consent; one designed to meet the needs of the twenty-first century research landscape. At the heart of DC is a personalised, digital communication interface that connects researchers and participants, placing participants at the heart of decision making. The interface facilitates two-way communication to stimulate a more engaged, informed and scientifically literate participant population where individuals can tailor and manage their own consent preferences. The technical architecture of DC includes components that can securely encrypt sensitive data and allow participant consent preferences to travel with their data and samples when they are shared with third parties. In addition to improving transparency and public trust, this system benefits researchers by streamlining recruitment and enabling more efficient participant recontact. DC has mainly been developed in biobanking contexts, but it also has potential application in other domains for a variety of purposes.
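    As an illustration only (the field names and structure below are hypothetical assumptions, not the DC project's actual schema), a machine-readable consent-preference record that travels with shared data might look like:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ConsentPreference:
    """Hypothetical consent record attached to a shared sample or dataset."""
    participant_id: str
    allow_commercial_use: bool
    allow_recontact: bool
    permitted_purposes: list = field(default_factory=list)
    version: int = 1  # incremented each time the participant updates preferences

pref = ConsentPreference(
    participant_id="P-0001",
    allow_commercial_use=False,
    allow_recontact=True,
    permitted_purposes=["biobank-research"],
)
# Serialised so the record can accompany data shared with third parties:
print(json.dumps(asdict(pref)))
```

    Versioning the record is one way a third party could check that it holds the participant's current preferences before reuse; in a real deployment the record would also be encrypted and signed, as the abstract's description of the DC architecture implies.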

    Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study

    Background: External pilot or feasibility studies can be used to estimate key unknown parameters to inform the design of the definitive randomised controlled trial (RCT). However, there is little consensus on how large pilot studies need to be, and some suggest inflating estimates to adjust for the lack of precision when planning the definitive RCT.

    Methods: We use a simulation approach to illustrate the sampling distribution of the standard deviation for continuous outcomes and the event rate for binary outcomes. We present the impact of increasing the pilot sample size on the precision and bias of these estimates, and on predicted power under three realistic scenarios. We also illustrate the consequences of using a confidence interval argument to inflate estimates so that the required power is achieved with a pre-specified level of confidence. We limit our attention to external pilot and feasibility studies prior to a two-parallel-balanced-group superiority RCT.

    Results: For normally distributed outcomes, the relative gain in precision of the pooled standard deviation (SDp) is less than 10% (for each five subjects added per group) once the total sample size is 70. For true proportions between 0.1 and 0.5, the gain in precision for each five subjects added to the pilot sample is less than 5% once the sample size is 60. Adjusting the required sample sizes for the imprecision in the pilot study estimates can result in excessively large definitive RCTs, and also requires a pilot sample size of 60 to 90 for the true effect sizes considered here.

    Conclusions: We recommend that an external pilot study has at least 70 measured subjects (35 per group) when estimating the SDp for a continuous outcome. If the event rate in an intervention group needs to be estimated by the pilot, then a total of 60 to 100 subjects is required; hence if the primary outcome is binary, a total of at least 120 subjects (60 in each group) may be required in the pilot trial. It is much more efficient to use a larger pilot study than to guard against the lack of precision by using inflated estimates.
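    The diminishing return from enlarging a pilot can be seen in a minimal simulation sketch of the sampling distribution of the pooled standard deviation (this is our own stdlib-only illustration, not the paper's simulation code; the true outcome SD of 1.0 and the pilot sizes are assumptions):

```python
import random
import statistics

random.seed(1)

def sd_spread(n_total, reps=2000, true_sd=1.0):
    """Empirical spread (SE) of the pooled-SD estimate from a
    two-equal-group pilot with n_total subjects in total."""
    n = n_total // 2
    ests = []
    for _ in range(reps):
        g1 = [random.gauss(0, true_sd) for _ in range(n)]
        g2 = [random.gauss(0, true_sd) for _ in range(n)]
        sp = ((statistics.variance(g1) + statistics.variance(g2)) / 2) ** 0.5
        ests.append(sp)
    return statistics.stdev(ests)

for n_total in (20, 40, 70):
    print(n_total, round(sd_spread(n_total), 3))
```

    The spread shrinks roughly like 1/sqrt(2(n_total - 2)), so each extra block of subjects buys progressively less precision, consistent with the plateau around a total of 70 reported above.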