
    Using re-randomisation to increase the recruitment rate in clinical trials

    PhD thesis. Re-randomisation trials allow patients to be re-enrolled and re-randomised for each new treatment episode they experience. For example, in a trial evaluating treatments for acute sickle cell pain crises, patients could be re-randomised each time they have a new pain crisis. However, uptake of this design has been slow, likely because of uncertainty around its validity. The purpose of this thesis is to evaluate the methodological properties of the re-randomisation design. Chapter 2 defines a set of treatment estimands that can be used for re-randomisation trials, and Chapters 3 and 4 evaluate the use of independence estimators and mixed-effects models for these estimands. I find that independence estimators are generally unbiased, though they can be biased for certain estimands in specific situations. Mixed-effects models are generally biased, except under very strong assumptions. In Chapter 5 I compare re-randomisation with cluster, crossover, and parallel group designs. I find that re-randomisation compares favourably with the other designs, though depending on the specific research question (i.e. the estimand of interest), other designs may be more appropriate in certain settings. In Chapter 6 I evaluate a set of trials of granulocyte colony-stimulating factors for patients with febrile neutropenia which included both parallel group and re-randomisation designs. I find that using re-randomisation led to an increase in recruitment and provided results similar to those of the parallel group trials. In conclusion, the re-randomisation design is a valid design option, and should be used more often.
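    A hedged illustration of the kind of analysis discussed above (a minimal sketch using simulated data and hypothetical variable names, not the thesis's actual code): simulate a re-randomisation trial and estimate a per-episode treatment effect with an independence estimator, i.e. a GEE with an independence working correlation and robust standard errors clustered by patient.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulate patients who are re-enrolled and re-randomised for each new
    # treatment episode (e.g. each new pain crisis).
    rng = np.random.default_rng(1)
    rows = []
    for pid in range(500):
        n_episodes = rng.integers(1, 4)            # 1-3 episodes per patient
        frailty = rng.normal(0, 1)                 # patient-level heterogeneity
        for ep in range(n_episodes):
            treat = rng.integers(0, 2)             # fresh randomisation each episode
            y = 10 - 2 * treat + frailty + rng.normal(0, 2)   # true effect = -2
            rows.append({"patient": pid, "episode": ep, "treat": treat, "y": y})
    df = pd.DataFrame(rows)

    # Independence estimating equations: the point estimate ignores the
    # within-patient correlation; the sandwich variance accounts for it.
    fit = smf.gee("y ~ treat", groups="patient", data=df,
                  cov_struct=sm.cov_struct.Independence()).fit()
    print(fit.summary())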

    Eliminating ambiguous treatment effects using estimands

    Most reported treatment effects in medical research studies are ambiguously defined, which can lead to misinterpretation of study results. This is because most studies do not describe what the treatment effect represents, and instead require readers to deduce this from the reported statistical methods. However, this approach is fraught, as many methods provide counterintuitive results. For example, some methods include data from all patients, yet the resulting treatment effect applies only to a subset of patients, whereas other methods exclude certain patients yet produce results that apply to everyone. Additionally, some analyses provide estimates pertaining to hypothetical settings in which patients never die or discontinue treatment. Herein we introduce estimands as a solution to this problem. An estimand is a clear description of what the treatment effect represents, sparing readers the need to infer it from the study methods and potentially getting it wrong. We provide examples of how estimands can remove ambiguity from reported treatment effects and describe their current use in practice. The crux of our argument is that readers should not have to infer what investigators are estimating; they should be told explicitly.

    Estimands in cluster-randomized trials: choosing analyses that answer the right question

    Background: Cluster-randomized trials (CRTs) involve randomizing groups of individuals (e.g. hospitals, schools or villages) to different interventions. Various approaches exist for analysing CRTs but there has been little discussion around the treatment effects (estimands) targeted by each. Methods: We describe the different estimands that can be addressed through CRTs and demonstrate how choices between different analytic approaches can impact the interpretation of results by fundamentally changing the question being asked, or, equivalently, the target estimand. Results: CRTs can address either the participant-average treatment effect (the average treatment effect across participants) or the cluster-average treatment effect (the average treatment effect across clusters). These two estimands can differ when participant outcomes or the treatment effect depends on the cluster size (referred to as ‘informative cluster size’), which can occur for reasons such as differences in staffing levels or types of participants between small and large clusters. Furthermore, common estimators, such as mixed-effects models or generalized estimating equations with an exchangeable working correlation structure, can produce biased estimates for both the participant-average and cluster-average treatment effects when cluster size is informative. We describe alternative estimators (independence estimating equations and cluster-level analyses) that are unbiased for CRTs even when informative cluster size is present. Conclusion: We conclude that careful specification of the estimand at the outset can ensure that the study question being addressed is clear and relevant, and, in turn, that the selected estimator provides an unbiased estimate of the desired quantity.
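    To make the distinction concrete, here is a minimal sketch (assuming a hypothetical data frame with columns 'cluster', 'treat' and 'y'; not code from the paper) of two estimators that remain unbiased under informative cluster size: independence estimating equations for the participant-average effect and unweighted cluster-level summaries for the cluster-average effect.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def participant_average(df):
        # Independence estimating equations: every participant carries equal
        # weight, with robust standard errors clustered by cluster.
        return smf.gee("y ~ treat", groups="cluster", data=df,
                       cov_struct=sm.cov_struct.Independence()).fit()

    def cluster_average(df):
        # Unweighted analysis of cluster-level summaries: every cluster carries
        # equal weight, targeting the cluster-average treatment effect.
        summaries = df.groupby("cluster", as_index=False).agg(
            y=("y", "mean"), treat=("treat", "first"))
        return smf.ols("y ~ treat", data=summaries).fit()

    # Both remain unbiased for their respective estimands even when cluster
    # size is informative, unlike an exchangeable GEE or a mixed-effects model.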

    A simple principal stratum estimator for failure to initiate treatment

    A common intercurrent event affecting many trials is when some participants do not begin their assigned treatment. For example, in a trial comparing two different methods for fluid delivery during surgery, some participants may have their surgery cancelled. Similarly, in a double-blind drug trial, some participants may not receive any dose of study medication. The commonly used intention-to-treat analysis preserves the randomisation structure, thus protecting against biases from post-randomisation exclusions. However, it estimates a treatment policy effect (i.e. it addresses the question "what is the effect of the intervention, regardless of whether the participant actually begins treatment?"), which may not be the most clinically relevant estimand. A principal stratum approach, estimating the treatment effect in the subpopulation of participants who would initiate treatment regardless of treatment arm, may be more clinically relevant for many trials. We show that a simple principal stratum estimator based on a "modified intention-to-treat" population, in which participants who experience the intercurrent event are excluded, is unbiased for the principal stratum estimand under an assumption that is likely to be plausible in many trials, namely that participants who would initiate the intervention under one treatment condition would also do so under the other. We provide several examples of trials where this assumption is plausible, and several where it is not. We conclude that this simple principal stratum estimator can be a useful strategy for handling failure to initiate treatment.
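    A minimal sketch of the simple estimator described above (column names are assumed for illustration, not taken from any specific trial): exclude participants who did not initiate their assigned treatment, then compare the arms as randomised among the remainder.

    import statsmodels.formula.api as smf

    def principal_stratum_estimate(df):
        """df has columns 'arm' (0/1 as randomised), 'initiated' (0/1) and 'y'.

        Under the assumption that initiation is unaffected by the assigned arm,
        the difference in means among initiators estimates the treatment effect
        in the principal stratum of participants who would initiate either way.
        """
        initiators = df[df["initiated"] == 1]
        return smf.ols("y ~ arm", data=initiators).fit()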

    Using modified intention-to-treat as a principal stratum estimator for failure to initiate treatment

    BACKGROUND: A common intercurrent event affecting many trials is when some participants do not begin their assigned treatment. For example, in a double-blind drug trial, some participants may not receive any dose of study medication. Many trials use a 'modified intention-to-treat' approach, whereby participants who do not initiate treatment are excluded from the analysis. However, it is not clear (a) which estimand such an approach targets or (b) which assumptions are needed for it to be unbiased. METHODS: Using potential outcome notation, we demonstrate that a modified intention-to-treat analysis which excludes participants who do not begin treatment estimates a principal stratum estimand (i.e. the treatment effect in the subpopulation of participants who would begin treatment regardless of which arm they were assigned to). The modified intention-to-treat estimator is unbiased for the principal stratum estimand under the assumption that the intercurrent event is not affected by the assigned treatment arm, that is, participants who initiate treatment in one arm would also do so in the other (i.e. if someone would begin the intervention, they would also have begun the control, and vice versa). RESULTS: We identify two key criteria for determining whether the modified intention-to-treat estimator is likely to be unbiased: first, we must be able to identify the participants in each treatment arm who experience the intercurrent event, and second, the assumption that treatment allocation does not affect whether a participant begins treatment must be reasonable. Most double-blind trials will satisfy these criteria, as the decision to start treatment cannot be influenced by the allocation. We also provide an example of an open-label trial where these criteria are likely to be satisfied, implying that a modified intention-to-treat analysis which excludes participants who do not begin treatment is an unbiased estimator for the principal stratum effect in these settings. We then give two examples where the criteria will not be satisfied (one comparing an active intervention with usual care, where we cannot identify which usual care participants would have initiated the active intervention, and another comparing two active interventions in an unblinded manner, where knowledge of the assigned treatment arm may affect the participant's decision to begin treatment), implying that a modified intention-to-treat estimator will be biased in these settings. CONCLUSION: A modified intention-to-treat analysis which excludes participants who do not begin treatment can be an unbiased estimator for the principal stratum estimand. Our framework can help identify when the assumptions for unbiasedness are likely to hold, and thus whether modified intention-to-treat is appropriate.
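    The condition for unbiasedness can be checked with a toy potential-outcomes simulation (entirely hypothetical, not the paper's example): when initiation status is the same under either allocation (S(0) = S(1)), the modified intention-to-treat difference among initiators recovers the principal stratum effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    would_initiate = rng.random(n) < 0.8       # S(0) = S(1): same under either arm
    u = rng.normal(0, 1, n)                    # prognostic factor
    y0 = u + rng.normal(0, 1, n)               # potential outcome under control
    y1 = y0 + 1.0 * would_initiate             # treatment helps only if initiated

    arm = rng.integers(0, 2, n)                # randomised allocation
    initiated = would_initiate                 # e.g. a double-blind trial
    y = np.where(arm == 1, y1, y0)             # observed outcome

    # Principal stratum estimand: effect among those who would initiate either way.
    true_effect = (y1 - y0)[would_initiate].mean()

    # Modified intention-to-treat estimator: compare arms among observed initiators.
    mitt = y[initiated & (arm == 1)].mean() - y[initiated & (arm == 0)].mean()
    print(true_effect, mitt)                   # both close to 1.0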

    Using re-randomisation designs to increase the efficiency and applicability of retention studies within trials: a case study

    BACKGROUND: Poor retention in randomised trials can have serious consequences for their validity. Studies within trials (SWATs) are used to identify the most effective interventions to increase retention. Many retention interventions could be applied at any follow-up time point, but SWATs commonly assess interventions at a single time point, which can reduce efficiency. METHODS: The re-randomisation design allows participants to be re-enrolled and re-randomised whenever a new retention opportunity occurs (i.e. a new follow-up time point at which the intervention could be applied). The main advantages are as follows: (a) it allows the estimation of an average effect across time points, thus increasing generalisability; (b) it can be more efficient than a parallel arm trial due to the increased sample size; and (c) it allows subgroup analyses to estimate effectiveness at different time points. We present a case study in which the re-randomisation design is used in a SWAT. RESULTS: In our case study, the host trial is a dental trial with two available follow-up points. The Sticker SWAT tests whether adding a sticker with the trial logo to the questionnaire envelope results in a higher response rate than not adding one. The primary outcome is the response rate to postal questionnaires. The re-randomisation design could double the available sample size compared with a parallel arm trial, allowing detection of an effect size around 28% smaller. CONCLUSION: The re-randomisation design can increase the efficiency and generalisability of SWATs for trials with multiple follow-up time points.
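    A back-of-the-envelope check of the stated gain (with assumed sample sizes, not the SWAT's actual figures): for a two-arm comparison at fixed power, the minimum detectable effect size scales with 1/sqrt(n), so doubling the sample size reduces it by a factor of 1/sqrt(2), i.e. a reduction of just under 30%, consistent with the roughly 28% quoted above.

    from statsmodels.stats.power import NormalIndPower

    solver = NormalIndPower()

    # Smallest standardised effect detectable with 80% power at alpha = 0.05.
    es_parallel = solver.solve_power(nobs1=300, alpha=0.05, power=0.80)  # parallel arm
    es_rerand = solver.solve_power(nobs1=600, alpha=0.05, power=0.80)    # doubled n

    print(es_parallel, es_rerand)
    print(1 - es_rerand / es_parallel)         # about 0.29, i.e. 1 - 1/sqrt(2)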

    Informative cluster size in cluster-randomised trials: A case study from the TRIGGER trial

    Background: Recent work has shown that cluster-randomised trials can estimate two distinct estimands: the participant-average and cluster-average treatment effects. These can differ when participant outcomes or the treatment effect depends on the cluster size (termed informative cluster size). In this case, estimators that target one estimand (such as the analysis of unweighted cluster-level summaries, which targets the cluster-average effect) may be biased for the other. Furthermore, commonly used estimators such as mixed-effects models or generalised estimating equations with an exchangeable correlation structure can be biased for both estimands. However, there has been little empirical research into whether informative cluster size is likely to occur in practice. Methods: We re-analysed a cluster-randomised trial comparing two different thresholds for red blood cell transfusion in patients with acute upper gastrointestinal bleeding to explore whether estimates of the participant- and cluster-average effects differed, providing empirical evidence on whether informative cluster size may be present. For each outcome, we first estimated a participant-average effect using independence estimating equations, which are unbiased under informative cluster size. We then compared this with two further estimates: (1) a cluster-average effect estimated using either weighted independence estimating equations or unweighted cluster-level summaries, and (2) estimates from a mixed-effects model or generalised estimating equations with an exchangeable correlation structure. We then performed a small simulation study to evaluate whether the observed differences between cluster- and participant-average estimates were likely to occur even if no informative cluster size was present. Results: For most outcomes, treatment effect estimates from the different methods were similar. However, differences of >10% occurred between participant- and cluster-average estimates for 5 of 17 outcomes (29%). We also observed several notable differences between estimates from mixed-effects models or generalised estimating equations with an exchangeable correlation structure and those based on independence estimating equations. For example, for the EQ-5D VAS score, the independence estimating equation estimate of the participant-average difference was 4.15 (95% confidence interval: −3.37 to 11.66), compared with 2.84 (95% confidence interval: −7.37 to 13.04) for the cluster-average independence estimating equation estimate, and 3.23 (95% confidence interval: −6.70 to 13.16) from a mixed-effects model. Similarly, for thromboembolic/ischaemic events, the independence estimating equation estimate of the participant-average odds ratio was 0.43 (95% confidence interval: 0.07 to 2.48), compared with 0.33 (95% confidence interval: 0.06 to 1.77) from the cluster-average estimator. Conclusion: In this re-analysis, we found that estimates from the various approaches could differ, which may be due to the presence of informative cluster size. Careful consideration of the estimand and the plausibility of the assumptions underpinning each estimator can help ensure appropriate analysis methods are used. Independence estimating equations and the analysis of cluster-level summaries (with appropriate weighting for each to correspond to either the participant-average or cluster-average treatment effect) are a desirable choice when informative cluster size is deemed possible, due to their unbiasedness in this setting.
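    A toy simulation (not the paper's actual simulation study, and with invented numbers) showing how informative cluster size makes the two estimands diverge: here the treatment works better in large clusters, so the participant-average effect sits close to the large-cluster effect while the cluster-average effect sits near the midpoint.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    rows = []
    for cl in range(400):
        treat = cl % 2                             # alternate allocation for simplicity
        size = rng.choice([5, 50])                 # small or large cluster
        effect = 1.0 if size == 50 else 0.2        # treatment effect depends on size
        y = rng.normal(treat * effect, 1.0, size)
        rows.append(pd.DataFrame({"cluster": cl, "treat": treat, "y": y}))
    df = pd.concat(rows, ignore_index=True)

    # Participant-average effect: every participant weighted equally, so the
    # many participants in large clusters dominate (estimate near 1.0).
    pa = df[df.treat == 1].y.mean() - df[df.treat == 0].y.mean()

    # Cluster-average effect: every cluster weighted equally, so the estimate
    # sits near the midpoint of 0.2 and 1.0.
    cm = df.groupby("cluster").agg(y=("y", "mean"), treat=("treat", "first"))
    ca = cm[cm.treat == 1].y.mean() - cm[cm.treat == 0].y.mean()

    print(pa, ca)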

    A four-step strategy for handling missing outcome data in randomised trials affected by a pandemic.

    BACKGROUND: The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including an inevitably higher rate of missing outcome data, with new and non-standard reasons for missingness. International drug trial guidelines recommend that trialists review their plans for handling missing data in trial conduct and statistical analysis, but clear recommendations are lacking. METHODS: We present a four-step strategy for handling missing outcome data in the analysis of randomised trials that are ongoing during a pandemic. We consider handling missing data arising due to (i) participant infection, (ii) treatment disruptions and (iii) loss to follow-up. We consider both the setting where the treatment effect in a 'pandemic-free world' is of interest and the setting where the effect in a 'world including a pandemic' is of interest. RESULTS: In any trial, investigators should: (1) clarify the treatment estimand of interest with respect to the occurrence of the pandemic; (2) establish what data are missing for the chosen estimand; (3) perform the primary analysis under the most plausible missing data assumptions; and (4) perform sensitivity analyses under alternative plausible assumptions. To obtain an estimate of the treatment effect in a 'pandemic-free world', participant data that are clinically affected by the pandemic (directly due to infection or indirectly via treatment disruptions) are not relevant and can be set to missing. For the primary analysis, a missing-at-random assumption that conditions on all observed data expected to be associated with both the outcome and missingness may be most plausible. For the treatment effect in the 'world including a pandemic', all participant data are relevant and should be included in the analysis. For the primary analysis, a missing-at-random assumption - potentially incorporating a pandemic time-period indicator and participant infection status - or a missing-not-at-random assumption allowing for a poorer response among those with missing data may be most relevant, depending on the setting. In all scenarios, sensitivity analyses under credible missing-not-at-random assumptions should be used to evaluate the robustness of results. We highlight controlled multiple imputation as an accessible tool for conducting sensitivity analyses. CONCLUSIONS: Missing data problems will be exacerbated for trials active during the Covid-19 pandemic. This four-step strategy will facilitate clear thinking about the appropriate analysis for relevant questions of interest.
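    A hedged sketch of delta-based controlled multiple imputation, one common way to implement the sensitivity analyses recommended above (the imputation model, column names and delta values are assumptions for illustration, and a full implementation would also draw the imputation-model parameters between imputations): missing outcomes are imputed under missing-at-random and then shifted by a fixed delta to represent a plausibly poorer response among those with missing data, with estimates pooled using Rubin's rules.

    import numpy as np
    import statsmodels.formula.api as smf

    def delta_adjusted_mi(df, delta, m=20, seed=0):
        """df has columns 'treat', 'baseline' and 'y' (y is NaN when missing)."""
        rng = np.random.default_rng(seed)
        observed = df.dropna(subset=["y"])
        imp_model = smf.ols("y ~ treat + baseline", data=observed).fit()
        sigma = np.sqrt(imp_model.scale)           # residual standard deviation
        estimates, variances = [], []
        for _ in range(m):
            completed = df.copy()
            miss = completed["y"].isna()
            pred = imp_model.predict(completed[miss])
            # MAR imputation plus a delta shift for the sensitivity analysis.
            completed.loc[miss, "y"] = pred + rng.normal(0, sigma, miss.sum()) + delta
            fit = smf.ols("y ~ treat + baseline", data=completed).fit()
            estimates.append(fit.params["treat"])
            variances.append(fit.bse["treat"] ** 2)
        pooled = np.mean(estimates)                # Rubin's rules: pooled estimate
        total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
        return pooled, np.sqrt(total_var)

    # Sensitivity analysis over a range of delta values, e.g.:
    # for delta in [0, -2, -4]:
    #     print(delta, delta_adjusted_mi(trial_data, delta))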