
    Familywise error control in multi-armed response-adaptive trials.

    Response-adaptive designs allow the randomization probabilities to change during the course of a trial based on accumulated response data, so that a greater proportion of patients can be allocated to the better-performing treatments. A major concern over the use of response-adaptive designs in practice, particularly from a regulatory viewpoint, is controlling the type I error rate. In particular, we show that the naïve z-test can have an inflated type I error rate even after applying a Bonferroni correction. Simulation studies have often been used to demonstrate error control but do not provide a guarantee. In this article, we present adaptive testing procedures for normally distributed outcomes that ensure strong familywise error control by iteratively applying the conditional invariance principle. Our approach can be used for fully sequential and block-randomized trials and for a large class of adaptive randomization rules found in the literature. We show there is a high price to pay in terms of power to guarantee familywise error control for randomization schemes with extreme allocation probabilities. However, for Bayesian adaptive randomization schemes proposed in the literature, our adaptive tests maintain or increase the power of the trial compared to the z-test. We illustrate our method using a three-armed trial in primary hypercholesterolemia. DSR and JMSW were funded by the Medical Research Council, grant code MC_UU_00002/6. DSR was also funded by the Biometrika Trust.
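A minimal sketch of the naïve comparator discussed in the abstract: one-sided z-tests of each experimental arm against control with a Bonferroni-adjusted level (assuming known unit variance). This is the baseline the paper shows can be inflated under response-adaptive allocation, not the paper's adaptive testing procedure; all inputs are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def bonferroni_z_tests(means, ns, control_mean, control_n, alpha=0.05):
    """For each of K experimental arms, test its mean against control
    with a one-sided z-test at the Bonferroni-adjusted level alpha / K.
    Outcomes are assumed normal with known variance 1."""
    K = len(means)
    crit = NormalDist().inv_cdf(1 - alpha / K)  # adjusted critical value
    rejections = []
    for m, n in zip(means, ns):
        z = (m - control_mean) / sqrt(1 / n + 1 / control_n)
        rejections.append(z > crit)
    return rejections
```

With two arms of 50 patients each against 50 controls, `bonferroni_z_tests([0.0, 1.0], [50, 50], 0.0, 50)` rejects only for the second arm; under adaptive allocation the sample sizes `ns` become random, which is where the naïve test's error control breaks down.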

    Cross-validated risk scores adaptive enrichment (CADEN) design

    © 2024 The Authors. We propose a Cross-validated ADaptive ENrichment design (CADEN) in which a trial population is enriched with a subpopulation of patients who are predicted to benefit from the treatment more than an average patient (the sensitive group). This subpopulation is found using a risk score constructed from the baseline (potentially high-dimensional) information about patients. The design incorporates an early stopping rule for futility. Simulation studies are used to assess the properties of CADEN against the original (non-enrichment) cross-validated risk scores (CVRS) design, which constructs a risk score at the end of the trial. We show that when there exists a sensitive group of patients, CADEN achieves a higher power and a reduction in the expected sample size compared to the CVRS design. We illustrate the application of the design in two real clinical trials. We conclude that the new design offers improved statistical efficiency over the existing non-enrichment method, as well as increased benefit to patients. The method has been implemented in an R package, caden.
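A hypothetical sketch of the cross-validation idea behind a risk score (not the caden package's implementation): fold-specific weights are estimated out-of-fold, so each patient's score is computed without using their own outcome, avoiding optimistic bias. The weighting scheme here (covariate/outcome covariance) is an illustrative stand-in for the paper's risk score construction.

```python
import numpy as np

def cv_risk_scores(X, y, n_folds=2):
    """Cross-validated risk scores: for each fold, fit simple linear
    weights on the other folds and score the held-out patients."""
    n = len(y)
    scores = np.zeros(n)
    folds = np.array_split(np.arange(n), n_folds)
    for test_idx in folds:
        train = np.setdiff1d(np.arange(n), test_idx)
        # weight each covariate by its covariance with the outcome,
        # estimated on the training folds only
        w = (X[train] * (y[train] - y[train].mean())[:, None]).mean(axis=0)
        scores[test_idx] = X[test_idx] @ w
    return scores
```

Patients whose held-out score exceeds a chosen cutoff would form the candidate sensitive group used for enrichment.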

    Improving the analysis of composite endpoints in rare disease trials

    Background: Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. Methods: We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. Results: For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. Conclusions: The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
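To make the dichotomisation concrete, here is a hypothetical responder index with one continuous and one binary component (the threshold and components are invented, not from the trial re-analysed above). The standard binary analysis keeps only this 0/1 flag; the augmented binary method additionally models the continuous component directly, which is where its efficiency gain comes from.

```python
import numpy as np

def responder_index(continuous_change, binary_success, threshold=0.0):
    """1 if the continuous component improved past the threshold AND
    the binary component is a success, else 0 (a dichotomised composite)."""
    return ((np.asarray(continuous_change) > threshold)
            & (np.asarray(binary_success) == 1)).astype(int)
```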

    Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features

    Background The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review. Methods Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting. Results A total of 123 studies were included in our review, of which 39 were completed trial reports. The standard of reporting of SW-CRCTs varied in quality. The percentage of trials reporting each criterion ranged as low as 15.4%, with a median of 66.7%. Conclusions There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize their reporting. This work was supported by the Wellcome Trust (grant number 099770/Z/12/Z to MJG); the Medical Research Council (grant number MC_UP_1302/2 to APM) and the National Institute for Health Research Cambridge Biomedical Research Centre (MC_UP_1302/4 to JMSW).

    When to keep it simple - adaptive designs are not always useful.

    Background Adaptive designs are a wide class of methods focused on improving the power, efficiency and participant benefit of clinical trials. They do this by allowing information gathered during the trial to be used to make changes in a statistically robust manner - the changes could include which treatment arms patients are enrolled to (e.g. dropping non-promising treatment arms), the allocation ratios, the target sample size or the enrolment criteria of the trial. Generally, we are enthusiastic about adaptive designs and advocate their use in many clinical situations. However, they are not always advantageous. In some situations, they provide little efficiency advantage or are even detrimental to the quality of information provided by the trial. In our experience, factors that reduce the efficiency of adaptive designs are routinely downplayed or ignored in methodological papers, which may lead researchers into believing they are more beneficial than they actually are. Main text In this paper, we discuss situations where adaptive designs may not be as useful, including when the outcomes take a long time to observe, when dropping arms early may cause issues, and when increased practical complexity eliminates theoretical efficiency gains. Conclusion Adaptive designs often provide notable efficiency benefits. However, it is important for investigators to be aware that they do not always provide an advantage. There should always be careful consideration of the potential benefits and disadvantages of an adaptive design.

    To add or not to add a new treatment arm to a multiarm study: A decision-theoretic framework.

    Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended due to their efficiency gains. In practice, not all potential treatments may be ready to be tested in a phase II/III trial at the same time. It has become appealing to allow new treatment arms to be added into on-going clinical trials using a "platform" trial approach. To the best of our knowledge, many aspects of when to add arms to an existing trial have not been explored in the literature. Most work on adding arm(s) assumes that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or reduce the marginal power for each hypothesis if the adaptation is not well accommodated. Within a two-stage trial setting, we propose a decision-theoretic framework to investigate whether or not to add a new treatment arm based on the observed stage one treatment responses. To account for the different prospects of multiarm studies, we define utility in two different ways: one for a trial that aims to maximise the number of rejected hypotheses; the other for a trial that declares success when at least one hypothesis is rejected. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate our framework with a case study of a completed trial on knee osteoarthritis.
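A toy version of the first utility described above, with invented power values rather than quantities derived from stage-one data: utility counts rejected hypotheses, so with independent tests the expected utility of a configuration is the sum of the per-arm rejection probabilities, and the new arm is opened only when it raises that sum. This is a sketch of the comparison logic, not the paper's full decision-theoretic framework.

```python
def expected_rejections(power_per_arm):
    """Expected number of rejected hypotheses, assuming independent
    tests: the sum of per-arm rejection probabilities."""
    return sum(power_per_arm)

def add_new_arm(current_powers, powers_if_added):
    """True if opening the new arm raises expected utility.

    powers_if_added lists the (typically diluted) per-arm powers after
    the adaptation spreads patients over one more arm."""
    return expected_rejections(powers_if_added) > expected_rejections(current_powers)
```

The example captures why the answer is not always "add": if dilution drops the per-arm powers enough, `add_new_arm([0.8, 0.7], [0.4, 0.35, 0.5])` returns `False` even though a third hypothesis could be tested.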

    Bayesian design and analysis of external pilot trials for complex interventions

    External pilot trials of complex interventions are used to help determine if and how a confirmatory trial should be undertaken, providing estimates of parameters such as recruitment, retention, and adherence rates. The decision to progress to the confirmatory trial is typically made by comparing these estimates to pre-specified thresholds known as progression criteria, although the statistical properties of such decision rules are rarely assessed. Such assessment is complicated by several methodological challenges, including the simultaneous evaluation of multiple endpoints, complex multi-level models, small sample sizes, and uncertainty in nuisance parameters. In response to these challenges, we describe a Bayesian approach to the design and analysis of external pilot trials. We show how progression decisions can be made by minimizing the expected value of a loss function, defined over the whole parameter space to allow for preferences and trade-offs between multiple parameters to be articulated and used in the decision-making process. The assessment of preferences is kept feasible by using a piecewise constant parametrization of the loss function, the parameters of which are chosen at the design stage to lead to desirable operating characteristics. We describe a flexible, yet computationally intensive, nested Monte Carlo algorithm for estimating operating characteristics. The method is used to revisit the design of an external pilot trial of a complex intervention designed to increase the physical activity of care home residents.
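A hypothetical sketch of the progression decision with a piecewise constant loss over a single parameter (the region boundaries, loss values, and the restriction to one adherence-rate parameter are all illustrative, not the paper's specification): given posterior draws, pick the decision with the smallest posterior expected loss.

```python
import numpy as np

# Illustrative loss table: rows are decisions, columns are the three
# regions of a piecewise constant partition of the parameter space.
LOSS = {
    "stop":   (0.0, 0.4, 1.0),
    "adjust": (0.3, 0.0, 0.5),
    "go":     (1.0, 0.6, 0.0),
}

def region(theta):
    """Piecewise constant partition of an adherence rate in [0, 1]."""
    return 0 if theta < 0.5 else (1 if theta < 0.8 else 2)

def progression_decision(posterior_draws):
    """Monte Carlo estimate of posterior expected loss per decision;
    return the minimiser."""
    regions = [region(t) for t in posterior_draws]
    exp_loss = {d: np.mean([LOSS[d][r] for r in regions]) for d in LOSS}
    return min(exp_loss, key=exp_loss.get)
```

With posterior mass concentrated on high adherence the rule returns "go"; with mass on low adherence it returns "stop", mirroring how the loss parametrization encodes trade-offs between progression errors.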

    A Bayesian adaptive design for biomarker trials with linked treatments.

    BACKGROUND: Response to treatments is highly heterogeneous in cancer. Increased availability of biomarkers and targeted treatments has led to the need for trial designs that efficiently test new treatments in biomarker-stratified patient subgroups. METHODS: We propose a novel Bayesian adaptive randomisation (BAR) design for use in multi-arm phase II trials where biomarkers exist that are potentially predictive of a linked treatment's effect. The design is motivated in part by two phase II trials that are currently in development. The design starts by randomising patients to the control treatment or to experimental treatments that the biomarker profile suggests should be active. At interim analyses, data from treated patients are used to update the allocation probabilities. If the linked treatments are effective, the allocation remains high; if ineffective, the allocation changes over the course of the trial to unlinked treatments that are more effective. RESULTS: Our proposed design has high power to detect treatment effects if the pairings of treatment with biomarker are correct, but also performs well when alternative pairings are true. The design is consistently more powerful than parallel-group stratified trials. CONCLUSIONS: This BAR design is a powerful approach to use when there are pairings of biomarkers with treatments available for testing simultaneously. This work was supported by the Medical Research Council (grant number G0800860) and the NIHR Cambridge Biomedical Research Centre. This is the final version of the article. It first appeared from NPG via http://dx.doi.org/10.1038/bjc.2015.27
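A minimal sketch of the interim allocation-probability update, assuming binary responses with independent Beta(1, 1) priors per arm (the paper's biomarker linkage and any tuning of the randomisation are omitted): arms are weighted by their posterior mean response rates, so allocation drifts away from arms that accumulate failures.

```python
def bar_allocation(successes, failures):
    """Allocation probabilities proportional to the posterior mean
    response rate of each arm under a Beta(1, 1) prior."""
    post_means = [(s + 1) / (s + f + 2) for s, f in zip(successes, failures)]
    total = sum(post_means)
    return [m / total for m in post_means]
```

For example, after 3/4 responses on arm one and 1/4 on arm two, `bar_allocation([3, 1], [1, 3])` allocates two thirds of new patients to arm one; re-running the update at each interim analysis gives the adaptive behaviour described above.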