    Rerandomization to improve covariate balance in experiments

    Randomized experiments are the "gold standard" for estimating causal effects, yet in practice chance imbalances in covariate distributions between treatment groups often arise. If covariate data are available before units are exposed to treatments, these chance imbalances can be mitigated by first checking covariate balance before the physical experiment takes place. Provided a precise definition of imbalance has been specified in advance, unbalanced randomizations can be discarded and the units rerandomized, and this process can continue until a randomization that is balanced according to the definition is achieved. By improving covariate balance, rerandomization provides more precise and trustworthy estimates of treatment effects. Comment: Published at http://dx.doi.org/10.1214/12-AOS1008 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
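
    The accept-reject procedure the abstract describes is easy to sketch. Below is a minimal illustrative Python sketch (not the paper's code): the covariate matrix X, group sizes, and acceptance threshold a are hypothetical, and the balance criterion is a Mahalanobis-type distance between group covariate means.

        import numpy as np

        def rerandomize(X, n_treated, a, rng, max_tries=10_000):
            """Redraw treatment assignments until the Mahalanobis-type
            distance between group covariate means falls below a."""
            n = X.shape[0]
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            for _ in range(max_tries):
                w = np.zeros(n, dtype=bool)
                w[rng.choice(n, size=n_treated, replace=False)] = True
                diff = X[w].mean(axis=0) - X[~w].mean(axis=0)
                # Balance statistic between treated and control group means
                m = n_treated * (n - n_treated) / n * diff @ cov_inv @ diff
                if m <= a:          # balanced enough: accept this randomization
                    return w
            raise RuntimeError("no acceptable randomization found")

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))   # hypothetical covariate matrix
        assignment = rerandomize(X, n_treated=50, a=2.0, rng=rng)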

    Methods for non-proportional hazards in clinical trials: A systematic review

    For the analysis of time-to-event data, frequently used methods such as the log-rank test or the Cox proportional hazards model rely on the proportional hazards assumption, which is often debatable. Although a wide range of parametric and non-parametric methods for non-proportional hazards (NPH) has been proposed, there is no consensus on the best approaches. To close this gap, we conducted a systematic literature search to identify statistical methods and software appropriate under NPH. Our literature search identified 907 abstracts, of which we included 211 articles, mostly methodological ones; review articles and applications were identified less frequently. The articles discuss effect measures, effect estimation and regression approaches, hypothesis tests, and sample size calculation approaches, often tailored to specific NPH situations. Using a unified notation, we provide an overview of the available methods. Furthermore, we derive some guidance from the identified articles. We summarize the contents of the literature review concisely in the main text and provide more detailed explanations in the supplement (page 29).
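
    As a concrete starting point, the proportional hazards assumption the review questions can be probed directly with scaled Schoenfeld residuals. A minimal sketch, assuming the Python lifelines package and its bundled Rossi recidivism example data (not a dataset from the review):

        from lifelines import CoxPHFitter
        from lifelines.datasets import load_rossi

        df = load_rossi()   # example time-to-event data shipped with lifelines
        cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
        # Scaled Schoenfeld residual tests: small p-values flag covariates
        # whose effect drifts over time, i.e. non-proportional hazards.
        cph.check_assumptions(df, p_value_threshold=0.05)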

    Independent increments in group sequential tests: a review

    In order to apply group sequential methods for interim analysis and early stopping in clinical trials, the joint distribution of the test statistics over time has to be known. Often this distribution is multivariate normal, or asymptotically so, and applying group sequential methods then requires multivariate integration to determine the group sequential boundaries. However, if the increments between successive test statistics are independent, the multivariate integration reduces to a univariate integration involving a simple recursion based on convolution, which allows application of standard group sequential methods. In this paper we review group sequential methods and the developments that established independent increments in test statistics for the primary outcomes of longitudinal or failure time data.
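
    A numerical sketch of that reduction (illustrative only: the information fractions and z-scale boundaries below are hypothetical, roughly O'Brien-Fleming-shaped): the sub-density of the score statistic at each look is obtained by convolving the previous look's sub-density with an independent normal increment, and boundary-crossing mass is removed as it accrues.

        import numpy as np
        from scipy.stats import norm

        info = np.array([0.25, 0.50, 0.75, 1.00])   # information at each look
        bounds = np.array([4.05, 2.86, 2.34, 2.02]) # one-sided z boundaries

        grid = np.linspace(-10, 10, 4001)           # grid for the score S_k
        dx = grid[1] - grid[0]

        dens = norm.pdf(grid, scale=np.sqrt(info[0]))  # S_1 ~ N(0, I_1), null
        stop = 0.0
        for k in range(len(info)):
            if k > 0:   # convolve with the independent increment S_k - S_{k-1}
                kernel = norm.pdf(grid, scale=np.sqrt(info[k] - info[k - 1]))
                dens = np.convolve(dens, kernel, mode="same") * dx
            crossed = grid >= bounds[k] * np.sqrt(info[k])  # Z_k >= b_k
            stop += dens[crossed].sum() * dx    # mass stopping at this look
            dens[crossed] = 0.0                 # drop stopped paths, recurse
        print(f"overall one-sided type I error: {stop:.4f}")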

    Improving precision by adjusting for baseline variables in randomized trials with binary outcomes, without regression model assumptions

    In randomized clinical trials with baseline variables that are prognostic for the primary outcome, there is potential to improve precision and reduce sample size by appropriately adjusting for these variables. A major challenge is that there are multiple statistical methods for adjusting for baseline variables, but little guidance on which is best to use in a given context. The choice of method can have important consequences; for example, one commonly used method yields uninterpretable estimates if there is any treatment effect heterogeneity, which would jeopardize the validity of trial conclusions. We give practical guidance on how to avoid this problem while retaining the advantages of covariate adjustment. This can be achieved by using simple (but less well-known) standardization methods from the recent statistics literature. We discuss these methods and provide software in R and Stata implementing them. A data example from a recent stroke trial is used to illustrate these methods.
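
    For intuition, here is a minimal sketch of one such standardization (g-computation) estimator for the risk difference, written in Python with statsmodels on simulated data (the paper's own software is in R and Stata; the variable names and data here are hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def standardized_risk_difference(df, outcome, treatment, covariates):
            """Fit a logistic working model, then contrast average predicted
            risks with everyone set to treated vs. everyone set to control."""
            X = sm.add_constant(df[[treatment] + covariates])
            fit = sm.GLM(df[outcome], X, family=sm.families.Binomial()).fit()
            X1, X0 = X.copy(), X.copy()
            X1[treatment], X0[treatment] = 1, 0
            return fit.predict(X1).mean() - fit.predict(X0).mean()

        rng = np.random.default_rng(1)
        df = pd.DataFrame({"age": rng.normal(60, 10, 500),
                           "trt": rng.integers(0, 2, 500)})
        df["y"] = rng.binomial(1, 0.2 + 0.1 * df["trt"])  # hypothetical outcome
        print(standardized_risk_difference(df, "y", "trt", ["age"]))

    The contrast this estimator targets is a marginal risk difference, which remains interpretable under treatment effect heterogeneity, the property the abstract highlights.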

    Dose Finding with Escalation with Overdose Control (EWOC) in Cancer Clinical Trials

    Traditionally, the major objective in phase I trials is to identify a working dose for subsequent studies, whereas the major endpoint in phase II and III trials is treatment efficacy. The dose sought is typically referred to as the maximum tolerated dose (MTD). Several statistical methodologies have been proposed to select the MTD in cancer phase I trials. In this manuscript, we focus on a Bayesian adaptive design known as escalation with overdose control (EWOC). Several aspects of this design are discussed, including large-sample properties of the sequence of doses selected in the trial, the choice of prior distributions, and the use of covariates. The methodology is illustrated with real-life examples of cancer phase I trials. In particular, we show in the recently completed ABR-217620 (naptumomab estafenatox) trial that omitting an important predictor of toxicity when determining dose assignments to cancer patients results in a high percentage of patients experiencing severe side effects and a significant proportion treated at sub-optimal doses. Comment: Published at http://dx.doi.org/10.1214/10-STS333 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
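
    A toy grid-based sketch of the EWOC principle (not the trial's actual model: the two-parameter logistic dose-toxicity curve, grids, and flat prior are illustrative assumptions): each new dose is the alpha-quantile of the posterior MTD distribution, so the posterior probability of exceeding the MTD is capped at the feasibility bound alpha.

        import numpy as np
        from scipy.special import expit, logit

        theta, alpha = 0.33, 0.25              # target DLT rate, overdose bound
        a_grid = np.linspace(-6, 2, 121)       # intercept grid (flat prior)
        b_grid = np.linspace(0.05, 3, 120)     # positive slope grid
        A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
        post = np.ones_like(A)
        post /= post.sum()

        def update(dose, tox):
            """Multiply the posterior by one patient's likelihood."""
            global post
            p = expit(A + B * dose)            # P(toxicity | dose, a, b)
            post *= p if tox else (1 - p)
            post /= post.sum()

        def next_dose():
            """alpha-quantile of the posterior MTD = (logit(theta) - a) / b."""
            mtd = (logit(theta) - A) / B
            order = np.argsort(mtd.ravel())
            cdf = np.cumsum(post.ravel()[order])
            return mtd.ravel()[order][np.searchsorted(cdf, alpha)]

        update(dose=1.0, tox=False)            # hypothetical first two patients
        update(dose=1.0, tox=False)
        print(f"recommended next dose: {next_dose():.2f}")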

    Thinking outside the box: recent advances in the analysis and presentation of uncertainty in cost-effectiveness studies

    As many more clinical trials collect economic information within their study design, health economics analysts are increasingly working with patient-level data on both costs and effects. In this paper, we review recent advances in the use of statistical methods for the economic analysis of information collected alongside clinical trials. In particular, we focus on the handling and presentation of uncertainty, including the importance of estimation rather than hypothesis testing, the use of the net-benefit statistic, and the presentation of cost-effectiveness acceptability curves. We also discuss appropriate sample size calculations for cost-effectiveness analysis at the design stage of a study. Finally, we outline some of the challenges for future research in this area, particularly in relation to the appropriate use of Bayesian methods and to methods for analyzing costs, which are typically skewed and often incomplete.
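
    A minimal sketch (wholly hypothetical data) of the net-benefit statistic and a cost-effectiveness acceptability curve: for each willingness-to-pay value lambda, bootstrap the probability that the incremental net benefit lambda * dE - dC is positive.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical patient-level costs and effects (QALYs) per arm
        c_new, e_new = rng.gamma(2, 5000, 200), rng.normal(1.1, 0.4, 200)
        c_old, e_old = rng.gamma(2, 4000, 200), rng.normal(1.0, 0.4, 200)

        def ceac(lambdas, n_boot=2000):
            """P(new arm is cost-effective) at each willingness-to-pay."""
            probs = []
            for lam in lambdas:
                wins = 0
                for _ in range(n_boot):     # nonparametric bootstrap per arm
                    i = rng.integers(0, len(c_new), len(c_new))
                    j = rng.integers(0, len(c_old), len(c_old))
                    inb = lam * (e_new[i].mean() - e_old[j].mean()) \
                          - (c_new[i].mean() - c_old[j].mean())
                    wins += inb > 0
                probs.append(wins / n_boot)
            return probs

        for lam, p in zip([0, 20_000, 50_000], ceac([0, 20_000, 50_000])):
            print(f"P(cost-effective | lambda={lam:>6}) = {p:.2f}")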

    Methodology and Application of Adaptive and Sequential Approaches

    The clinical trial, a prospective study that evaluates the effect of interventions in humans under prespecified conditions, is a standard and integral part of modern medicine. Many adaptive and sequential approaches have been proposed for use in clinical trials to allow adaptations or modifications to aspects of a trial after its initiation without undermining its validity and integrity. The application of adaptive and sequential methods in clinical trials has significantly improved the flexibility, efficiency, therapeutic effect, and validity of trials. To further advance the performance of clinical trials and convey the progress of research on adaptive and sequential methods in clinical trial design, we review significant research on novel adaptive and sequential approaches and their applications in phase I, II, and III clinical trials, and discuss future directions in this field of research.