Randomization-Based Confidence Intervals for Cluster Randomized Trials
In a cluster randomized trial (CRT), groups of people are randomly assigned to different interventions. Existing parametric and semiparametric methods for CRTs rely on distributional assumptions or a large number of clusters to maintain nominal confidence interval (CI) coverage. Randomization-based inference is an alternative approach that is distribution-free and does not require a large number of clusters to be valid. Although it is well-known that a CI can be obtained by inverting a randomization test, this requires randomization testing a non-zero null hypothesis, which is challenging with non-continuous and survival outcomes. In this paper, we propose a general method for randomization-based CIs using individual-level data from a CRT. This fast and flexible approach accommodates various outcome types, can account for design features such as matching or stratification, and employs a computationally efficient algorithm. We evaluate this method's performance through simulations and apply it to the Botswana Combination Prevention Project, a large HIV prevention trial with an interval-censored time-to-event outcome.
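The test-inversion idea in this abstract can be illustrated with a minimal sketch. For a hypothesized effect delta, outcomes in treated clusters are shifted by delta so that the null of "effect exactly delta" holds, a cluster-level randomization test is run, and the CI is the set of delta values not rejected. The function names, the difference-in-cluster-means statistic, and the grid search are illustrative assumptions, not the paper's actual algorithm (which handles general outcome types and is more computationally efficient).

```python
import numpy as np

rng = np.random.default_rng(0)

def randomization_pvalue(cluster_means, treated, delta, n_perm=2000, rng=rng):
    """Randomization test of H0: treatment effect == delta.

    Under H0, subtracting delta from treated clusters makes the adjusted
    cluster means exchangeable across arms, so we may re-randomize the
    cluster-level treatment labels. (Illustrative sketch, not the paper's
    method.)
    """
    adj = cluster_means - delta * treated            # outcomes under H0
    obs = adj[treated == 1].mean() - adj[treated == 0].mean()
    k = treated.sum()
    count = 0
    for _ in range(n_perm):
        perm = np.zeros_like(treated)
        perm[rng.choice(len(treated), size=k, replace=False)] = 1
        stat = adj[perm == 1].mean() - adj[perm == 0].mean()
        if abs(stat) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

def randomization_ci(cluster_means, treated, grid, alpha=0.05):
    """Invert the test: the CI is the set of deltas not rejected at alpha."""
    keep = [d for d in grid
            if randomization_pvalue(cluster_means, treated, d) > alpha]
    return min(keep), max(keep)
```

A grid search like this requires one randomization test per candidate delta; the paper's contribution is, in part, avoiding that cost.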
Multi-center clinical trials: Randomization and ancillary statistics
The purpose of this paper is to investigate and develop methods for analysis
of multi-center randomized clinical trials which only rely on the randomization
process as a basis of inference. Our motivation is prompted by the fact that
most current statistical procedures used in the analysis of randomized
multi-center studies are model-based; the randomization feature of the trials
is usually ignored. A model-based analysis makes it straightforward to adjust
for covariates, yet in nearly all such analyses the effects due to different
centers and, more generally, the design of the clinical trial are ignored. An
alternative to a model-based analysis is to let the analysis be guided by the
design of the trial. Our development of design-based methods allows the
incorporation of centers as well as other features of the trial design. The
methods make use of conditioning on the
ancillary statistics in the sample space generated by the randomization
process. We have investigated the power of the methods and have found that, in
the presence of center variation, there is a significant increase in power. The
methods have been extended to group sequential trials with similar increases in
power.

Comment: Published at http://dx.doi.org/10.1214/07-AOAS151 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
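A design-guided analysis of the kind this abstract describes can be sketched as a randomization test that conditions on the multi-center design: treatment labels are re-randomized only within each center, holding each center's allocation fixed. The function name, the within-center difference-in-means statistic, and the Monte Carlo enumeration are illustrative assumptions; the paper's actual methods condition on ancillary statistics more generally and extend to group sequential trials.

```python
import numpy as np

def stratified_randomization_test(y, treat, center, n_perm=5000, seed=0):
    """Randomization test that respects a multi-center design.

    Treatment labels are re-randomized *within* each center, keeping the
    per-center allocation fixed, i.e. conditioning on that aspect of the
    design. (Illustrative sketch, not the paper's full procedure.)
    """
    rng = np.random.default_rng(seed)
    centers = np.unique(center)

    def stat(t):
        # average of within-center treated-minus-control mean differences
        diffs = [y[(center == c) & (t == 1)].mean()
                 - y[(center == c) & (t == 0)].mean()
                 for c in centers]
        return np.mean(diffs)

    obs = stat(treat)
    count = 0
    for _ in range(n_perm):
        perm = treat.copy()
        for c in centers:
            idx = np.where(center == c)[0]
            perm[idx] = rng.permutation(treat[idx])  # shuffle within center
        if abs(stat(perm)) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)
```

Because the reference distribution is generated within centers, between-center variation never enters the null distribution, which is the intuition behind the power gain the abstract reports in the presence of center variation.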
Difference-in-differences inference with few treated clusters
Inference using difference-in-differences with clustered data requires care. Previous research has shown that t tests based on a cluster-robust variance estimator (CRVE) severely over-reject when there are few treated clusters, that different variants of the wild cluster bootstrap can over-reject or under-reject severely, and that procedures based on randomization show promise. We demonstrate that randomization inference (RI) procedures based on estimated coefficients, such as the one proposed by Conley and Taber (2011), fail whenever the treated clusters are atypical. We propose an RI procedure based on t statistics which fails only when the treated clusters are atypical and few in number. We also propose a bootstrap-based alternative to randomization inference, which mitigates the discrete nature of RI P values when the number of clusters is small.
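The contrast between coefficient-based and t-statistic-based RI can be illustrated with a minimal sketch on cluster-level pre-to-post changes: for every same-size assignment of "treated" clusters, recompute the t statistic and ask how often it is at least as extreme as the observed one. The function name and the Welch-style unequal-variance t statistic are illustrative assumptions; Conley and Taber (2011) and this paper work with regression-based statistics and CRVE standard errors.

```python
import numpy as np
from itertools import combinations

def ri_t_pvalue(delta, treated_ids):
    """Randomization inference on a t statistic (illustrative sketch).

    `delta` holds one pre-to-post outcome change per cluster; `treated_ids`
    indexes the actually-treated clusters. Every same-size assignment of
    'treated' clusters is enumerated, and the P value is the share of
    assignments whose |t| is at least the observed |t|.
    """
    n, k = len(delta), len(treated_ids)

    def tstat(ids):
        t = np.isin(np.arange(n), list(ids))
        d1, d0 = delta[t], delta[~t]
        # Welch-style unequal-variance t statistic at the cluster level
        se = np.sqrt(d1.var(ddof=1) / len(d1) + d0.var(ddof=1) / len(d0))
        return (d1.mean() - d0.mean()) / se

    obs = abs(tstat(treated_ids))
    ts = [abs(tstat(ids)) for ids in combinations(range(n), k)]
    return float(np.mean([t >= obs for t in ts]))
```

Note the discreteness the abstract mentions: with n clusters and k treated, the P value can only take values that are multiples of 1/C(n, k), which is why a bootstrap-based alternative helps when the number of clusters is small.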
