Two-stage phase II oncology designs using short-term endpoints for early stopping
Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with a binary endpoint. As for all designs with an interim analysis, its efficiency strongly depends on the relation between the recruitment rate and the follow-up time required to measure the patients' outcomes. Usually, recruitment is paused once the first-stage sample size has been reached, until the outcomes of all first-stage patients are available. This can considerably increase the trial length and, with it, delay the drug development process. We propose a design in which an intermediate endpoint is used at the interim analysis to decide whether the study continues to a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size, and trial duration are investigated. Guidance is given on how to select the most appropriate design. The application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
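For context, the decision rules and operating characteristics of a classical two-stage single-arm design with a binary endpoint (the design family being extended here) reduce to simple binomial calculations. The Python sketch below is purely illustrative, not the paper's method: function and variable names are invented, and it evaluates Simon's well-known optimal design for a 10% versus 30% response rate rather than the proposed intermediate-endpoint variant, whose interim decision requires different calculations.

```python
from scipy.stats import binom

def two_stage_oc(n1, r1, n, r, p):
    """Operating characteristics of a classical two-stage single-arm
    design: stop for futility after n1 patients if at most r1 respond;
    otherwise recruit to n patients in total and declare the treatment
    promising if the total number of responses exceeds r.

    Returns (P(declare promising), P(early stop), expected sample size)
    when the true response probability is p.
    """
    pet = binom.cdf(r1, n1, p)  # probability of early termination
    # Sum over continuing first-stage outcomes x1. binom.sf(k, m, p) is
    # P(X > k) and equals 1 for k < 0, so first-stage response counts
    # already exceeding r are handled automatically.
    reject = sum(
        binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
        for x1 in range(r1 + 1, n1 + 1)
    )
    ess = n1 + (1 - pet) * (n - n1)  # expected sample size
    return reject, pet, ess

# Simon's optimal design for p0 = 0.10 vs p1 = 0.30: n1=10, r1=1, n=29, r=5
alpha, _, ess0 = two_stage_oc(10, 1, 29, 5, p=0.10)  # under the null
power, _, _ = two_stage_oc(10, 1, 29, 5, p=0.30)     # under the alternative
print(f"type I error {alpha:.3f}, power {power:.3f}, E[N | p0] {ess0:.1f}")
```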
Online multiple hypothesis testing for reproducible research
Modern data analysis frequently involves large-scale hypothesis testing, which naturally gives rise to the problem of maintaining control of a suitable type I error rate, such as the false discovery rate (FDR). In many biomedical and technological applications, an additional complexity is that hypotheses are tested in an online manner, one by one over time. However, traditional procedures that control the FDR, such as the Benjamini-Hochberg procedure, assume that all p-values are available to be tested at a single time point. To address these challenges, a new field of methodology has developed over the past 15 years showing how to control error rates for online multiple hypothesis testing. In this framework, hypotheses arrive in a stream, and at each time point the analyst decides whether to reject the current hypothesis based both on the evidence against it and on the previous rejection decisions. In this paper, we present a comprehensive exposition of the literature on online error rate control, with a review of key theory as well as a focus on applied examples. We also provide simulation results comparing different online testing algorithms and an up-to-date overview of the many methodological extensions that have been proposed.
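As a concrete illustration of this framework, here is a minimal Python sketch of one well-known online FDR procedure, LORD++ (valid under independent p-values, following the rule described by Ramdas et al.). The spending sequence, initial wealth w0, and toy p-value stream are illustrative choices, and the paper compares this family of algorithms much more broadly.

```python
import math

def lord_plus_plus(pvalues, alpha=0.05, w0=0.025):
    """Minimal sketch of the LORD++ online FDR procedure: hypothesis t is
    rejected if p_t <= alpha_t, where the test level depends on the times
    tau_1, tau_2, ... of all earlier rejections,

        alpha_t = gamma(t) * w0 + (alpha - w0) * gamma(t - tau_1)
                  + alpha * sum_{j >= 2} gamma(t - tau_j),

    so each rejection 'earns back' testing budget for later hypotheses.
    Here gamma(s) = 6 / (pi^2 s^2), a nonnegative sequence summing to 1,
    and the initial wealth w0 must not exceed alpha.
    """
    gamma = lambda s: 6.0 / (math.pi ** 2 * s ** 2) if s >= 1 else 0.0
    rejection_times, decisions = [], []
    for t, p in enumerate(pvalues, start=1):
        alpha_t = gamma(t) * w0
        if rejection_times:
            alpha_t += (alpha - w0) * gamma(t - rejection_times[0])
            alpha_t += alpha * sum(gamma(t - tau) for tau in rejection_times[1:])
        reject = p <= alpha_t
        decisions.append(reject)
        if reject:
            rejection_times.append(t)
    return decisions

# Toy stream: early discoveries raise the test levels used later on
print(lord_plus_plus([1e-4, 0.3, 2e-3, 0.8, 4e-3, 0.04]))
# [True, False, True, False, True, False]
```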
Optimal Bayesian stepped-wedge cluster randomised trial designs for binary outcome data
Under a generalised estimating equation analysis approach, approximate design theory is used to determine Bayesian D-optimal designs. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of identified optimal designs to balanced stepped-wedge designs and corresponding stepped-wedge designs determined by optimising using a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, along with larger values of the intra-cluster correlation, leads to designs closer to a balanced design being optimal. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
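To make the core calculation concrete, the sketch below computes the model-based GEE information matrix for a given stepped-wedge design with a binary outcome under an exchangeable working correlation, whose log-determinant is the D-criterion. It is only an illustration of the ingredient involved, not the paper's method: function and parameter names are invented, the model assumes cross-sectional sampling with m subjects per cluster-period, and a frequentist criterion is evaluated at fixed parameter values, whereas the Bayesian version would average the log-determinant over a prior on (beta, theta).

```python
import numpy as np
from scipy.special import expit

def gee_information(X_trt, beta_period, theta, rho, m):
    """Model-based GEE information for a stepped-wedge design with a
    binary outcome. X_trt[i, j] = 1 if cluster i is treated in period j;
    marginal model logit(mu_ij) = beta_period[j] + theta * X_trt[i, j];
    exchangeable working correlation rho between the T cluster-period
    mean responses; m subjects per cluster-period."""
    C, T = X_trt.shape
    R = (1 - rho) * np.eye(T) + rho * np.ones((T, T))
    info = np.zeros((T + 1, T + 1))  # parameters (beta_1, ..., beta_T, theta)
    for i in range(C):
        mu = expit(beta_period + theta * X_trt[i])
        v = mu * (1 - mu)
        # Derivative of the T period means w.r.t. the T + 1 parameters
        D = np.hstack([np.eye(T), X_trt[i][:, None]]) * v[:, None]
        A_half = np.diag(np.sqrt(v / m))      # sd of each cluster-period mean
        V = A_half @ R @ A_half               # working covariance
        info += D.T @ np.linalg.solve(V, D)   # cluster's information contribution
    return info

# Classical balanced stepped wedge: 4 clusters, 5 periods, one cluster
# switching to the intervention at each period after the first
X = (np.arange(5)[None, :] >= np.arange(1, 5)[:, None]).astype(float)
info = gee_information(X, beta_period=np.zeros(5), theta=0.5, rho=0.05, m=10)
print("log det information (D-criterion):", np.linalg.slogdet(info)[1])
```

Comparing this criterion across candidate designs, or maximising its prior expectation, is what identifies the (Bayesian) D-optimal allocation of clusters to switch times.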
Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features
Background
The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review.
Methods
Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting.
Results
A total of 123 studies were included in our review, of which 39 were completed trial reports. The standard of reporting of SW-CRCTs was variable: the percentage of trials reporting each criterion fell as low as 15.4%, with a median of 66.7%.
Conclusions
There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize their reporting.