
    On the sample mean after a group sequential trial

    A popular setting in medical statistics is a group sequential trial with independent and identically distributed normal outcomes, in which interim analyses of the sum of the outcomes are performed. Based on a prescribed stopping rule, one decides after each interim analysis whether the trial is stopped or continued. Consequently, the actual length of the study is a random variable. It is reported in the literature that the interim analyses may cause bias if one uses the ordinary sample mean to estimate the location parameter. For a generic stopping rule, which contains many classical stopping rules as special cases, explicit formulas for the expected length of the trial, the bias, and the mean squared error (MSE) are provided. It is deduced that, for a fixed number of interim analyses, the bias and the MSE converge to zero if the first interim analysis is performed not too early. In addition, optimal rates for this convergence are provided. Furthermore, under a regularity condition, asymptotic normality in total variation distance for the sample mean is established. A conclusion for naive confidence intervals based on the sample mean is derived. It is also shown how the developed theory naturally fits in the broader framework of likelihood theory in a group sequential trial setting. A simulation study underpins the theoretical findings. Comment: 52 pages (supplementary data file included).

    An Evaluation of Inferential Procedures for Adaptive Clinical Trial Designs with Pre-specified Rules for Modifying the Sample Size

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of the treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, the likelihood ratio test statistic, and the sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and higher probabilities of P-values below important thresholds than alternative approaches. The bias-adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature.
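    As a sketch of one of the orderings mentioned above, the stagewise ordering for a two-stage group sequential test ranks outcomes first by the stage at stopping (an efficacy stop at stage 1 is more extreme than any stage-2 outcome) and then by the test statistic within a stage. A simulation-based P-value under this ordering might look as follows; the boundary, sample sizes, and the use of plain Monte Carlo rather than exact numerical integration are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 20, 20
b1 = 2.5                      # illustrative stage-1 efficacy boundary (z-scale)

def run_trial(mu):
    """Return (stage stopped, z statistic) for one simulated trial."""
    x1 = rng.normal(mu, 1.0, n1)
    z1 = x1.sum() / np.sqrt(n1)
    if z1 >= b1:              # cross the boundary: stop at stage 1
        return 1, z1
    x2 = rng.normal(mu, 1.0, n2)
    z2 = (x1.sum() + x2.sum()) / np.sqrt(n1 + n2)
    return 2, z2

# Null reference distribution of (stage, z)
null = [run_trial(0.0) for _ in range(100_000)]

def stagewise_p(stage_obs, z_obs):
    """Stagewise ordering: any stage-1 efficacy stop is more extreme than
    any stage-2 outcome; within a stage, larger z is more extreme."""
    count = sum(
        (s < stage_obs) or (s == stage_obs and z >= z_obs)
        for s, z in null
    )
    return count / len(null)

print(stagewise_p(2, 2.0))    # P-value for a trial reaching stage 2 with z = 2
```

    Inverting such an ordering-based P-value over a grid of hypothesised effects is what produces the exact confidence intervals and median-unbiased estimates referred to in the abstract.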

    Adaptive methodologies in multi-arm dose response and biosimilarity clinical trials

    As most adaptive clinical trial designs are implemented in stages, well-understood methods of sequential trial monitoring are needed. In the frequentist paradigm, examples of sequential monitoring methodologies include p-value combination tests, conditional error, conditional power, and alpha spending approaches. Within the Bayesian framework, posterior and predictive probabilities are used as monitoring criteria, with the latter being analogous to the conditional power approach. In a placebo- or active-controlled dose response clinical trial, we are interested in achieving two objectives: selecting the best therapeutic dose and confirming this selected dose. The traditional approach uses the parallel-group design with Dunnett's adjustment. Recently, some two-stage seamless Phase II/III designs have been proposed. The drop-the-losers design considers selecting the dose with the highest empirical mean after the first stage, while another design assumes a dose-response model to aid dose selection. These designs, however, do not consider prioritizing the doses or adaptively inserting new doses. We propose an adaptive staggered dose design for a normal endpoint that makes minimal assumptions regarding the dose response and sequentially adds doses to the trial. An alpha spending function is applied in a novel way to monitor the doses across the trial. Through numerical and simulation studies, we confirm that optimistic alpha spending coupled with informative dose ordering jointly produce desirable operating characteristics when compared to drop-the-losers and model-based seamless designs. In addition, we show how the design parameters can be flexibly varied to further improve its performance and how it can be extended to binary and survival endpoints. In a biosimilarity trial, we are interested in establishing evidence of comparable efficacy between a follow-on biological product and a reference innovator product. So far, no standard method for biosimilarity has been endorsed by regulatory agencies. We propose a Bayesian hierarchical bias model and a non-inferiority hypothesis framework to prove biosimilarity. A two-stage adaptive design using predictive probability as an early stopping criterion is proposed. Through a simulation study, we show that the proposed design controls the type I error better than the frequentist approach and that Bayesian power is superior when biosimilarity is plausible. The two-stage design further reduces the expected sample size.
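    The predictive-probability stopping criterion mentioned above is easiest to see for a binary endpoint with a conjugate beta prior. The abstract's design uses a normal endpoint and a hierarchical bias model, so the standard beta-binomial formulation below is a simplified stand-in, and all numerical settings are hypothetical:

```python
from scipy.stats import beta, betabinom

def predictive_probability(x, n, m, a=1, b=1, p0=0.3, theta=0.95):
    """Probability that, after m further patients, the final posterior
    satisfies P(p > p0 | data) > theta, given x successes in n interim
    patients and a Beta(a, b) prior. Future successes y follow the
    beta-binomial predictive distribution BetaBinom(m, a + x, b + n - x)."""
    pp = 0.0
    for y in range(m + 1):
        # Posterior success criterion evaluated at the end of the trial
        if beta.sf(p0, a + x + y, b + n - x + m - y) > theta:
            pp += betabinom.pmf(y, m, a + x, b + n - x)
    return pp

print(predictive_probability(x=5, n=20, m=20))   # low: supports stopping for futility
print(predictive_probability(x=12, n=20, m=20))  # high: supports continuing
```

    At an interim look, a low predictive probability supports stopping for futility, while a high value supports continuing (or, with a suitable boundary, stopping early for success), which is the role the criterion plays in the two-stage design above.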

    Comparison of Bayesian and frequentist group-sequential clinical trial designs

    Background: There is a growing interest in the use of Bayesian adaptive designs in late-phase clinical trials. This includes the use of stopping rules based on Bayesian analyses in which the frequentist type I error rate is controlled as in frequentist group-sequential designs. Methods: This paper presents a practical comparison of Bayesian and frequentist group-sequential tests. Focussing on the setting in which data can be summarised by normally distributed test statistics, we evaluate and compare boundary values and operating characteristics. Results: Although Bayesian and frequentist group-sequential approaches are based on fundamentally different paradigms, in a single arm trial or two-arm comparative trial with a prior distribution specified for the treatment difference, Bayesian and frequentist group-sequential tests can have identical stopping rules if particular critical values with which the posterior probability is compared or particular spending function values are chosen. If the Bayesian critical values at different looks are restricted to be equal, O’Brien and Fleming’s design corresponds to a Bayesian design with an exceptionally informative negative prior, Pocock’s design to a Bayesian design with a non-informative prior and frequentist designs with a linear alpha spending function are very similar to Bayesian designs with slightly informative priors. This contrasts with the setting of a comparative trial with independent prior distributions specified for treatment effects in different groups. In this case Bayesian and frequentist group-sequential tests cannot have the same stopping rule as the Bayesian stopping rule depends on the observed means in the two groups and not just on their difference. In this setting the Bayesian test can only be guaranteed to control the type I error for a specified range of values of the control group treatment effect. 
Conclusions: Comparison of frequentist and Bayesian designs can encourage careful thought about design parameters and help to ensure appropriate design choices are made.
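    The single-arm correspondence described in the Results can be checked numerically: with a flat (non-informative) prior on the treatment effect theta and z the standardised test statistic after n observations, the posterior probability P(theta > 0 | data) equals Phi(z), so a posterior threshold gamma reproduces the frequentist rule z > Phi^{-1}(gamma). The sample size and threshold below are arbitrary illustrations:

```python
import numpy as np
from scipy.stats import norm

n = 25                      # observations at this interim look
gamma = 0.99                # Bayesian threshold on the posterior probability
c = norm.ppf(gamma)         # implied frequentist critical value

for z in np.linspace(1.5, 3.0, 7):
    xbar = z / np.sqrt(n)   # sample mean giving this z value (unit-variance outcomes)
    # Flat prior, N(theta, 1) outcomes: the posterior is N(xbar, 1/n),
    # so P(theta > 0 | data) = Phi(z)
    post = 1 - norm.cdf(0, loc=xbar, scale=1 / np.sqrt(n))
    assert (post > gamma) == (z > c)   # the two stopping rules agree exactly
    print(f"z = {z:.2f}  posterior P(theta > 0 | data) = {post:.4f}")
```

    With independent priors on the two arms, by contrast, the posterior depends on both group means separately, so no single critical value on the difference can reproduce the Bayesian rule, which is the failure of equivalence noted in the Results.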

    Bayesian clinical trial designs : Another option for trauma trials?

    The UK-REBOA Trial is funded by the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme (project number 14/199/09). PP was supported by the MRC Network of Hubs for Trials Methodology Research (MR/L004933/1-R/N/P/B1).

    Dissociating Explicit and Implicit Timing in Parkinson's Disease Patients: Evidence from Bisection and Foreperiod Tasks

    A consistent body of literature has reported that Parkinson's disease (PD) is marked by severe deficits in temporal processing. However, the exact nature of timing problems in PD patients is still elusive. In particular, what remains unclear is whether the temporal dysfunction observed in PD patients involves explicit and/or implicit timing. Explicit timing tasks require participants to attend to the duration of the stimulus, whereas in implicit timing tasks no explicit instruction to process time is received but time still affects performance. In the present study, we investigated temporal ability in PD by comparing 20 PD participants and 20 control participants in both explicit and implicit timing tasks. Specifically, we used a time bisection task to investigate explicit timing and a foreperiod task for implicit timing. Moreover, this is the first study investigating sequential effects in PD participants. Results showed preserved temporal ability in PD participants in the implicit timing task only (i.e., normal foreperiod and sequential effects). By contrast, PD participants failed in the explicit timing task, as they displayed shorter perceived durations and higher variability compared to controls. Overall, the dissociation reported here supports the idea that timing can be differentiated according to whether it is explicitly or implicitly processed, and that PD participants are selectively impaired in the explicit processing of time.