Negations in syllogistic reasoning: Evidence for a heuristic–analytic conflict
An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). By adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994), participants should exhibit increased response times for negated premises but should be relatively unaffected by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data, whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.
Group sequential designs for stepped-wedge cluster randomised trials.
BACKGROUND/AIMS: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. METHODS: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trial designs with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. RESULTS: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. CONCLUSION: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poorly performing treatments and also help expedite the implementation of efficacious treatments.
In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials, according to the needs of the particular trial.
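As a rough illustration of where such expected-sample-size savings come from (not the mixed-model computation used in the paper), consider a hypothetical two-stage design in which the stage-1 efficacy boundary is set by a linear error-spending rule. All stage sizes, the information fraction, and the futility boundary below are illustrative assumptions:

```python
from scipy.stats import norm

def two_stage_expected_n(n1, n2, alpha=0.05, t1=0.5, futility=0.0):
    """Expected sample size under H0 of a hypothetical two-stage design.

    The stage-1 efficacy boundary spends alpha * t1 of the one-sided
    type-I error, following a linear error-spending function f(t) = alpha * t.
    """
    e1 = norm.ppf(1.0 - alpha * t1)         # stage-1 efficacy boundary
    # Under H0 the stage-1 statistic is standard normal; the trial
    # continues only if it falls between the futility and efficacy bounds.
    p_continue = norm.cdf(e1) - norm.cdf(futility)
    return n1 + n2 * p_continue

fixed = 200                                 # single-look design of the same total size
expected = two_stage_expected_n(100, 100)   # two looks, hypothetical boundaries
```

Because the trial sometimes stops at the interim, the expected sample size falls below that of the fixed design, which is the mechanism behind the reported 31% and 22% reductions.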
Group sequential crossover trial designs with strong control of the familywise error rate.
Crossover designs are an extremely useful tool to investigators, and group sequential methods have proven highly proficient at improving the efficiency of parallel group trials. Yet, group sequential methods and crossover designs have rarely been paired together. One possible explanation for this could be the absence of a formal proof of how to strongly control the familywise error rate in the case when multiple comparisons will be made. Here, we provide this proof, valid for any number of initial experimental treatments and any number of stages, when results are analyzed using a linear mixed model. We then establish formulae for the expected sample size and expected number of observations of such a trial, given any choice of stopping boundaries. Finally, utilizing the four-treatment, four-period TOMADO trial as an example, we demonstrate that group sequential methods in this setting could have reduced the trial's expected number of observations under the global null hypothesis by over 33%.
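The expected-number-of-observations calculation can be sketched by Monte Carlo for a single comparison (the paper's multi-treatment linear mixed model analysis is considerably more involved). The stage statistics follow the canonical joint distribution, with correlation equal to the square root of the information fraction; all boundary values and stage sizes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000

obs_per_stage = (60, 60)       # observations accrued by each stage (hypothetical)
info_frac = 0.5                # I1 / I2: information fraction at the interim
e1, f1, e2 = 2.3, 0.2, 2.0     # illustrative efficacy/futility boundaries

# canonical joint distribution under the null: corr(Z1, Z2) = sqrt(I1 / I2)
z1 = rng.standard_normal(n_sim)
z2 = np.sqrt(info_frac) * z1 + np.sqrt(1.0 - info_frac) * rng.standard_normal(n_sim)

continue_ = (z1 > f1) & (z1 < e1)
expected_obs = obs_per_stage[0] + obs_per_stage[1] * continue_.mean()
type1 = ((z1 >= e1) | (continue_ & (z2 >= e2))).mean()   # overall type-I error
```

Under the null, the trial frequently stops at the interim for futility, so the expected number of observations sits well below the maximum, mirroring the over-33% saving reported for the TOMADO example.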
An optimised multi-arm multi-stage clinical trial design for unknown variance.
Multi-arm multi-stage trial designs can bring notable gains in efficiency to the drug development process. However, for normally distributed endpoints, the determination of a design typically depends on the assumption that the patient variance in response is known. In practice, this will not usually be the case. To allow for unknown variance, previous research explored the performance of t-test statistics, coupled with a quantile substitution procedure for modifying the stopping boundaries, at controlling the familywise error rate to the nominal level. Here, we discuss an alternative method based on Monte Carlo simulation that allows the group size and stopping boundaries of a multi-arm multi-stage t-test to be optimised, according to some nominated optimality criteria. We consider several examples, provide R code for general implementation, and show that our designs confer a familywise error rate and power close to the desired level. Consequently, this methodology will provide utility in future multi-arm multi-stage trials.
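The quantile substitution procedure mentioned in these abstracts can be sketched as follows: a stopping boundary derived under a known-variance (normal) assumption is mapped to the t scale by matching marginal tail probabilities. The boundary value and degrees of freedom below are hypothetical, chosen purely for illustration:

```python
from scipy.stats import norm, t as tdist

def quantile_substitute(z_boundary, df):
    """Map a normal-theory stopping boundary to the t scale by matching
    the marginal tail probability: c_t satisfies P(T_df > c_t) = P(Z > z)."""
    return tdist.ppf(norm.cdf(z_boundary), df)

z = 2.24                                     # hypothetical normal-theory boundary
c_small = quantile_substitute(z, df=10)      # few degrees of freedom: larger boundary
c_large = quantile_substitute(z, df=1000)    # many degrees of freedom: near-normal
```

Because the t distribution has heavier tails, the substituted boundary exceeds the normal one, with the inflation shrinking as the degrees of freedom grow; the abstract's point is that simulation-based optimisation can do better than this simple marginal adjustment.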
Optimal Bayesian stepped-wedge cluster randomised trial designs for binary outcome data
Under a generalised estimating equation analysis approach, approximate design theory is used to determine Bayesian D-optimal designs. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of identified optimal designs to balanced stepped-wedge designs and corresponding stepped-wedge designs determined by optimising using a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, along with larger values of the intra-cluster correlation, leads to designs closer to a balanced design being optimal. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
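The flavour of approximate design theory used here can be illustrated with a deliberately tiny example: D-optimality for a two-parameter logistic model, where a design is a set of allocation weights over candidate points and the criterion is the log-determinant of the information matrix. This toy model, its candidate points, and the crude grid search are all illustrative stand-ins for the paper's stepped-wedge GEE setting:

```python
import numpy as np

def log_det_info(weights, xs, beta=(0.0, 1.0)):
    """D-criterion (log-determinant of the information matrix) of a
    two-parameter logistic model under an approximate design: candidate
    points xs receive allocation weights summing to one."""
    b0, b1 = beta
    M = np.zeros((2, 2))
    for w, x in zip(weights, xs):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        f = np.array([1.0, x])
        M += w * p * (1.0 - p) * np.outer(f, f)   # binary-outcome information
    return np.linalg.slogdet(M)[1]

xs = (-2.0, 0.0, 2.0)                      # hypothetical candidate design points
balanced = log_det_info((1/3, 1/3, 1/3), xs)

# crude grid search over the weight simplex for a D-optimal allocation
grid = np.linspace(0.0, 1.0, 51)
candidates = [(w1, w2, 1.0 - w1 - w2)
              for w1 in grid for w2 in grid if w1 + w2 <= 1.0]
best = max(candidates, key=lambda w: log_det_info(w, xs))
best_val = log_det_info(best, xs)
```

Even in this miniature setting the D-optimal allocation beats the balanced one, though often only slightly, echoing the abstract's finding that the efficiency gain over a balanced design can be large or small depending on the setting.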
Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features
Abstract
Background
The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review.
Methods
Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting.
Results
A total of 123 studies were included in our review, of which 39 were completed trial reports. The standard of reporting of SW-CRCTs varied in quality. The percentage of trials reporting each criterion ranged as low as 15.4%, with a median across criteria of 66.7%.
Conclusions
There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize the reporting of SW-CRCTs.
Using Physiologically-Based Pharmacokinetic Models to Incorporate Chemical and Non-Chemical Stressors into Cumulative Risk Assessment: A Case Study of Pesticide Exposures
Cumulative risk assessment has been proposed as an approach to evaluate the health risks associated with simultaneous exposure to multiple chemical and non-chemical stressors. Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models can allow for the inclusion and evaluation of multiple stressors, including non-chemical stressors, but studies have not leveraged PBPK/PD models to jointly consider these disparate exposures in a cumulative risk context. In this study, we focused on exposures to organophosphate (OP) pesticides for children in urban low-income environments, where these children would be simultaneously exposed to other pesticides (including pyrethroids) and non-chemical stressors that may modify the effects of these exposures (including diet). We developed a methodological framework to evaluate chemical and non-chemical stressor impacts on OPs, utilizing an existing PBPK/PD model for chlorpyrifos. We evaluated population-specific stressors that would influence OP doses or acetylcholinesterase (AChE) inhibition, the relevant PD outcome. We incorporated the impact of simultaneous exposure to pyrethroids and dietary factors on OP dose through the compartments of metabolism and PD outcome within the PBPK model, and simulated combinations of stressors across multiple exposure ranges and potential body weights. Our analyses demonstrated that both chemical and non-chemical stressors can influence the health implications of OP exposures, with up to 5-fold variability in AChE inhibition across combinations of stressor values for a given OP dose. We demonstrate an approach for modeling OP risks in the presence of other population-specific environmental stressors, providing insight about co-exposures and variability factors that most impact OP health risks and contribute to children’s cumulative health risk from pesticides. 
More generally, this framework can be used to inform cumulative risk assessment for any compound impacted by chemical and non-chemical stressors through metabolism or PD outcomes.
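A heavily simplified sketch of the mechanism being modelled: a one-compartment PK model with reversible AChE inhibition, where a non-chemical stressor is assumed to reduce metabolic clearance. All rate constants, the clearance values, and the stressor effect are hypothetical; this is not the chlorpyrifos PBPK/PD model used in the study:

```python
def simulate_ache(clearance, dose=1.0, ki=0.8, kr=0.05, dt=0.01, t_end=24.0):
    """Euler integration of a toy one-compartment PK model coupled to
    reversible AChE inhibition. Returns the minimum fractional AChE
    activity reached over the simulation (lower = more inhibition)."""
    c = dose          # plasma concentration (arbitrary units)
    e = 1.0           # fractional AChE activity (1.0 = fully active)
    e_min = e
    for _ in range(int(t_end / dt)):
        dc = -clearance * c                   # first-order elimination
        de = -ki * c * e + kr * (1.0 - e)     # inhibition plus recovery
        c += dc * dt
        e += de * dt
        e_min = min(e_min, e)
    return e_min

baseline = simulate_ache(clearance=0.5)
# a non-chemical stressor (e.g. a dietary factor) assumed to halve clearance
stressed = simulate_ache(clearance=0.25)
```

Slower clearance raises the internal dose (AUC) for the same external exposure, deepening AChE inhibition; this is the kind of stressor-driven variability the abstract quantifies at up to 5-fold for a given OP dose.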
Statistical consideration when adding new arms to ongoing clinical trials: the potentials and the caveats
BACKGROUND: Platform trials improve the efficiency of the drug development process through flexible features such as adding and dropping arms as evidence emerges. The benefits and practical challenges of implementing novel trial designs have been discussed widely in the literature, yet less consideration has been given to the statistical implications of adding arms. MAIN: We explain different statistical considerations that arise from allowing new research interventions to be added in for ongoing studies. We present recent methodology development on addressing these issues and illustrate design and analysis approaches that might be enhanced to provide robust inference from platform trials. We also discuss the implications of changing the control arm, how patient eligibility for different arms may complicate the trial design and analysis, and how operational bias may arise when revealing some results of the trials. Lastly, we comment on the appropriateness and the application of platform trials in phase II and phase III settings, as well as publicly versus industry-funded trials. CONCLUSION: Platform trials provide great opportunities for improving the efficiency of evaluating interventions. Although several statistical issues are present, there are a range of methods available that allow robust and efficient design and analysis of these trials.