Comparison of multimarker logistic regression models, with application to a genomewide scan of schizophrenia.
BACKGROUND: Genome-wide association studies (GWAS) are a widely used study design for detecting genetic causes of complex diseases. Current studies provide good coverage of common causal SNPs, but not rare ones. A popular method to detect rare causal variants is haplotype testing. A disadvantage of this approach is that many parameters are estimated simultaneously, which can mean a loss of power and slower fitting to large datasets. Haplotype testing effectively tests both the allele frequencies and the linkage disequilibrium (LD) structure of the data. LD has previously been shown to be mostly attributable to LD between adjacent SNPs. We propose a generalised linear model (GLM) which models the effects of each SNP in a region as well as the statistical interactions between adjacent pairs. This is compared to two other commonly used multimarker GLMs: one with a main-effect parameter for each SNP; one with a parameter for each haplotype. RESULTS: We show the haplotype model has higher power for rare untyped causal SNPs, the main-effects model has higher power for common untyped causal SNPs, and the proposed model generally has power in between the other two. We show that the relative power of the three methods depends on the number of marker haplotypes the causal allele is present on, which in turn depends on the age of the mutation. Except in the case of a common causal variant in high LD with markers, all three multimarker models are superior in power to single-SNP tests. Including the adjacent statistical interactions results in lower inflation in test statistics when a realistic level of population stratification is present in a dataset. Using the multimarker models, we analyse data from the Molecular Genetics of Schizophrenia study. The multimarker models find potential associations that are not found by single-SNP tests. However, multimarker models also require stricter control of data quality, since biases can have a larger inflationary effect on multimarker test statistics than on single-SNP test statistics. CONCLUSIONS: Analysing a GWAS with multimarker models can yield candidate regions which may contain rare untyped causal variants. This is useful for increasing prior odds of association in future whole-genome sequence analyses.
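The proposed model is a logistic GLM with one main-effect term per SNP plus interaction terms for adjacent SNP pairs. The sketch below illustrates that model structure; the simulated data, the 0/1/2 genotype coding, the product encoding of the adjacent interactions and the use of statsmodels are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

def adjacent_interaction_design(genotypes):
    """Design matrix with a main-effect column per SNP plus one interaction
    column for each adjacent SNP pair (genotypes coded 0/1/2)."""
    main = genotypes.astype(float)
    adjacent = main[:, :-1] * main[:, 1:]   # products of neighbouring SNP columns
    return sm.add_constant(np.hstack([main, adjacent]))

# Illustrative data: 500 subjects, 6 SNPs in a region, binary case/control status.
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(500, 6))
y = rng.integers(0, 2, size=500)

X = adjacent_interaction_design(G)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# A region-level test compares this model with the intercept-only null via a
# likelihood-ratio statistic (here on 6 main-effect + 5 interaction df).
null = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial()).fit()
print(fit.summary())
print("LR statistic:", null.deviance - fit.deviance)
```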
Multi-arm multi-stage trials can improve the efficiency of finding effective treatments for stroke: a case study.
BACKGROUND: Many recent stroke trials have failed to show a beneficial effect of the intervention late in development. Currently, a large number of new treatment options are being developed. Multi-arm multi-stage (MAMS) designs offer one potential strategy to avoid lengthy studies of treatments without beneficial effects while at the same time allowing evaluation of several novel treatments. In this paper we review what MAMS designs are and argue that they are of particular value for stroke trials. We illustrate this benefit through a case study based on previously published trials of endovascular treatment for acute ischemic stroke. We show in this case study that MAMS trials provide additional power for the same sample size compared to alternative trial designs. The size of this additional power depends on the recruitment length of the trial, with the greatest efficiency gained when recruitment is relatively slow. We conclude with a discussion of additional considerations required when starting a MAMS trial. CONCLUSION: MAMS trial designs are potentially very useful for stroke trials due to their improved statistical power compared to the traditional approach. This work was supported in part by grants from the National Institute for Health Research (NIHR-SRF-2015-08-001, TJ) and the Medical Research Council (SLAH/210, JW).
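One source of the power gain described here is that a multi-arm design shares a single control group. The small sketch below shows only that component of the efficiency argument (it ignores the multi-stage stopping rules and recruitment-length effects discussed in the paper); the total sample size, effect size and alpha level are assumed values for illustration.

```python
import numpy as np
from scipy import stats

def power_two_arm(n_per_arm, delta, alpha=0.025):
    """One-sided power for one experimental arm vs control with unit-variance
    normal outcomes and n_per_arm patients in each arm."""
    z_crit = stats.norm.ppf(1 - alpha)
    return 1 - stats.norm.cdf(z_crit - delta * np.sqrt(n_per_arm / 2))

N, delta = 600, 0.3   # total patients and standardised treatment effect (assumed)

# Two separate two-arm trials: each uses N/2 patients, i.e. N/4 per arm.
separate = power_two_arm(N / 4, delta)
# One multi-arm trial, two experimental arms sharing one control: N/3 per arm.
shared_control = power_two_arm(N / 3, delta)

print(f"Power per comparison, separate trials: {separate:.3f}")
print(f"Power per comparison, shared control:  {shared_control:.3f}")
```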
Optimal design of multi-arm multi-stage trials.
In drug development, there is often uncertainty about the most promising among a set of different treatments. Multi-arm multi-stage (MAMS) trials provide large gains in efficiency over separate randomised trials of each treatment. They allow a shared control group, dropping of ineffective treatments before the end of the trial and stopping the trial early if sufficient evidence of a treatment being superior to control is found. In this paper, we discuss optimal design of MAMS trials. An optimal design has the required type I error rate and power but minimises the expected sample size at some set of treatment effects. Finding an optimal design requires searching over stopping boundaries and sample size, potentially a large number of parameters. We propose a method that combines quick evaluation of specific designs and an efficient stochastic search to find the optimal design parameters. We compare various potential designs motivated by the design of a phase II MAMS trial. We also consider allocating more patients to the control group, as has been carried out in real MAMS studies. We show that the optimal allocation to the control group, although greater than a 1:1 ratio, is smaller than previously advocated and that the gain in efficiency is generally small.
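The optimisation described here pairs fast evaluation of candidate designs with a stochastic search over boundaries and sample sizes. As a hedged illustration of that idea (not the authors' algorithm), the sketch below evaluates a simplified two-stage design with a single experimental arm versus control by Monte Carlo and uses plain random search to minimise the expected sample size under the null, subject to type I error and power constraints; all numerical settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def operating_characteristics(n1, n2, lower, upper, delta, sims=10_000):
    """Monte Carlo operating characteristics of a two-stage design comparing one
    experimental arm with control (unit-variance normal outcomes, 1:1 allocation).
    Returns (probability of rejecting H0, expected total sample size)."""
    z1 = rng.normal(delta * np.sqrt(n1 / 2), 1.0, sims)       # stage-1 z statistic
    stop_efficacy = z1 >= upper
    stop_futility = z1 <= lower
    continue_on = ~(stop_efficacy | stop_futility)
    # Independent increment for the further n2 patients per arm, combined into
    # the cumulative statistic on n1 + n2 patients per arm.
    w = rng.normal(delta * np.sqrt(n2 / 2), 1.0, sims)
    z2 = np.sqrt(n1 / (n1 + n2)) * z1 + np.sqrt(n2 / (n1 + n2)) * w
    reject = stop_efficacy | (continue_on & (z2 >= upper))
    n_total = 2 * n1 + 2 * n2 * continue_on   # stage 2 recruited only if continuing
    return reject.mean(), n_total.mean()

# Random search: minimise expected sample size under H0 subject to simulated
# type I error <= 0.05 and power >= 0.80 at a standardised effect of 0.5.
best = None
for _ in range(1_000):
    n1, n2 = rng.integers(10, 80), rng.integers(10, 80)
    lower = rng.uniform(-1.0, 1.5)
    upper = rng.uniform(lower + 0.5, 3.5)
    alpha, ess_null = operating_characteristics(n1, n2, lower, upper, delta=0.0)
    power, _ = operating_characteristics(n1, n2, lower, upper, delta=0.5)
    if alpha <= 0.05 and power >= 0.80 and (best is None or ess_null < best[0]):
        best = (ess_null, int(n1), int(n2), lower, upper)

print("Best design found (ESS under H0, n1, n2, lower, upper):", best)
```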
Optimal design for multi-arm multi-stage clinical trials
A latent variable model for improving inference in trials assessing the effect of dose on toxicity and composite efficacy endpoints.
It is often of interest to explore how dose affects the toxicity and efficacy properties of a novel treatment. In oncology, efficacy is often assessed through response, which is defined by a patient having no new tumour lesions and their tumour size shrinking by 30%. Response and toxicity are usually analysed as binary outcomes in early phase trials. Methods have been proposed to improve the efficiency of analysing response by utilising the continuous tumour size information instead of dichotomising it. However, these methods do not allow for toxicity or for different doses. Motivated by a phase II trial testing multiple doses of a treatment against placebo, we propose a latent variable model that can estimate the probability of response and no toxicity (or other related outcomes) for different doses. We assess the confidence interval coverage and efficiency properties of the method, compared to methods that do not use the continuous tumour size, in a simulation study and in the real study. The coverage is close to nominal when the model assumptions are met, although it can be below nominal when the model is misspecified. Compared to methods that treat response as binary, the method gives confidence intervals that are 30-50% narrower. The method adds considerable efficiency, but care must be taken that the model assumptions are reasonable.
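A minimal sketch of the latent-variable idea is given below for a single dose group: the continuous tumour measurement and a latent toxicity variable are modelled as correlated normals, the model is fitted by maximum likelihood, and the probability of the composite outcome (at least 30% shrinkage and no toxicity) is read off the fitted bivariate normal. The single-group setting, the parameter values, the log-ratio outcome scale and the omission of new lesions are simplifying assumptions, not the paper's full model.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Simulate one dose group: Y = mu + sigma * Z_Y is the observed log tumour size
# ratio, toxicity T = 1 if the latent Z_U exceeds tau, with Corr(Z_Y, Z_U) = rho.
true = dict(mu=-0.2, sigma=0.4, rho=0.3, tau=1.0)
n = 200
z = rng.multivariate_normal([0, 0], [[1, true["rho"]], [true["rho"], 1]], n)
y = true["mu"] + true["sigma"] * z[:, 0]
t = (z[:, 1] > true["tau"]).astype(int)

def negloglik(par):
    """Joint likelihood of the continuous outcome and the binary toxicity."""
    mu, log_sigma, arho, tau = par
    sigma, rho = np.exp(log_sigma), np.tanh(arho)
    zy = (y - mu) / sigma
    # toxicity probability conditional on the observed tumour measurement
    p_tox = 1 - stats.norm.cdf((tau - rho * zy) / np.sqrt(1 - rho**2))
    p_tox = np.clip(p_tox, 1e-10, 1 - 1e-10)
    return -(stats.norm.logpdf(y, mu, sigma).sum()
             + (t * np.log(p_tox) + (1 - t) * np.log(1 - p_tox)).sum())

fit = optimize.minimize(negloglik, x0=[0.0, np.log(0.5), 0.0, 1.0], method="Nelder-Mead")
mu, sigma, rho, tau = fit.x[0], np.exp(fit.x[1]), np.tanh(fit.x[2]), fit.x[3]

# Composite endpoint: response (>= 30% shrinkage, i.e. Y <= log 0.7) AND no toxicity.
resp_cut = (np.log(0.7) - mu) / sigma
p_success = stats.multivariate_normal.cdf([resp_cut, tau], mean=[0, 0],
                                          cov=[[1, rho], [rho, 1]])
print("Estimated P(response and no toxicity):", p_success)
```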
Controlling type I error rates in multi-arm clinical trials: A case for the false discovery rate.
Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction (controlling the probability of making at least one type I error) being needed, with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective). The number of arms and the proportion of treatments that are truly effective are varied. Controlling the FDR provides good properties: it retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective, and it also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
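The abstract does not name a specific FDR procedure, but the standard choice is the Benjamini-Hochberg step-up procedure, sketched below on a set of made-up p-values from a hypothetical five-arm trial.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean array marking
    which hypotheses are rejected while controlling the FDR at level q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest rank meeting its threshold
        rejected[order[:k + 1]] = True
    return rejected

# Illustrative p-values, one per experimental arm vs the shared control.
p_values = [0.001, 0.012, 0.030, 0.20, 0.45]
print(benjamini_hochberg(p_values, q=0.05))
```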
When to keep it simple - adaptive designs are not always useful.
BACKGROUND: Adaptive designs are a wide class of methods focused on improving the power, efficiency and participant benefit of clinical trials. They do this by allowing information gathered during the trial to be used to make changes in a statistically robust manner - the changes could include which treatment arms patients are enrolled to (e.g. dropping non-promising treatment arms), the allocation ratios, the target sample size or the enrolment criteria of the trial. Generally, we are enthusiastic about adaptive designs and advocate their use in many clinical situations. However, they are not always advantageous. In some situations, they provide little efficiency advantage or are even detrimental to the quality of information provided by the trial. In our experience, factors that reduce the efficiency of adaptive designs are routinely downplayed or ignored in methodological papers, which may lead researchers to believe they are more beneficial than they actually are. MAIN TEXT: In this paper, we discuss situations where adaptive designs may not be as useful, including when the outcomes take a long time to observe, when dropping arms early may cause issues, and when increased practical complexity eliminates theoretical efficiency gains. CONCLUSION: Adaptive designs often provide notable efficiency benefits. However, it is important for investigators to be aware that they do not always provide an advantage. There should always be careful consideration of the potential benefits and disadvantages of an adaptive design.
Graphical approaches for the control of generalised error rates
When simultaneously testing multiple hypotheses, the usual approach in the context of confirmatory clinical trials is to control the familywise error rate (FWER), which bounds the probability of making at least one false rejection. In many trial settings, these hypotheses will additionally have a hierarchical structure that reflects the relative importance and links between different clinical objectives. The graphical approach of Bretz et al. (2009) is a flexible and easily communicable way of controlling the FWER while respecting complex trial objectives and multiple structured hypotheses. However, the FWER can be a very stringent criterion that leads to procedures with low power, and may not be appropriate in exploratory trial settings. This motivates controlling generalised error rates, particularly when the number of hypotheses tested is no longer small. We consider the generalised familywise error rate (k-FWER), which is the probability of making k or more false rejections, as well as the tail probability of the false discovery proportion (FDP), which is the probability that the proportion of false rejections is greater than some threshold. We also consider asymptotic control of the false discovery rate (FDR), which is the expectation of the FDP. In this paper, we show how to control these generalised error rates when using the graphical approach and its extensions. We demonstrate the utility of the resulting graphical procedures on three clinical trial case studies. Biometrika Trust; Medical Research Council, Grant/Award Numbers: MC_UU_00002/6, MR/N028171/
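The building block being extended here is the Bonferroni-based graphical procedure of Bretz et al. (2009), in which the significance level of a rejected hypothesis is propagated to the remaining ones along a weighted graph. The sketch below implements that base FWER-controlling procedure only; the k-FWER, FDP and FDR extensions described in the abstract are not shown, and the example weights, transition matrix and alpha level are illustrative assumptions.

```python
import numpy as np

def graphical_procedure(p, w, G, alpha=0.05):
    """Bonferroni-based graphical multiple testing procedure (Bretz et al., 2009).
    p: p-values; w: initial weights (sum <= 1); G: transition matrix with zero
    diagonal and row sums <= 1. Returns a boolean array of rejections."""
    p, w, G = np.asarray(p, float), np.asarray(w, float), np.asarray(G, float)
    m = len(p)
    active = np.ones(m, dtype=bool)
    rejected = np.zeros(m, dtype=bool)
    while True:
        # any active hypothesis meeting its local weighted-Bonferroni level?
        candidates = np.where(active & (p <= w * alpha))[0]
        if len(candidates) == 0:
            break
        i = candidates[0]
        rejected[i], active[i] = True, False
        # propagate the weight of the rejected hypothesis along the graph
        new_w, new_G = w.copy(), np.zeros_like(G)
        for j in range(m):
            if not active[j]:
                continue
            new_w[j] = w[j] + w[i] * G[i, j]
            for k in range(m):
                if not active[k] or k == j:
                    continue
                denom = 1 - G[j, i] * G[i, j]
                new_G[j, k] = (G[j, k] + G[j, i] * G[i, k]) / denom if denom > 0 else 0.0
        w, G = new_w, new_G
        w[~active] = 0.0
    return rejected

# Example: two primary and two secondary hypotheses in a simple illustrative graph.
p = [0.01, 0.04, 0.03, 0.20]
w = [0.5, 0.5, 0.0, 0.0]
G = [[0, 0.5, 0.5, 0],
     [0.5, 0, 0, 0.5],
     [0, 1, 0, 0],
     [1, 0, 0, 0]]
print(graphical_procedure(p, w, G, alpha=0.05))
```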
Group sequential designs for stepped-wedge cluster randomised trials.
BACKGROUND/AIMS: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. METHODS: Utilising the error spending approach to group sequential trial design, we detail the assumptions required to determine stepped-wedge cluster randomised trial designs with interim analyses. We consider early stopping for efficacy, futility, or both efficacy and futility. We first describe how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. RESULTS: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type I and type II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type I error rate. CONCLUSION: The addition of interim analyses to stepped-wedge cluster randomised trials could help guard against time-consuming trials of poorly performing treatments and could also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials, according to the needs of the particular trial.
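The error spending approach referred to here allocates the type I error across the planned analyses via a spending function and derives stopping boundaries from the joint distribution of the interim test statistics. The sketch below computes efficacy boundaries for a generic one-sided group sequential test under the canonical joint distribution, using a Lan-DeMets O'Brien-Fleming-type spending function and a Monte Carlo approximation in place of the numerical integration used in practice; the adaptation to the stepped-wedge linear mixed model described in the paper is not shown, and the information fractions and alpha are assumed values.

```python
import numpy as np
from scipy import stats

def spending_boundaries(info_fracs, alpha=0.05, sims=200_000, seed=3):
    """Efficacy boundaries for a one-sided group sequential test from a
    Lan-DeMets O'Brien-Fleming-type spending function, approximated by Monte
    Carlo under the canonical joint distribution of the stage-wise z statistics."""
    rng = np.random.default_rng(seed)
    t = np.asarray(info_fracs, float)
    K = len(t)
    # canonical correlation structure: Corr(Z_i, Z_j) = sqrt(t_i / t_j), i <= j
    corr = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
    Z = rng.multivariate_normal(np.zeros(K), corr, sims)
    # cumulative type I error spent by each analysis, and the per-stage increments
    spent = 2 * (1 - stats.norm.cdf(stats.norm.ppf(1 - alpha / 2) / np.sqrt(t)))
    increments = np.diff(np.concatenate([[0.0], spent]))
    bounds, alive = [], np.ones(sims, dtype=bool)
    for k in range(K):
        zk = Z[alive, k]
        # choose c_k so the fraction of *all* paths first crossing at analysis k
        # equals the error spent at this analysis
        target = increments[k] * sims / alive.sum()
        c = np.quantile(zk, 1 - target)
        bounds.append(c)
        alive[alive] = zk < c
    return np.array(bounds)

# Three equally spaced interim analyses.
print(spending_boundaries([1 / 3, 2 / 3, 1.0]))
```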