
    Adaptive designs for complex dose finding studies

    The goal of an early phase clinical trial is to find the regimen (dose, combination, schedule, etc.) satisfying particular toxicity and/or efficacy characteristics. Designs for trials studying doses of a single cytotoxic drug rest on the fundamental assumption that "the more the better", that is, that toxicity and efficacy both increase with the dose. This monotonicity assumption can be violated for novel therapies and for more advanced trials studying drug combinations or schedules. It is also becoming common to consider more complex endpoints rather than binary ones, as they can carry more information about the drug. Both the violation of the monotonicity assumption and the complex outcomes give rise to important statistical challenges in designing novel clinical trials, and these require careful attention. In the first part of this thesis, we consider a specific class of combination trials which involve novel therapies and can benefit from the monotonicity assumption. We also propose a general tool for evaluating the performance of novel designs in the context of complex clinical trials. Further, we consider the problem of Bayesian inference on restricted parameter spaces. We propose novel loss functions for parameters defined on the positive real line and on a bounded interval, and demonstrate their performance in standard statistical problems. Based on the obtained results, we propose a novel allocation criterion for model-based designs that results in a more ethical allocation of patients. In the second part of this thesis, we consider a more general setting of early phase trials in which an investigator has no (or limited) information about the monotonic orderings of the regimens' responses. Using an information-theoretic approach, we derive novel regimen selection criteria which avoid any parametric or monotonicity assumptions. We propose novel designs based on these criteria and show their consistency. We apply the proposed designs to Phase I, Phase II and Phase I/II clinical trials and compare their performance to currently used model-based methodologies.
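    The "general tool" mentioned above refers to a non-parametric benchmark for dose-finding designs. As a rough illustration of the underlying idea (not the thesis' exact construction), the classic complete-information benchmark for a binary toxicity endpoint can be sketched as follows; the scenario, target toxicity rate and sample size below are hypothetical:

```python
import random

def benchmark_pcs(true_tox, target, n_patients, n_sim=10_000, seed=0):
    """Non-parametric benchmark sketch: each simulated patient carries a
    latent tolerance u ~ U(0, 1), so their toxicity outcome at EVERY dose
    d is known (toxic iff u <= true_tox[d]).  With this complete
    information the best one can do is pick the dose whose empirical
    toxicity rate is closest to the target; the resulting proportion of
    correct selections upper-bounds any design's performance."""
    rng = random.Random(seed)
    true_mtd = min(range(len(true_tox)), key=lambda d: abs(true_tox[d] - target))
    hits = 0
    for _ in range(n_sim):
        tolerances = [rng.random() for _ in range(n_patients)]
        rates = [sum(u <= p for u in tolerances) / n_patients for p in true_tox]
        selected = min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))
        hits += selected == true_mtd
    return hits / n_sim

# Illustrative scenario: target 25% toxicity, dose index 2 is correct.
print(benchmark_pcs([0.05, 0.12, 0.25, 0.40, 0.55], target=0.25, n_patients=30))
```

The returned probability of correct selection serves as the scenario-specific ceiling against which an actual design's operating characteristics can be judged.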

    Using a Dose-Finding Benchmark to Quantify the Loss Incurred by Dichotomisation in Phase II Dose-Ranging Studies

    While there is recognition that more informative clinical endpoints can support better decision-making in clinical trials, it remains common practice to categorise endpoints originally measured on a continuous scale. The primary motivation for this categorisation (most commonly dichotomisation) is the simplicity of the analysis. There is, however, a long-standing argument that this simplicity can come at a high cost: larger sample sizes are needed to achieve the same level of accuracy when using a dichotomised outcome instead of the original continuous endpoint. The degree of this “loss of information” has been studied in the contexts of parallel-group designs and two-stage Phase II trials. Limited attention, however, has been given to quantifying the associated losses in dose-ranging trials. In this work, we propose an approach to estimating these losses in Phase II dose-ranging trials that is independent of the actual dose-ranging design used and depends only on the clinical setting. The approach uses the notion of a non-parametric optimal benchmark for dose finding trials, an evaluation tool that facilitates the assessment of a dose finding design by providing an upper bound on its performance under a given scenario, in terms of the probability of target dose selection. After demonstrating how the benchmark can be applied to Phase II dose-ranging trials, we use it to quantify the dichotomisation losses. Using parameters from real clinical trials in various therapeutic areas, we find that the ratio of sample sizes needed to obtain the same precision using continuous and binary (dichotomised) endpoints lies between 70% and 75% under the majority of scenarios, but can drop to 50% in some cases.
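    The cost of dichotomisation can be related to a classical calculation: the asymptotic relative efficiency of comparing proportions after dichotomising a standard normal endpoint at a cut-point c, versus comparing means on the original scale. A minimal sketch (standard library only; the cut-points are illustrative, and this is not the paper's exact dose-ranging calculation):

```python
import math

def norm_pdf(x: float) -> float:
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dichotomisation_efficiency(c: float) -> float:
    """Asymptotic relative efficiency of comparing proportions after
    dichotomising a N(0, 1) endpoint at cut-point c, relative to
    comparing means on the continuous scale.  Equivalently, the ratio
    n_continuous / n_binary needed for the same precision."""
    p = norm_cdf(c)
    return norm_pdf(c) ** 2 / (p * (1.0 - p))

# A median split (c = 0) is the best case, 2/pi ~ 64%; off-centre
# cut-points lose even more.
print(round(dichotomisation_efficiency(0.0), 3))   # -> 0.637
print(round(dichotomisation_efficiency(1.0), 3))   # -> 0.439
```

This two-sample large-sample calculation is in the same spirit as, though simpler than, the dose-ranging quantification in the abstract, where the reported ratios of 70–75% also account for the dose-response structure.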

    Design of platform trials with a change in the control treatment arm

    Platform trials are a more efficient way of testing multiple treatments than running separate trials. In this paper we consider platform trials where, if a treatment is found to be superior to the control, it becomes the new standard of care (and the control in the platform), and the remaining treatments are then tested against this new control. In such a setting, one can either retain the information on both the new standard of care and the other active treatments collected before the control changed, or discard this information when testing for benefit of the remaining treatments. We show analytically and numerically that retaining the information collected before the change in control can be detrimental to the power of the study. Specifically, we consider the overall power, the probability that the active treatment with the greatest treatment effect is found during the trial, and the conditional power of an active treatment, the probability that a given treatment is found superior to the current control. We prove when, in a multi-arm multi-stage trial where no arms are added, retaining the information is detrimental to both the overall and the conditional power of the remaining treatments. This loss of power is studied for a motivating example. We then discuss the effect on platform trials in which arms are added later. On the basis of these observations, we discuss the different aspects to consider when deciding whether to run a continuous platform trial or whether one would be better off running a new trial.
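    One mechanism behind such a power loss can be seen in a small Monte Carlo sketch (normal outcomes, unit variance, two stages; all parameters are illustrative and this is not the paper's analysis): when an arm is promoted to control precisely because its interim estimate was high, retaining its pre-change data carries that selection bias into later comparisons.

```python
import random
import statistics

def simulate(delta_a=0.5, n=50, n_sim=4000, seed=1):
    """Stage 1 runs control C (mean 0) and arm A (mean delta_a) with n
    patients each.  If A beats C (z > 1.96), A becomes the control.  We
    record A's pooled estimate across both stages (used if pre-change
    data are retained) and its stage-2-only estimate (used if they are
    discarded), conditional on A's promotion."""
    rng = random.Random(seed)
    se = (2 / n) ** 0.5          # SE of a difference in means, sigma = 1
    retained, fresh = [], []
    for _ in range(n_sim):
        c1 = rng.gauss(0, 1 / n ** 0.5)
        a1 = rng.gauss(delta_a, 1 / n ** 0.5)
        if (a1 - c1) / se > 1.96:                   # A promoted to control
            a2 = rng.gauss(delta_a, 1 / n ** 0.5)   # fresh stage-2 data on A
            retained.append((a1 + a2) / 2)          # pooled estimate of A
            fresh.append(a2)                        # stage-2-only estimate
    return statistics.mean(retained), statistics.mean(fresh)

pooled, stage2_only = simulate()
# Conditional on promotion, the pooled estimate of the new control is
# biased upwards, eating into a remaining arm's apparent advantage.
print(f"pooled: {pooled:.3f}  stage-2 only: {stage2_only:.3f}  truth: 0.5")
```

The stage-2-only estimate stays centred on the truth, while the pooled estimate inherits the winner's-curse bias from the promotion rule; this is one ingredient in the trade-off the paper studies against the larger sample size that retention brings.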

    An information-theoretic approach for selecting arms in clinical trials

    The question of selecting the ‘best’ among different choices is a common problem in statistics. In drug development, our motivating setting, the question becomes, for example, which treatment gives the best response rate. Motivated by recent developments in the theory of context-dependent information measures, we propose a flexible response-adaptive experimental design based on a novel criterion governing treatment arm selections, which can be used in adaptive experiments with simple (e.g. binary) and complex (e.g. co-primary, ordinal or nested) endpoints. We find that, for specific choices of the context-dependent measure, the criterion leads to a reliable selection of the correct arm without any parametric or monotonicity assumptions, and provides noticeable gains in settings with costly observations. The asymptotic properties of the design are studied for different allocation rules, and the small-sample behaviour is evaluated in simulations in the context of phase II clinical trials with different endpoints. We compare the proposed design with currently used alternatives and discuss its practical implementation.
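    The paper's context-dependent information measure is not reproduced here, but the flavour of a target-driven arm-selection criterion can be sketched with Beta posteriors: select the arm whose posterior response rate sits closest, in expected squared distance, to a target rate. All numbers below are illustrative:

```python
def beta_posterior_loss(successes, n, gamma, a0=1.0, b0=1.0):
    """Posterior expected squared distance E[(p - gamma)^2] of a
    Beta(a0 + successes, b0 + failures) response-rate posterior from a
    target rate gamma, in closed form via the Beta mean and variance."""
    a = a0 + successes
    b = b0 + (n - successes)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return var + (mean - gamma) ** 2

def select_arm(data, gamma=1.0):
    """Pick the arm whose posterior sits closest to the target response
    rate; gamma = 1.0 recovers 'maximise the response rate', while the
    variance term naturally penalises poorly explored arms."""
    losses = [beta_posterior_loss(s, n, gamma) for s, n in data]
    return min(range(len(data)), key=losses.__getitem__)

# Three arms with (responses, patients); arm 2 has the best observed rate.
print(select_arm([(3, 20), (8, 20), (13, 20)]))   # -> 2
```

Because the criterion is computed directly from posterior summaries, it needs no parametric dose-response model or monotonicity assumption across arms, mirroring (in simplified form) the assumption-light selection described in the abstract.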

    Practical recommendations for implementing a Bayesian adaptive phase I design during a pandemic.

    BACKGROUND: Modern designs for dose-finding studies (e.g., model-based designs such as the continual reassessment method) have been shown to substantially improve the ability to determine a suitable dose for efficacy testing compared with traditional designs such as the 3 + 3 design. However, implementing such designs requires time and specialist knowledge. METHODS: We present a practical approach to developing a model-based design to help support uptake of these methods; in particular, we lay out how to derive the necessary design parameters, and who should contribute to these decisions and when. Designing a model-based dose-finding trial is demonstrated using a treatment within the AGILE platform trial, a phase I/II adaptive design for novel COVID-19 treatments. RESULTS: We discuss the practical delivery of AGILE, covering what information was found to support principled decision-making by the Safety Review Committee and what could be contained within a statistical analysis plan. We also discuss additional challenges encountered in the study and, more generally, what (unplanned) adaptations may be acceptable (or not) in studies using model-based designs. CONCLUSIONS: This example demonstrates how to both design and deliver an adaptive dose-finding trial in order to support uptake of these methods.
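    As background, the continual reassessment method mentioned above can be sketched in a minimal one-parameter form (the skeleton, target and data below are illustrative and are not AGILE's actual design):

```python
import math

def crm_recommend(skeleton, target, doses_given, dlts, prior_sd=1.34):
    """Minimal one-parameter CRM sketch: power model
    P(DLT at dose i) = skeleton[i] ** exp(a), prior a ~ N(0, prior_sd^2),
    posterior computed on a grid over a, and the next dose is the one
    whose posterior mean DLT probability is closest to the target."""
    grid = [-4 + 8 * k / 400 for k in range(401)]
    log_post = []
    for a in grid:
        lp = -0.5 * (a / prior_sd) ** 2           # log prior (up to const)
        for d, y in zip(doses_given, dlts):       # binomial log-likelihood
            p = skeleton[d] ** math.exp(a)
            lp += math.log(p) if y else math.log(1 - p)
        log_post.append(lp)
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]     # unnormalised posterior
    total = sum(w)
    post_tox = [
        sum(wi * skeleton[i] ** math.exp(a) for wi, a in zip(w, grid)) / total
        for i in range(len(skeleton))
    ]
    return min(range(len(skeleton)), key=lambda i: abs(post_tox[i] - target))

skeleton = [0.05, 0.10, 0.20, 0.35, 0.50]
# Three patients at dose index 1, no DLTs: the model escalates.
print(crm_recommend(skeleton, 0.25, [1, 1, 1], [0, 0, 0]))
```

Even this toy version shows why specialist input is needed in practice: the skeleton, prior standard deviation and target all have to be elicited and calibrated before the trial, which is exactly the process the paper walks through.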