
    On Multi-Armed Bandit Designs for Dose-Finding Clinical Trials

    We study the problem of finding the optimal dosage in early-stage clinical trials through the multi-armed bandit lens. We advocate the use of the Thompson Sampling principle, a flexible algorithm that can accommodate different types of monotonicity assumptions on the toxicity and efficacy of the doses. For the simplest version of Thompson Sampling, based on a uniform prior distribution for each dose, we provide finite-time upper bounds on the number of sub-optimal dose selections, which is unprecedented for dose-finding algorithms. Through a large simulation study, we then show that variants of Thompson Sampling based on more sophisticated prior distributions outperform state-of-the-art dose identification algorithms in different types of dose-finding studies that occur in phase I or phase I/II trials.
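
    A minimal sketch of the Thompson Sampling principle described above, assuming (purely for illustration) five dose levels, a target DLT rate of 0.30, independent Beta(1,1) priors, and allocation of each patient to the dose whose sampled toxicity probability is closest to the target; the toxicity scenario and the selection rule are hypothetical, and the paper's variants add monotonicity assumptions and more sophisticated priors.

        # Independent Thompson Sampling sketch for dose finding (illustrative only)
        set.seed(1)
        true_tox <- c(0.05, 0.12, 0.28, 0.45, 0.60)   # hypothetical DLT probabilities
        target   <- 0.30                               # assumed target DLT rate
        n_pat    <- 36
        a <- rep(1, 5); b <- rep(1, 5)                 # Beta(1,1) prior for each dose
        for (i in seq_len(n_pat)) {
          theta <- rbeta(5, a, b)                      # one posterior draw per dose
          d     <- which.min(abs(theta - target))      # Thompson selection
          y     <- rbinom(1, 1, true_tox[d])           # observe the DLT outcome
          a[d]  <- a[d] + y                            # conjugate posterior update
          b[d]  <- b[d] + 1 - y
        }
        post_mean <- a / (a + b)
        which.min(abs(post_mean - target))             # recommended dose at the end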

    A Bayesian dose-finding design for drug combination clinical trials based on the logistic model

    In early-phase dose-finding cancer studies, the objective is to determine the maximum tolerated dose, defined as the highest dose with an acceptable dose-limiting toxicity rate. Finding this dose for drug-combination trials is complicated because of drug–drug interactions, and many trial designs have been proposed to address this issue. These designs rely on complicated statistical models that typically are not familiar to clinicians and are rarely used in practice. The aim of this paper is to propose a Bayesian dose-finding design for drug-combination trials based on standard logistic regression. Under the proposed design, we continuously update the posterior estimates of the model parameters to make dose-assignment and early-stopping decisions. Simulation studies show that the proposed design is competitive and outperforms some existing designs. We also extend our design to handle delayed toxicities.
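
    A minimal sketch of a continuously updated logistic model in the spirit of the design described above, assuming (illustratively, not the paper's exact parameterization) logit P(DLT) = b0 + b1*d1 + b2*d2 with vague normal priors and a random-walk Metropolis sampler; the data, candidate combination, and toxicity target are made up.

        # Bayesian logistic DLT model for a two-drug combination (illustrative only)
        set.seed(1)
        d1 <- c(0.2, 0.2, 0.4, 0.4, 0.6, 0.6)   # standardized doses of agent 1
        d2 <- c(0.2, 0.4, 0.2, 0.4, 0.4, 0.6)   # standardized doses of agent 2
        y  <- c(0, 0, 0, 1, 1, 1)               # observed DLT indicators
        log_post <- function(b) {               # log posterior (likelihood + priors)
          p <- plogis(b[1] + b[2] * d1 + b[3] * d2)
          sum(dbinom(y, 1, p, log = TRUE)) + sum(dnorm(b, 0, 10, log = TRUE))
        }
        n_iter <- 5000
        draws  <- matrix(NA_real_, n_iter, 3)
        b  <- c(0, 0, 0)
        lp <- log_post(b)
        for (i in seq_len(n_iter)) {            # random-walk Metropolis updates
          prop    <- b + rnorm(3, 0, 0.5)
          lp_prop <- log_post(prop)
          if (log(runif(1)) < lp_prop - lp) { b <- prop; lp <- lp_prop }
          draws[i, ] <- b
        }
        # Posterior probability that a candidate combination exceeds a 30% DLT target,
        # the kind of quantity used for dose-assignment and early-stopping decisions
        p_cand <- plogis(draws[, 1] + 0.6 * draws[, 2] + 0.6 * draws[, 3])
        mean(p_cand > 0.30)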

    An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models

    Nonlinear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization towards more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order linearization (FO) to calculate the FIM. Although efficient in general, FO cannot be applied to complex nonlinear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to adaptive Gaussian quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs.
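
    A minimal sketch of the MC/MCMC idea on a deliberately simple mixed model (a Poisson count with a single random intercept) rather than a full NLMEM: by Fisher's identity, the per-subject score equals the posterior mean of the complete-data score, and the expected FIM is the Monte Carlo average of its outer product over simulated data. The paper draws the posterior samples with HMC in Stan and obtains the derivatives automatically; this self-contained sketch uses a random-walk Metropolis step and hand-coded derivatives, and all settings are illustrative.

        # y_ij ~ Poisson(exp(mu + b_i)), b_i ~ N(0, omega^2), theta = (mu, omega)
        set.seed(1)
        mu <- 1; omega <- 0.5; n_obs <- 4      # hypothetical design: 4 counts per subject
        n_mc <- 200; n_mcmc <- 2000            # MC datasets and MCMC draws per dataset
        fim <- matrix(0, 2, 2)
        for (m in seq_len(n_mc)) {
          b_true <- rnorm(1, 0, omega)
          y <- rpois(n_obs, exp(mu + b_true))  # simulate one subject's data
          log_post <- function(b)              # log p(b | y, theta), up to a constant
            sum(dpois(y, exp(mu + b), log = TRUE)) + dnorm(b, 0, omega, log = TRUE)
          b <- 0; lp <- log_post(b); bs <- numeric(n_mcmc)
          for (k in seq_len(n_mcmc)) {         # Metropolis sampling of the random effect
            prop <- b + rnorm(1, 0, 0.3)
            lpp  <- log_post(prop)
            if (log(runif(1)) < lpp - lp) { b <- prop; lp <- lpp }
            bs[k] <- b
          }
          # complete-data score averaged over posterior draws (Fisher's identity)
          s_mu    <- mean(sapply(bs, function(bb) sum(y - exp(mu + bb))))
          s_omega <- mean(-1 / omega + bs^2 / omega^3)
          s   <- c(s_mu, s_omega)
          fim <- fim + tcrossprod(s) / n_mc    # accumulate E_y[ score %*% t(score) ]
        }
        fim                                    # expected FIM for a one-subject design
        sqrt(diag(solve(fim))) / abs(c(mu, omega))   # corresponding RSEs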

    dfcomb: An R-package for phase I/II trials of drug combinations

    In this paper, we present the dfcomb R package for the implementation of a single prospective clinical trial or simulation studies of phase I combination trials in oncology. The aim is to present the features of the package and to illustrate how to use it in practice through different examples. The use of combination clinical trials is growing, but the implementation of existing model-based methods is complex, so this package should promote the use of innovative adaptive designs for early-phase combination trials.

    Robust designs in longitudinal studies accounting for parameter and model uncertainties -Application to count data

    Nonlinear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal designs based on the expected Fisher information matrix (FIM) can be used. A method evaluating the FIM using Monte Carlo Hamiltonian Monte Carlo (MC-HMC) has been proposed and implemented in the R package MIXFIM using Stan. This approach, however, requires a priori knowledge of models and parameters, which leads to locally optimal designs. The objective of this work was to extend this MC-HMC-based method to evaluate the FIM in NLMEMs while accounting for uncertainty in parameters and in models. When introducing uncertainty in the population parameters, we evaluated the robust FIM as the expectation of the FIM computed by MC-HMC over the distribution of these parameters. Then, the compound D-optimality criterion (CD-optimality), corresponding to a weighted product of the D-optimality criteria of several candidate models, was used to find a common CD-optimal design for the set of candidate models. Finally, a compound DE-criterion (CDE-optimality), corresponding to a weighted product of the normalized determinants of the robust FIMs of all the candidate models accounting for uncertainty in parameters, was calculated to find the CDE-optimal design, which is robust with respect to both parameters and model. These methods were applied to a longitudinal Poisson count model. We assumed prior distributions on the population parameters as well as several candidate models describing the relationship between the logarithm of the event rate parameter and the dose. We found that assuming uncertainty in parameters could lead to different optimal designs, and that misspecification of models could induce designs with low efficiencies. The CD- or CDE-optimal designs therefore provided a good compromise across the different candidate models. Finally, the proposed approach allows, for the first time, optimization of designs for repeated discrete data while accounting for parameter and model uncertainties.
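
    A minimal sketch of a compound criterion in the spirit described above, assuming each candidate model m contributes a (possibly parameter-averaged, i.e. robust) FIM with P_m parameters and a weight w_m, combined as a weighted product of normalized determinants det(F_m)^(1/P_m); the exact normalization and weighting used in the paper may differ.

        # Weighted product of normalized FIM determinants over candidate models
        compound_criterion <- function(fims, weights) {
          stopifnot(length(fims) == length(weights), abs(sum(weights) - 1) < 1e-8)
          prod(mapply(function(fm, w) (det(fm)^(1 / nrow(fm)))^w, fims, weights))
        }
        # Hypothetical example: two candidate models with 2 and 3 parameters
        F1 <- diag(c(40, 10)); F2 <- diag(c(25, 8, 3))
        compound_criterion(list(F1, F2), c(0.5, 0.5))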

    Bayesian modeling of a bivariate toxicity outcome for early phase oncology trials evaluating dose regimens

    Keywords: Bayesian joint modeling; bivariate toxicity; cumulative probability of toxicity; dose regimen; early phase oncology; pharmacokinetics/pharmacodynamics. Most phase I dose-finding trials in oncology aim to determine the maximum tolerated dose (MTD), which is defined as the highest dose that does not exceed a predefined probability of dose-limiting toxicity (DLT) within a prespecified observation window. The DLT is a binary outcome defined to summarize the patient's toxicity profile.

    Phase I/II dose-finding design for molecularly targeted agent: Plateau determination using adaptive randomization

    Conventionally, phase I dose-finding trials aim to determine the maximum tolerated dose of a new drug under the assumption that both toxicity and efficacy monotonically increase with the dose. This paradigm, however, is not suitable for some molecularly targeted agents, such as monoclonal antibodies, for which efficacy often increases initially with the dose and then plateaus. For molecularly targeted agents, the goal is to find the optimal dose, defined as the lowest safe dose that achieves the highest efficacy. We develop a Bayesian phase I/II dose-finding design to find the optimal dose. We employ a logistic model with a plateau parameter to capture the increasing-then-plateau feature of the dose–efficacy relationship. We take a weighted likelihood approach to accommodate the case where efficacy is possibly late-onset. Based on observed data, we continuously update the posterior estimates of toxicity and efficacy probabilities and adaptively assign patients to the optimal dose. Simulation studies show that the proposed design has good operating characteristics. This method is planned for use in more than two phase I clinical trials, as no other method is available for this specific setting. We also provide an R package, dfmta, which can be downloaded from the CRAN website.
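
    A minimal sketch of an increasing-then-plateau dose-efficacy curve in the spirit described above, assuming (illustratively, not the paper's parameterization) a logistic rise in dose that is held constant beyond a plateau-onset level tau; all parameter values are hypothetical.

        # Efficacy probability rises with dose and plateaus beyond tau (illustrative)
        eff_prob <- function(dose, b0, b1, tau) {
          plogis(b0 + b1 * pmin(dose, tau))     # dose effect frozen at the plateau
        }
        doses <- 1:6
        round(eff_prob(doses, b0 = -3, b1 = 1.2, tau = 4), 2)   # increases, then flat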