35 research outputs found

    A modified weighted log-rank test for confirmatory trials with a high proportion of treatment switching

    In confirmatory cancer clinical trials, overall survival (OS) is normally a primary endpoint in the intention-to-treat (ITT) analysis under regulatory standards. After the tumor progresses, it is common for patients allocated to the control group to switch to the experimental treatment or to another drug in the same class. Such treatment switching may dilute the apparent relative efficacy of the new drug compared to the control group, leading to lower statistical power. Estimation bias could be decreased by shortening the follow-up period, but this may lead to a loss of information and power. Instead, we propose a modified weighted log-rank test (mWLR) that aims to balance these factors by down-weighting events occurring when many patients have switched treatment. As the weighting should be pre-specified and the impact of treatment switching is unknown, we predict the hazard ratio function and use it to compute the weights of the mWLR. The method may incorporate information from previous trials regarding the potential hazard ratio function over time. We are motivated by the RECORD-1 trial of everolimus against placebo in patients with metastatic renal-cell carcinoma, where almost 80% of the patients in the placebo group received everolimus after disease progression. Extensive simulations show that the new test gives considerably higher efficiency than the standard log-rank test in realistic scenarios.
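    The core computation behind such a test is a weighted log-rank statistic: each event time contributes its observed-minus-expected event count in the experimental arm, multiplied by a pre-specified weight. The sketch below is a generic two-sample weighted log-rank statistic in Python, not the authors' exact mWLR; the example weight function (flat up to 24 months, then decaying) is purely an illustrative way of down-weighting late events, when many control patients may already have switched.

```python
import numpy as np

def weighted_logrank(time, event, group, weight_fn):
    """Two-sample weighted log-rank statistic.

    time      : follow-up times
    event     : 1 = event observed, 0 = censored
    group     : 1 = experimental arm, 0 = control arm
    weight_fn : maps an event time to a non-negative, pre-specified weight,
                e.g. derived from a predicted hazard ratio curve
                (the specific weight used here is a hypothetical choice).
    """
    time, event, group = map(np.asarray, (time, event, group))
    num, var = 0.0, 0.0
    for t in np.sort(np.unique(time[event == 1])):
        at_risk = time >= t                            # risk set just before t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()            # at risk, experimental arm
        d = ((time == t) & (event == 1)).sum()         # events at t, both arms
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        if n < 2:
            continue
        w = weight_fn(t)
        num += w * (d1 - d * n1 / n)                   # observed minus expected
        var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var)                          # approx. N(0, 1) under H0

# Illustrative weight: full weight up to 24 months, then linear decay to 0.1.
weight = lambda t: 1.0 if t <= 24 else max(0.1, 1.0 - 0.03 * (t - 24))
# z = weighted_logrank(times, events, arms, weight)    # compare to N(0, 1)
```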

    Optimized adaptive enrichment designs

    Based on a Bayesian decision-theoretic approach, we optimize frequentist single-stage and adaptive two-stage trial designs for the development of targeted therapies, where a pre-defined subgroup is investigated in addition to the overall population. In such settings, the losses and gains of decisions can be quantified by utility functions that account for the preferences of different stakeholders. In particular, we optimize expected utilities from the perspective of a commercial sponsor, maximizing the net present value, and from the perspective of society, maximizing the cost-adjusted expected health benefit of a new treatment for a specific population. We consider single-stage and adaptive two-stage designs with partial enrichment, where the proportion of patients recruited from the subgroup is a design parameter. For the adaptive designs, we use a dynamic programming approach to derive optimal adaptation rules. The proposed designs are compared to non-enriched trials (i.e. trials where the proportion of patients in the subgroup corresponds to the prevalence in the underlying population). We show that partial enrichment designs can substantially improve the expected utilities. Furthermore, adaptive partial enrichment designs are more robust than single-stage designs and retain high expected utilities even if the expected utilities are evaluated under a different prior than the one used in the optimization. In addition, we find that trials optimized for the sponsor's utility function have smaller sample sizes than trials optimized under the societal view and may include the overall population (with patients from the complement of the subgroup) even if there is substantial evidence that the therapy is only effective in the subgroup.
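    To make the optimization concrete, the sketch below evaluates, by Monte Carlo, a sponsor-style expected utility of a single-stage partial enrichment design as a function of the total sample size and the enrichment fraction, and picks the best combination on a small grid. All priors, utility components, and decision rules here are illustrative assumptions rather than the paper's specification, and the adaptive two-stage optimization via dynamic programming is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative prior on the standardized treatment effect in the subgroup (S)
# and in its complement (C); all numbers below are assumptions, not values
# taken from the paper.
prior = rng.multivariate_normal(mean=[0.3, 0.1],
                                cov=[[0.04, 0.0], [0.0, 0.04]], size=5000)
prevalence = 0.4                      # prevalence of the subgroup
gain, cost_per_patient = 1000.0, 0.1  # utility components (arbitrary units)
z_crit = 1.96                         # one-sided 2.5% significance level

def expected_utility(n, enrich):
    """Monte-Carlo expected utility of a single-stage design recruiting a
    fraction `enrich` of its n patients from the subgroup."""
    delta_s, delta_c = prior[:, 0], prior[:, 1]
    # mean effect in the (possibly enriched) trial population
    delta_trial = enrich * delta_s + (1 - enrich) * delta_c
    # z-statistic of the primary test, normal approximation, 1:1 allocation
    z = delta_trial * np.sqrt(n / 4) + rng.standard_normal(len(prior))
    success = z > z_crit
    if enrich == 1.0:   # fully enriched trial: benefit claimed only for S
        benefit = np.maximum(delta_s, 0) * prevalence
    else:               # trial covering the full (partially enriched) population
        benefit = np.maximum(delta_s * prevalence + delta_c * (1 - prevalence), 0)
    return float(np.mean(success * gain * benefit) - cost_per_patient * n)

grid = [(n, lam) for n in (200, 400, 600) for lam in (prevalence, 0.6, 0.8, 1.0)]
best = max(grid, key=lambda d: expected_utility(*d))
print("utility-maximising (n, enrichment fraction):", best)
```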

    Nonproportional Hazards for Time-to-Event Outcomes in Clinical Trials: JACC Review Topic of the Week.

    Most major clinical trials in cardiology report time-to-event outcomes using the Cox proportional hazards model, so that a treatment effect is estimated as the hazard ratio between groups, accompanied by its 95% confidence interval and a log-rank p value. But nonproportionality of hazards (non-PH) over time occurs quite often, making alternative analysis strategies appropriate. This review presents real examples of cardiology trials with different types of non-PH: an early treatment effect, a late treatment effect, and a diminishing treatment effect. In such scenarios, the relative merits of a Cox model, an accelerated failure time model, a milestone analysis, and restricted mean survival time are examined. Some post hoc analyses for exploring any specific pattern of non-PH are also presented. Recommendations are made, particularly regarding how to handle non-PH in pre-defined Statistical Analysis Plans, trial publications, and regulatory submissions.
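    Two of the alternatives discussed, milestone analysis (the survival probability at a fixed landmark time) and the restricted mean survival time (RMST, the area under the Kaplan-Meier curve up to a horizon tau), can be computed directly from the Kaplan-Meier estimate without assuming proportional hazards. The self-contained sketch below illustrates both; the simulated data, the 36-month landmark, and the 60-month horizon are arbitrary choices for illustration only.

```python
import numpy as np

def kaplan_meier(time, event, grid):
    """Kaplan-Meier survival probabilities evaluated at the points in `grid`."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, s = np.ones(len(grid)), 1.0
    for t in np.sort(np.unique(time[event == 1])):
        n_risk = (time >= t).sum()                 # at risk just before t
        d = ((time == t) & (event == 1)).sum()     # events at t
        s *= 1.0 - d / n_risk
        surv[np.asarray(grid) >= t] = s
    return surv

def rmst(time, event, tau, n_grid=2001):
    """Restricted mean survival time: area under the KM step curve up to tau."""
    grid = np.linspace(0.0, tau, n_grid)
    surv = kaplan_meier(time, event, grid)
    return float(np.sum(surv[:-1] * np.diff(grid)))  # left-rectangle rule

# Hypothetical per-arm data: times in months and event indicators (1 = death).
rng = np.random.default_rng(3)
t_exp, t_ctl = rng.exponential(40, 300), rng.exponential(30, 300)
e_exp, e_ctl = (t_exp < 60).astype(int), (t_ctl < 60).astype(int)

print("36-month milestone difference:",
      kaplan_meier(t_exp, e_exp, [36.0])[0] - kaplan_meier(t_ctl, e_ctl, [36.0])[0])
print("RMST difference over 60 months:",
      rmst(t_exp, e_exp, 60) - rmst(t_ctl, e_ctl, 60))
```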

    On model-based time trend adjustments in platform trials with non-concurrent controls

    Platform trials can evaluate the efficacy of several treatments compared to a control. The number of treatments is not fixed, as arms may be added or removed as the trial progresses. Platform trials are more efficient than independent parallel-group trials because they use shared control groups. For arms entering the trial later, however, not all patients in the control group are randomised concurrently; the control group is then divided into concurrent and non-concurrent controls. Using non-concurrent controls (NCC) can improve the trial's efficiency but can introduce bias due to time trends. We focus on a platform trial with two treatment arms and a common control arm. Assuming that the second treatment arm is added later, we assess the robustness of model-based approaches to adjust for time trends when using NCC. We consider approaches where time trends are modelled either as linear or as a step function, with steps at the times where arms enter or leave the trial. For trials with continuous or binary outcomes, we investigate the type 1 error rate and the power of testing the efficacy of the newly added arm under a range of scenarios. In addition to scenarios where time trends are equal across arms, we investigate settings with trends that differ between arms or are not additive on the model scale. A step function model fitted on data from all arms gives increased power while controlling the type 1 error, as long as the time trends are equal for the different arms and additive on the model scale. This holds even if the shape of the trend deviates from a step function, provided block randomisation is used. But if trends differ between arms or are not additive on the model scale, type 1 error control may be lost. The efficiency gained by using step function models to incorporate NCC can outweigh the potential biases; however, the specifics of the trial, the plausibility of different time trends, and the robustness of results should be considered.
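    A minimal illustration of the step-function adjustment: simulate a platform trial in which the control arm recruits throughout, arm 1 recruits in both periods, and arm 2 only enters in period 2, then estimate the arm 2 effect with an ordinary linear model that includes a fixed effect for each calendar-time period. The period effects are what allow the model to borrow the non-concurrent controls from period 1. All effect sizes, trend sizes, and sample sizes below are assumptions for the sketch, not values from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Illustrative platform trial: control recruits throughout, arm 1 recruits in
# periods 1-2, arm 2 enters only in period 2.
rows = []
for period, arms in [(1, ["control", "arm1"]), (2, ["control", "arm1", "arm2"])]:
    for arm in arms:
        effect = {"control": 0.0, "arm1": 0.2, "arm2": 0.3}[arm]
        trend = 0.5 * (period - 1)          # additive time trend, equal across arms
        y = effect + trend + rng.standard_normal(100)
        rows += [{"y": v, "arm": arm, "period": period} for v in y]
df = pd.DataFrame(rows)

# Step-function adjustment: a fixed effect per calendar-time period lets the
# model use the non-concurrent controls from period 1 when estimating arm 2.
fit = smf.ols("y ~ C(arm, Treatment(reference='control')) + C(period)", data=df).fit()
print(fit.params.filter(like="arm2"))
print(fit.pvalues.filter(like="arm2"))
```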

    A sequential allocation rule


    On Sequential Treatment Allocations in Clinical Trials

    This dissertation treats baseline-dependent sequential designs of two-treatment parallel-group clinical trials. The treatment assignments are chosen in order to minimize the variance, in a linear model, of the estimated treatment effect. This is done for each new allocation using a generalized biased coin design, or a non-randomized 'minimization' method. For the 'minimization' method, the balance process is shown to be tight. It follows that the loss, defined roughly as the number of patients lost due to imbalance, is of order 1/N (where N is the trial size). The ANCOVA statistic is used in both parametric and randomization tests when the design is randomized. Deficiency (or second-order efficiency), of design and analysis combined, is defined in terms of expected p-value. The asymptotic deficiency of the randomization analysis following a biased coin design is obtained when prognostic factors are ignored. It can be arbitrarily close to zero relative to the balanced t-test when assuming a normal model. Similar results, when prognostic variables are used, are indicated by simulations. As a comparison, the expected loss and the asymptotic expected p-value deficiency, relative to a balanced parametric test, equal the number of prognostic variables when independent randomizations are used. AMS 1991 subject classification: 62L05, 60K30, 62G10, 62P1.
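    As an illustration of the kind of allocation rule studied here, the sketch below assigns each incoming patient to the treatment (+1) or control (-1) arm so as to reduce the variance of the treatment-effect estimator in a linear model with prognostic covariates, and adds a biased-coin element so the choice is randomized rather than deterministic. This is a simplified, D_A-optimality-style sketch under an assumed coding and bias probability, not the dissertation's exact generalized biased coin design or 'minimization' rule.

```python
import numpy as np

def allocate_next(X_prev, t_prev, x_new, p_bias=2/3, rng=None):
    """Covariate-adaptive allocation for a two-arm trial with a linear model.

    For the incoming patient (covariates x_new), compute the variance of the
    estimated treatment effect under each candidate assignment (+1 / -1 coding)
    and favour the assignment that makes it smaller.  With p_bias = 1 this is a
    deterministic 'minimization'-type rule; with p_bias < 1 it behaves like a
    biased coin.  Assumes enough patients are already allocated for X'X to be
    invertible.
    """
    rng = rng or np.random.default_rng()
    variances = []
    for t_new in (+1.0, -1.0):
        t = np.append(np.asarray(t_prev, float), t_new)
        # design matrix: intercept, treatment indicator, prognostic covariates
        X = np.column_stack([np.ones(len(t)), t, np.vstack([X_prev, x_new])])
        cov = np.linalg.inv(X.T @ X)        # proportional to the coefficient covariance
        variances.append(cov[1, 1])         # variance of the treatment coefficient
    preferred = (+1.0, -1.0)[int(np.argmin(variances))]
    return preferred if rng.random() < p_bias else -preferred

# Example: 50 already-allocated patients with two prognostic covariates, then
# allocate patient 51 (all data simulated for illustration).
rng = np.random.default_rng(0)
X_prev = rng.standard_normal((50, 2))
t_prev = rng.choice([+1.0, -1.0], size=50)
print(allocate_next(X_prev, t_prev, x_new=rng.standard_normal(2), rng=rng))
```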
