A simulation study for comparing testing statistics in response-adaptive randomization
<p>Abstract</p> <p>Background</p> <p>Response-adaptive randomization assigns more patients in a comparative clinical trial to the tentatively better treatment. However, due to the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent samples still apply under adaptive randomization, provided that the patient allocation ratio converges to an appropriate target asymptotically. However, the small-sample properties of commonly used test statistics in response-adaptive randomization have not been fully studied.</p> <p>Methods</p> <p>Simulations are systematically conducted to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets, with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes below 30, the present paper focuses on the case with a sample size of 30 to give general recommendations on test statistics for contingency tables in response-adaptive randomization at small sample sizes.</p> <p>Results</p> <p>Among all asymptotic test statistics, Cook's correction to the chi-square test (<it>T</it><sub><it>MC</it></sub>) is the best at attaining the nominal size of the hypothesis test. Williams' correction to the log-likelihood-ratio test (<it>T</it><sub><it>ML</it></sub>) gives a slightly inflated type I error and higher power compared with <it>T</it><sub><it>MC</it></sub>, but it is more robust against imbalance in patient allocation. <it>T</it><sub><it>MC </it></sub>and <it>T</it><sub><it>ML </it></sub>are usually the two test statistics with the highest power across simulation scenarios. 
When focusing on <it>T</it><sub><it>MC </it></sub>and <it>T</it><sub><it>ML</it></sub>, the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU), respectively, are best at attaining the correct size of the hypothesis test. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of the different adaptive randomization methods and test statistics also depends on the allocation target. At the limiting allocation ratio of the drop-the-loser (DL) and randomized play-the-winner (RPW) urns, DL outperforms all other methods, including GDL. When comparing the power of test statistics within the same randomization method but at different allocation targets, the powers of the log-likelihood-ratio, log-relative-risk, log-odds-ratio, Wald-type Z, and chi-square test statistics are maximized at their corresponding power-optimal allocation ratios. Except for the optimal allocation target for log-relative-risk, the other four optimal targets can assign more patients to the worse arm in some simulation scenarios. Another optimal allocation target, <it>R</it><sub><it>RSIHR</it></sub>, proposed by Rosenberger and Sriram (<it>Journal of Statistical Planning and Inference</it>, 1997), aims to minimize the number of failures at fixed power using the Wald-type Z test statistic. Among allocation ratios that always assign more patients to the better treatment, <it>R</it><sub><it>RSIHR </it></sub>usually has less variation in patient allocation, and the amount of variation is consistent across all simulation scenarios. Additionally, the patient allocation at <it>R</it><sub><it>RSIHR </it></sub>is not too extreme. 
Therefore, <it>R</it><sub><it>RSIHR </it></sub>provides a good balance between assigning more patients to the better treatment and maintaining overall power.</p> <p>Conclusion</p> <p>Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are generally recommended for hypothesis testing in response-adaptive randomization, especially when sample sizes are small. The generalized drop-the-loser urn design is the recommended method for its good overall properties. Use of the <it>R</it><sub><it>RSIHR </it></sub>allocation target is also recommended.</p>
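The randomized play-the-winner (RPW) urn compared above can be illustrated with a short simulation. This is a minimal sketch, not the paper's simulation code: the success probabilities, initial urn composition, sample size, and number of replicates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rpw_trial(n, p_a, p_b, alpha=1):
    """One randomized play-the-winner (RPW) trial of n patients.

    The urn starts with `alpha` balls per arm; each draw assigns the
    corresponding treatment, a success adds a ball for the same arm,
    and a failure adds a ball for the opposite arm.
    """
    balls = np.array([alpha, alpha], dtype=float)
    n_a = 0
    for _ in range(n):
        arm = rng.choice(2, p=balls / balls.sum())  # draw a ball
        n_a += (arm == 0)
        success = rng.random() < (p_a if arm == 0 else p_b)
        balls[arm if success else 1 - arm] += 1
    return n_a / n  # proportion of patients allocated to arm A

# With success rates p_a = 0.8 and p_b = 0.5, the limiting allocation to
# the better arm A is q_b / (q_a + q_b) = 0.5 / 0.7, roughly 0.71.
props = [rpw_trial(200, 0.8, 0.5) for _ in range(500)]
print(np.mean(props))
```

The spread of `props` across replicates also illustrates the allocation variability that distinguishes the urn designs compared in the paper.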
Meta-analysis of continuous outcomes: using pseudo IPD created from aggregate data to adjust for baseline imbalance and assess treatment-by-baseline modification.
Meta-analysis of individual participant data (IPD) is considered the "gold standard" for synthesizing clinical study evidence. However, gaining access to IPD can be a laborious task (if possible at all), and in practice only summary (aggregate) data are commonly available. In this work we focus on meta-analytic approaches for comparative studies where aggregate data are available for continuous outcomes measured at baseline (pre-treatment) and follow-up (post-treatment). We propose a method for constructing pseudo individual baselines and outcomes from the aggregate data. These pseudo IPD can subsequently be analysed using standard analysis of covariance (ANCOVA) methods. Pseudo IPD for continuous outcomes reported at two timepoints can be generated from the sufficient statistics of an ANCOVA model, i.e. the mean and standard deviation at baseline and follow-up per group, together with the correlation between the baseline and follow-up measurements. Applying the ANCOVA approach, which crucially adjusts for baseline imbalances and accounts for the correlation between baseline and change scores, to the pseudo IPD yields estimates identical to those obtained by an ANCOVA on the true IPD. In addition, an interaction term between baseline and treatment effect can be added. There are several modelling options available under this approach, which makes it very flexible. The methods are exemplified using the reported data of a previously published IPD meta-analysis of 10 trials investigating the effect of antihypertensive treatments on systolic blood pressure, leading to results identical to the true IPD analysis, and using a meta-analysis of fewer trials in which baseline imbalance occurred.
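The construction described above can be sketched in a few lines: random draws are standardised and orthogonalised so that the pseudo IPD reproduce the reported group means, SDs, and baseline-follow-up correlation exactly, after which an ordinary ANCOVA is fitted. The aggregate numbers below are hypothetical, not those of the antihypertensive meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_ipd(n, mean_b, sd_b, mean_f, sd_f, r):
    """Generate n pseudo baseline/follow-up pairs whose sample mean,
    SD (ddof=1) and correlation exactly match the aggregate data."""
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    z1 = (z1 - z1.mean()) / z1.std(ddof=1)
    z2 = z2 - z2.mean()
    z2 -= z1 * (z1 @ z2) / (z1 @ z1)       # residualise: make z2 orthogonal to z1
    z2 /= z2.std(ddof=1)
    y = r * z1 + np.sqrt(1 - r**2) * z2    # exact sample correlation r
    return mean_b + sd_b * z1, mean_f + sd_f * y

# Hypothetical aggregate data for one two-arm trial (illustrative numbers).
base_c, fup_c = pseudo_ipd(50, 160.0, 15.0, 155.0, 16.0, 0.6)   # control
base_t, fup_t = pseudo_ipd(50, 162.0, 15.5, 148.0, 16.5, 0.6)   # treated

# ANCOVA: follow-up ~ intercept + baseline + treatment indicator.
baseline = np.concatenate([base_c, base_t])
followup = np.concatenate([fup_c, fup_t])
treat = np.concatenate([np.zeros(50), np.ones(50)])
X = np.column_stack([np.ones(100), baseline, treat])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(coef[2])  # baseline-adjusted treatment effect
```

Because the generated data match the sufficient statistics exactly, the fitted coefficients are the same as those an ANCOVA on the true IPD would give.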
A re-randomisation design for clinical trials
Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies. Methods: For many conditions patients will require treatment on several occasions, for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used. Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations. Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations, it can increase the power compared to a parallel group design with an equivalent number of observations
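The claim that independent re-randomisations yield an unbiased treatment effect estimate can be checked with a toy simulation. This is an illustrative sketch under the abstract's conditions (a)-(c): a shared patient-level random effect induces correlation between a patient's episodes, each episode is randomised independently, and the treatment effect is constant; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n_patients=100, effect=0.5):
    """One re-randomisation trial: each patient is independently
    randomised at every new episode; outcomes from the same patient
    are correlated through a shared random effect."""
    outcomes, arms = [], []
    for _ in range(n_patients):
        u = rng.normal(0.0, 1.0)             # patient-level random effect
        for _ in range(rng.integers(1, 4)):  # 1-3 episodes per patient
            t = int(rng.integers(0, 2))      # fresh 1:1 randomisation
            arms.append(t)
            outcomes.append(u + effect * t + rng.normal(0.0, 1.0))
    outcomes, arms = np.array(outcomes), np.array(arms)
    return outcomes[arms == 1].mean() - outcomes[arms == 0].mean()

estimates = [one_trial() for _ in range(400)]
print(np.mean(estimates))  # close to the true effect of 0.5
```

A valid analysis would additionally use cluster-robust or mixed-model standard errors to account for the within-patient correlation; the sketch only demonstrates unbiasedness of the point estimate.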
Risk of selection bias in randomised trials
Background: Selection bias occurs when recruiters selectively enrol patients into the trial based on what the next treatment allocation is likely to be. This can occur even if appropriate allocation concealment is used if recruiters can guess the next treatment assignment with some degree of accuracy. This typically occurs in unblinded trials when restricted randomisation is implemented to force the number of patients in each arm or within each centre to be the same. Several methods to reduce the risk of selection bias have been suggested; however, it is unclear how often these techniques are used in practice. Methods: We performed a review of published trials which were not blinded to assess whether they utilised methods for reducing the risk of selection bias. We assessed the following techniques: (a) blinding of recruiters; (b) use of simple randomisation; (c) avoidance of stratification by site when restricted randomisation is used; (d) avoidance of permuted blocks if stratification by site is used; and (e) incorporation of prognostic covariates into the randomisation procedure when restricted randomisation is used. We included parallel group, individually randomised phase III trials published in four general medical journals (BMJ, Journal of the American Medical Association, The Lancet, and New England Journal of Medicine) in 2010. Results: We identified 152 eligible trials. Most trials (98%) provided no information on whether recruiters were blind to previous treatment allocations. Only 3% of trials used simple randomisation; 63% used some form of restricted randomisation, and 35% did not state the method of randomisation. Overall, 44% of trials were stratified by site of recruitment; 27% were not, and 29% did not report this information. Most trials that did stratify by site of recruitment used permuted blocks (58%), and only 15% reported using random block sizes. 
Many trials that used restricted randomisation also included prognostic covariates in the randomisation procedure (56%). Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials that did provide details on the randomisation procedure were at risk of selection bias due to a poorly chosen randomisation method. Techniques to reduce the risk of selection bias should be more widely implemented
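A small simulation makes the block-randomisation risk concrete: whenever one arm's allocations within a block are exhausted, the remaining assignments in that block are predictable with certainty. This sketch assumes fixed blocks of size 4 with 1:1 allocation; it is an illustration, not a re-analysis of the reviewed trials.

```python
import numpy as np

rng = np.random.default_rng(3)

def deterministic_fraction(block_size=4, n_blocks=20000):
    """Fraction of allocations an unblinded recruiter could predict
    with certainty under permuted blocks (one arm already exhausted
    within the current block)."""
    half = block_size // 2
    count = 0
    for _ in range(n_blocks):
        block = rng.permutation([0] * half + [1] * half)
        a_left, b_left = half, half
        for alloc in block:
            if a_left == 0 or b_left == 0:  # next assignment is forced
                count += 1
            if alloc == 0:
                a_left -= 1
            else:
                b_left -= 1
    return count / (block_size * n_blocks)

print(deterministic_fraction())  # about 1/3 for blocks of size 4
```

Random block sizes, simple randomisation, or blinding of recruiters, the techniques assessed in the review, all shrink this predictable fraction.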
Adjusting for multiple prognostic factors in the analysis of randomised trials
Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method.
Methods: We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome.
Results: A stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of the different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods led to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power.
Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size
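As a concrete illustration of a stratified analysis for a binary outcome, the sketch below computes the Mantel-Haenszel common odds ratio over hypothetical 2x2 strata (counts invented for illustration). Both strata have a within-stratum odds ratio of 6, which the stratified estimate recovers exactly, while naively collapsing the strata gives a distorted value of 7.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across 2x2 strata.

    Each stratum is (a, b, c, d): events / non-events on treatment
    (a, b) and on control (c, d)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical strata, each with a within-stratum odds ratio of 6.
strata = [(3, 1, 1, 2), (2, 4, 1, 12)]
print(mantel_haenszel_or(strata))  # 6.0: the common odds ratio

# Collapsing the strata into a single table distorts the estimate.
a, b, c, d = (sum(t[i] for t in strata) for i in range(4))
print(a * d / (b * c))             # 7.0: confounded pooled estimate
```

With sparse strata, the paper's recommended alternative of treating strata as random effects avoids the instability that stratum-by-stratum estimates can suffer.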
Effectiveness evaluation of an integrated automatic termomechanic massage system (SMATH® system) in non-specific sub-acute and chronic low back pain - a randomized double-blinded controlled trial, comparing SMATH therapy versus sham therapy: study protocol for a randomized controlled trial
<p>Abstract</p> <p>Background</p> <p>Low back pain (LBP) is a major health problem in modern society, with 70-85% of the population experiencing LBP at some time in their lives. Each year, 5-10% of the workforce misses work due to LBP, most for less than 7 days. Almost 10% of all patients are at risk of developing chronic pain and disability. Little clinical evidence is available for the majority of treatments used in LBP therapy. However, moderate evidence exists for interdisciplinary rehabilitation, exercise, acupuncture, spinal manipulation, and cognitive behavioral therapy for subacute and chronic LBP. The SMATH<sup>® </sup>system (system for automatic thermomechanic massage in health) is a new medical device (MD) that combines basic principles of mechanical massage, thermotherapy, acupressure, infrared therapy, and moxibustion. SMATH<sup>® </sup>is suitable for automatic multidisciplinary treatment of patients with non-specific sub-acute and chronic LBP.</p> <p>Methods/design</p> <p>This paper describes the protocol for a double-blinded, sham-controlled, randomized, single-center, short-term clinical trial in patients with non-specific sub-acute and chronic LBP aged 18 to 70 years. The primary outcome will be the effectiveness of SMATH<sup>® </sup>versus sham therapy (a medical device without active principles), determined by evaluating self-perceived physical function with Roland Morris Disability Questionnaire (RMDQ) scores after 4 weeks of treatment (end of treatment). The major secondary outcome will be the effectiveness of SMATH<sup>® </sup>determined by comparing self-perceived physical function (RMDQ scores) between the end of treatment and baseline. The trial part of the study will take 7 months, while observational follow-up will take 11 months. The sample size will be 72 participants (36 for each arm). 
The project was approved by the Ethical Committee of Cremona Hospital, Italy, on 29 November 2010.</p> <p>Discussion</p> <p>Compared to other medical specialties, physical and rehabilitation medicine (PRM) has not yet received the recognition it deserves from clinicians and researchers in the scientific community, especially with regard to medical devices. The best way to overcome this disadvantage is through well-conducted clinical research in sham-controlled randomized trials. Sham treatment groups are essential for improving the level of evidence-based practice in PRM. The present trial will counter the general lack of evidence concerning medical devices used in LBP therapy.</p> <p>Trial Registration</p> <p>ISRCTN: <a href="http://www.controlled-trials.com/ISRCTN08714168">ISRCTN08714168</a></p>
A comparison of methods to adjust for continuous covariates in the analysis of randomised trials
BACKGROUND: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. METHODS: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. RESULTS: Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. CONCLUSIONS: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt
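The power cost of dichotomisation can be seen directly in the model-based standard error of the treatment effect. The sketch below uses an assumed data-generating model with a linear covariate effect (so the linear adjustment is correctly specified, and a spline or fractional polynomial fit would behave similarly) and compares adjusting for the covariate as continuous versus after a median split; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def treatment_se(dichotomise, n=2000):
    """Model-based SE of the treatment effect after adjusting for a
    prognostic covariate, kept continuous or split at the median."""
    x = rng.normal(0.0, 1.0, n)
    t = rng.integers(0, 2, n).astype(float)
    y = 0.3 * t + 1.0 * x + rng.normal(0.0, 1.0, n)  # true effect 0.3
    cov = (x > np.median(x)).astype(float) if dichotomise else x
    X = np.column_stack([np.ones(n), cov, t])
    coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = rss[0] / (n - X.shape[1])
    return float(np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2]))

print(treatment_se(False), treatment_se(True))
# the continuous adjustment gives the smaller standard error
```

The median split leaves part of the covariate's prognostic information in the residual variance, which directly inflates the standard error and reduces power, the pattern the simulations above report.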
The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies
Adjustment for prognostic covariates can lead to increased power in the analysis of randomized trials. However, adjusted analyses are not often performed in practice
Adaptive design methods in clinical trials – a review
In recent years, the use of adaptive design methods in clinical research and development, based on accrued data, has become very popular due to their flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to the trial and/or statistical procedures of an ongoing clinical trial. However, there is a concern that the actual patient population after the adaptations could deviate from the original target patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of the trial and/or statistical procedures of an ongoing trial may result in a totally different trial that is unable to address the scientific/medical questions the trial was intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. The impact of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles to retrospective adaptations are described. Strategies for the use of adaptive designs in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed
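The type I error concern raised above is easy to demonstrate for the simplest kind of adaptation, an unadjusted interim look: testing accumulating data twice at the nominal 5% level inflates the overall error rate. The sketch below is a generic illustration under standard normal data, not an analysis from the review.

```python
import numpy as np

rng = np.random.default_rng(5)

def two_look_trial(n=100, crit=1.96):
    """Under H0 (zero mean), test the running mean at an interim look
    (n/2 observations) and at the end, rejecting if either unadjusted
    z-statistic exceeds the two-sided 5% critical value."""
    x = rng.normal(0.0, 1.0, n)
    z_half = x[: n // 2].mean() * np.sqrt(n // 2)
    z_full = x.mean() * np.sqrt(n)
    return abs(z_half) > crit or abs(z_full) > crit

rate = np.mean([two_look_trial() for _ in range(20000)])
print(rate)  # noticeably above the nominal 0.05
```

Prospective ("by design") adaptations control this inflation by pre-specifying adjusted boundaries, e.g. group-sequential stopping rules, rather than reusing the fixed-sample critical value at every look.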