
    Statistical Estimation Procedures for the "Burn-in" Process

    Statistical estimation procedures for identifying and eliminating poor-quality or defective items.

    Adaptive design methods in clinical trials – a review

    In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of their flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to the trial and/or statistical procedures of ongoing clinical trials. However, there is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations to the trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles to retrospective adaptations are described. Strategies for the use of adaptive designs in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.
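    A minimal sketch of the type I error concern the abstract raises: if an ongoing trial is analysed at an interim look and again at the end, each time at the full significance level and without any pre-specified alpha-spending or combination-test adjustment, the overall false-positive rate exceeds the nominal level. The simulation below is illustrative only (not from the article) and all parameter values are assumptions.

```python
# Illustrative simulation: two unadjusted "looks" at accumulating data inflate
# the overall type I error rate above the nominal 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_null_trial(n_interim=50, n_final=100, alpha=0.05):
    """Simulate one two-arm trial with no true effect and an unadjusted interim look."""
    control = rng.normal(0.0, 1.0, n_final)
    treated = rng.normal(0.0, 1.0, n_final)  # same distribution: the null is true
    # Interim analysis on the first n_interim patients per arm, no alpha spending.
    _, p_interim = stats.ttest_ind(treated[:n_interim], control[:n_interim])
    # Final analysis on all patients, again tested at the full alpha level.
    _, p_final = stats.ttest_ind(treated, control)
    return (p_interim < alpha) or (p_final < alpha)

n_sims = 20_000
rejections = sum(one_null_trial() for _ in range(n_sims))
print(f"Overall type I error with two unadjusted looks: {rejections / n_sims:.3f} (nominal 0.05)")
```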

    Spectacular bodies, unsettling objects: material performance as intervention in stereotypes of refugees

    The body of Palestinian refugee puppetry artist Husam Abed co-exists in Dafa Puppet Theatre’s 'The Smooth Life' as spectator, puppeteer, and performer as he manipulates pieces of wood, cardboard, photographs, and grains of rice to construct his story of growing up in a refugee camp. Given the recent attention to how material and technological intersections with human bodies reconstruct stereotypes in liberatory ways in performance, this analysis explores the ways in which material performance practice can intervene in stereotyped media-driven representations of refugee bodies. Refugee bodies in such representations are fixed outside of agentive possibility: mediated, materially absent, unmournable. These stereotyped bodies circulate in multiple media narratives that represent the refugee body on a spectrum from threat/contamination to pitiable/victim, narratives that provoke affective responses while foreclosing meaningful intervention. Through analysing the puppetry/object theatre piece 'The Smooth Life', Purcell-Gates explores how these material performance practices unsettle and disrupt this spectrum of stereotypes.

    Assessing potential sources of clustering in individually randomised trials

    Recent reviews have shown that while clustering is extremely common in individually randomised trials (for example, clustering within centre, therapist, or surgeon), it is rarely accounted for in the trial analysis. Our aim is to develop a general framework for assessing whether potential sources of clustering must be accounted for in the trial analysis to obtain valid type I error rates (non-ignorable clustering), with a particular focus on individually randomised trials.
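    A hedged sketch of why such clustering can be non-ignorable (this is not the paper's framework; the clustering structure and all parameter values are assumptions): when treated patients share therapist-level effects but the analysis treats every observation as independent, the standard errors are too small and the type I error rate is inflated.

```python
# Simulation: a t-test that ignores clustering by therapist in the intervention
# arm rejects the null too often even though there is no true treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_null_trial(n_per_arm=100, n_therapists=5, therapist_sd=0.4):
    """One trial with no true treatment effect.

    Treated patients share therapist-level effects (clustering); controls are
    independent. The analysis is a two-sample t-test that ignores the clustering.
    """
    therapist = rng.integers(0, n_therapists, n_per_arm)
    therapist_effect = rng.normal(0.0, therapist_sd, n_therapists)[therapist]
    treated = therapist_effect + rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    return p < 0.05

n_sims = 10_000
type_i_error = np.mean([one_null_trial() for _ in range(n_sims)])
print(f"Type I error when therapist clustering is ignored: {type_i_error:.3f} (nominal 0.05)")
```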

    Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit

    Background: The pace of novel medical treatments and approaches to therapy has accelerated in recent years. Unfortunately, many potential therapeutic advances do not fulfil their promise when subjected to randomized controlled trials. It is therefore highly desirable to speed up the process of evaluating new treatment options, particularly in phase II and phase III trials. To help realize such an aim, in 2003, Royston and colleagues proposed a class of multi-arm, two-stage trial designs intended to eliminate poorly performing contenders at a first stage (point in time). Only treatments showing a predefined degree of advantage against a control treatment were allowed through to a second stage. Arms that survived the first-stage comparison on an intermediate outcome measure entered a second stage of patient accrual, culminating in comparisons against control on the definitive outcome measure. The intermediate outcome is typically on the causal pathway to the definitive outcome (i.e. the features that cause an intermediate event also tend to cause a definitive event), an example in cancer being progression-free and overall survival. Although the 2003 paper alluded to multi-arm trials, most of the essential design features concerned only two-arm trials. Here, we extend the two-arm designs to allow an arbitrary number of stages, thereby increasing flexibility by building in several 'looks' at the accumulating data. Such trials can terminate at any of the intermediate stages or the final stage. Methods: We describe the trial design and the mathematics required to obtain the timing of the 'looks' and the overall significance level and power of the design. We support our results by extensive simulation studies. As an example, we discuss the design of the STAMPEDE trial in prostate cancer. Results: The mathematical results on significance level and power are confirmed by the computer simulations. Our approach compares favourably with methodology based on beta spending functions and on monitoring only a primary outcome measure for lack of benefit of the new treatment. Conclusions: The new designs are practical and are supported by theory. They hold considerable promise for speeding up the evaluation of new treatments in phase II and III trials.
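    To make the stopping-for-lack-of-benefit idea concrete, here is an illustrative sketch (not the Royston et al. design, the paper's mathematics, or the STAMPEDE calculations): an experimental arm is compared with control at several stages using an approximate log hazard ratio z-statistic, and the arm is dropped if it fails to show the required degree of advantage. For simplicity a single time-to-event outcome is used at every stage, whereas the actual designs switch to the definitive outcome for the final comparison; the stage sizes, hazards, and boundaries are arbitrary assumptions.

```python
# Illustrative multi-stage stopping rule for lack of benefit on a time-to-event outcome.
import numpy as np

rng = np.random.default_rng(2)

def simulate_arm(n, rate, follow_up):
    """Exponential event times with administrative censoring at `follow_up`."""
    t = rng.exponential(1.0 / rate, n)
    events = t <= follow_up
    return events.sum(), np.minimum(t, follow_up).sum()

def log_hr_z(events_ctrl, time_ctrl, events_exp, time_exp):
    """Approximate z-statistic for the log hazard ratio of two exponential arms."""
    log_hr = np.log((events_exp / time_exp) / (events_ctrl / time_ctrl))
    se = np.sqrt(1.0 / events_ctrl + 1.0 / events_exp)
    return log_hr / se

# Stagewise accrual (patients per arm per stage) and lack-of-benefit boundaries
# on the z-scale; negative z favours the experimental arm. Values are assumptions.
stage_n = [100, 100, 100]
boundaries = [0.5, 0.0, -1.96]

events_c = time_c = events_e = time_e = 0.0
for stage, (n, bound) in enumerate(zip(stage_n, boundaries), start=1):
    ec, tc = simulate_arm(n, rate=0.10, follow_up=12)  # control hazard (assumed)
    ee, te = simulate_arm(n, rate=0.08, follow_up=12)  # experimental hazard (assumed)
    events_c += ec; time_c += tc; events_e += ee; time_e += te
    z = log_hr_z(events_c, time_c, events_e, time_e)
    print(f"Stage {stage}: cumulative z = {z:.2f} (continue if z <= {bound})")
    if z > bound:
        print("Stop the arm for lack of benefit" if stage < len(stage_n)
              else "Final stage: benefit not demonstrated")
        break
else:
    print("Benefit demonstrated on the final-stage comparison")
```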

    Accounting for centre-effects in multicentre trials with a binary outcome - when, why, and how?

    BACKGROUND: It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome. METHODS: We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study. RESULTS: The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across the different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared with the other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEE with non-robust standard errors (SEs); using a robust ‘sandwich’ estimator led to inflated type I error rates across most scenarios. CONCLUSIONS: With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects and GEE with non-robust SEs should be used with a moderate or large number of centres.
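    A hedged sketch of what three of the compared analyses look like in practice (this is not the MIST2 analysis; the simulated data, column names, and parameter values are assumptions, and the exact statsmodels options should be checked against its documentation).

```python
# Fitting fixed-effects, GEE (model-based SEs), and Mantel-Haenszel analyses to
# simulated multicentre binary-outcome data with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(3)

# Simulate 20 centres with centre-specific baseline risks and a common odds ratio.
n_centres, n_per_centre = 20, 40
centre = np.repeat(np.arange(n_centres), n_per_centre)
centre_logit = rng.normal(-0.5, 0.6, n_centres)[centre]
treatment = rng.integers(0, 2, centre.size)
p = 1 / (1 + np.exp(-(centre_logit + 0.5 * treatment)))  # true log-OR = 0.5
outcome = rng.binomial(1, p)
df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "centre": centre})

# 1) Fixed-effects logistic regression: one indicator per centre.
fixed = smf.logit("outcome ~ treatment + C(centre)", data=df).fit(disp=False)

# 2) GEE with exchangeable within-centre correlation, reporting model-based
#    ("naive") SEs rather than the robust sandwich estimator.
gee = sm.GEE.from_formula("outcome ~ treatment", groups="centre", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit(cov_type="naive")

# 3) Mantel-Haenszel pooled odds ratio across centre strata.
mh = StratifiedTable.from_data("treatment", "outcome", "centre", df)

print("Fixed-effects log-OR:", round(fixed.params["treatment"], 3))
print("GEE (naive SE) log-OR:", round(gee.params["treatment"], 3))
print("Mantel-Haenszel OR:", round(mh.oddsratio_pooled, 3))
# A random-effects (mixed) logistic model could also be fitted, e.g. with a
# mixed-model package; it is not shown here.
```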

    Endpoints for randomized controlled clinical trials for COVID-19 treatments

    Background: Endpoint choice for randomized controlled trials of treatments for novel coronavirus-induced disease (COVID-19) is complex. Trials must start rapidly to identify treatments that can be used as part of the outbreak response, in the midst of considerable uncertainty and limited information. COVID-19 presentation is heterogeneous, ranging from mild disease that improves within days to critical disease that can last weeks to over a month and can end in death. While improvement in mortality would provide unquestionable evidence about the clinical significance of a treatment, sample sizes for a study evaluating mortality are large and may be impractical, particularly given a multitude of putative therapies to evaluate. Furthermore, patient states in between “cure” and “death” represent meaningful distinctions. Clinical severity scores have been proposed as an alternative. However, the appropriate summary measure for severity scores has been the subject of debate, particularly given the variable time course of COVID-19. Outcomes measured at fixed time points, such as a comparison of severity scores between treatment and control at day 14, may risk missing the time of clinical benefit. An endpoint such as time to improvement (or recovery) avoids the timing problem. However, some have argued that power losses will result from reducing the ordinal scale to a binary state of “recovered” versus “not recovered.” Methods: We evaluate statistical power for possible trial endpoints for COVID-19 treatment trials using simulation models and data from two recent COVID-19 treatment trials. Results: Power for fixed time-point methods depends heavily on the time selected for evaluation. Time-to-event approaches have reasonable statistical power, even when compared with a fixed time-point method evaluated at the optimal time. Discussion: Time-to-event analysis methods have advantages in the COVID-19 setting, unless the optimal time for evaluating treatment effect is known in advance. Even when the optimal time is known, a time-to-event approach may increase power for interim analyses. © The Author(s) 2020
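    A hedged sketch of the kind of power comparison the abstract describes (not the paper's simulation model or its data): recovery times are simulated, then each simulated trial is analysed once as a binary "recovered by day 14" comparison and once as a time-to-event comparison over the full follow-up. The distributions, effect size, and follow-up are illustrative assumptions, and the log-rank test here comes from the lifelines package.

```python
# Power of a fixed day-14 binary comparison vs a log-rank test of time to recovery.
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test  # assumes lifelines is installed

rng = np.random.default_rng(4)

def one_trial(n=150, follow_up=28, median_ctrl=15.0, hazard_ratio=1.4):
    """Simulate exponential times to recovery; treatment speeds recovery (HR > 1)."""
    rate_c = np.log(2) / median_ctrl
    t_ctrl = rng.exponential(1 / rate_c, n)
    t_trt = rng.exponential(1 / (rate_c * hazard_ratio), n)
    obs_c, obs_t = t_ctrl <= follow_up, t_trt <= follow_up  # censoring at day 28
    # Fixed time point: recovered by day 14, yes/no.
    table = [[(t_trt <= 14).sum(), (t_trt > 14).sum()],
             [(t_ctrl <= 14).sum(), (t_ctrl > 14).sum()]]
    _, p_day14, _, _ = stats.chi2_contingency(table)
    # Time to event: log-rank test over the whole follow-up period.
    p_logrank = logrank_test(np.minimum(t_trt, follow_up), np.minimum(t_ctrl, follow_up),
                             event_observed_A=obs_t, event_observed_B=obs_c).p_value
    return p_day14 < 0.05, p_logrank < 0.05

results = np.array([one_trial() for _ in range(2_000)])
print(f"Power, day-14 binary comparison: {results[:, 0].mean():.2f}")
print(f"Power, log-rank (time to recovery): {results[:, 1].mean():.2f}")
```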

    COMPASS identifies T-cell subsets correlated with clinical outcomes.

    Advances in flow cytometry and other single-cell technologies have enabled high-dimensional, high-throughput measurements of individual cells as well as the interrogation of cell population heterogeneity. However, in many instances, computational tools to analyze the wealth of data generated by these technologies are lacking. Here, we present a computational framework for unbiased combinatorial polyfunctionality analysis of antigen-specific T-cell subsets (COMPASS). COMPASS uses a Bayesian hierarchical framework to model all observed cell subsets and select those most likely to have antigen-specific responses. Cell-subset responses are quantified by posterior probabilities, and human subject-level responses are quantified by two summary statistics that describe the quality of an individual's polyfunctional response and can be correlated directly with clinical outcome. Using three clinical data sets of cytokine production, we demonstrate how COMPASS improves characterization of antigen-specific T cells and reveals cellular 'correlates of protection/immunity' in the RV144 HIV vaccine efficacy trial that are missed by other methods. COMPASS is available as open-source software.
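    A heavily simplified sketch of the kind of subject-level summaries the abstract describes, computed from a matrix of posterior response probabilities (subjects × cytokine subsets). The exact score definitions used by COMPASS may differ, and the data here are synthetic; this is only meant to show the general idea of averaging posterior probabilities across subsets, with or without weighting by the degree of functionality.

```python
# Toy functionality- and polyfunctionality-style summaries from posterior probabilities.
import numpy as np

rng = np.random.default_rng(5)

n_subjects, n_cytokines = 8, 5
# Each subset is a non-empty combination of cytokines; degree = number expressed.
subsets = [tuple(int(b) for b in np.binary_repr(k, n_cytokines)) for k in range(1, 2**n_cytokines)]
degree = np.array([sum(s) for s in subsets])

# Synthetic posterior probabilities that each subject responds in each subset.
posterior = rng.beta(0.5, 3.0, size=(n_subjects, len(subsets)))

# Functionality-score-style summary: average posterior probability of response
# across all subsets (proportion of subsets showing antigen-specific responses).
functionality = posterior.mean(axis=1)

# Polyfunctionality-style summary: weight subsets by their degree of functionality,
# so responses involving more cytokines count for more.
weights = degree / degree.sum()
polyfunctionality = posterior @ weights

for i, (fs, pfs) in enumerate(zip(functionality, polyfunctionality)):
    print(f"subject {i}: FS-like = {fs:.3f}, PFS-like = {pfs:.3f}")
```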