
    Nonadherence to treatment protocol in published randomised controlled trials: a review.

    This review aimed to ascertain the extent to which nonadherence to treatment protocol is reported and addressed in a cohort of published analyses of randomised controlled trials (RCTs). One hundred publications of RCTs, randomly selected from those published in BMJ, the New England Journal of Medicine, the Journal of the American Medical Association and The Lancet during 2008, were reviewed to determine the extent and nature of reported nonadherence to treatment protocol, and whether statistical methods were used to examine the effect of such nonadherence on both benefit and harms analyses. We also assessed the quality of reporting of treatment protocol nonadherence and of the statistical methods used to investigate it. Nonadherence to treatment protocol was reported in 98 of the 100 trials, but such reporting was often vague or incomplete. Forty-two publications did not state how many participants started their randomised treatment. Reporting of treatment initiation and completeness was judged to be inadequate in 64% of trials with short-term interventions and 89% of trials with long-term interventions. More than half (51) of the 98 trials with treatment protocol nonadherence implemented some statistical method to address the issue, most commonly per protocol analysis (46), though often labelled as intention to treat (ITT) or modified ITT (23 analyses in 22 trials). The composition of the analysis sets for benefit outcomes was not explained in 57% of trials, and 62% of trials that presented harms analyses did not define harms analysis populations. The majority of defined harms analysis populations (18 of 26 trials, 69%) were based on actual treatment received, while the majority of trials with undefined harms analysis populations (31 of 43 trials, 72%) appeared to analyse harms using the ITT approach. Adherence to randomised intervention is poorly considered in the reporting and analysis of published RCTs. The majority of trials are subject to some form of nonadherence to treatment protocol and, though trialists deal with this nonadherence using a variety of statistical methods and analysis populations, they rarely consider the potential for bias introduced. There is a need for increased awareness of more appropriate causal methods to adjust for departures from treatment protocol, as well as guidance on the appropriate analysis population to use for harms outcomes in the presence of such nonadherence.
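
    To make the analysis populations discussed above concrete, the following sketch contrasts intention-to-treat, per protocol and as-treated analysis sets on a toy trial dataset. The data and column names are illustrative assumptions, not drawn from the reviewed trials.

```python
# A toy sketch contrasting the analysis populations discussed above.
# The dataset and all column names are illustrative assumptions only.
import pandas as pd

trial = pd.DataFrame({
    "id":         [1, 2, 3, 4, 5, 6],
    "randomised": ["A", "A", "A", "B", "B", "B"],   # assigned arm
    "received":   ["A", "B", None, "B", "B", "A"],  # treatment actually taken
    "outcome":    [1, 0, 0, 1, 1, 0],
})

# Intention to treat (ITT): analyse everyone as randomised, whatever they received.
itt_means = trial.groupby("randomised")["outcome"].mean()

# Per protocol: keep only participants who adhered to their assigned arm.
pp = trial[trial["received"] == trial["randomised"]]
pp_means = pp.groupby("randomised")["outcome"].mean()

# As treated: group by the treatment actually received (non-initiators drop out).
at_means = trial.dropna(subset=["received"]).groupby("received")["outcome"].mean()

print(itt_means, pp_means, at_means, sep="\n\n")
```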

    A framework for the design, conduct and interpretation of randomised controlled trials in the presence of treatment changes.

    BACKGROUND: When a randomised trial is subject to deviations from randomised treatment, analysis according to intention-to-treat does not estimate two important quantities: relative treatment efficacy and effectiveness in a setting different from that of the trial. Even in trials of a predominantly pragmatic nature, there may be numerous reasons to consider the extent of such deviations from protocol and their impact on analysis. Simple methods such as per-protocol or as-treated analyses, which exclude or censor patients on the basis of their adherence, usually introduce selection and confounding biases. There exist appropriate causal estimation methods that seek to overcome these inherent biases, but they remain relatively unfamiliar and are rarely implemented in trials. METHODS: This paper uses illustrative case studies to demonstrate when it may be of interest to look beyond intention-to-treat analysis for answers to alternative causal research questions. We seek to guide trialists on how to handle treatment changes in the design and conduct of a trial and in planning its analysis; these changes may be planned or unplanned, and may or may not be permitted by the protocol. We highlight issues that must be considered at the trial planning stage relating to the definition of nonadherence and the causal research question of interest, trial design, data collection, monitoring, statistical analysis and sample size. RESULTS AND CONCLUSIONS: During trial planning, trialists should define their causal research questions of interest, anticipate the likely extent of treatment changes and use these to inform trial design, including the extent of data collection and data monitoring. A series of concise recommendations is presented to guide trialists considering causal analyses.
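
    To illustrate one of the simpler causal estimands beyond intention-to-treat, the sketch below computes a complier average causal effect by using randomisation as an instrument (the Wald estimator). The simulated data and the 70% compliance rate are illustrative assumptions; the paper's case studies are not reproduced here.

```python
# A minimal sketch, under simulated assumptions, of estimating efficacy in
# compliers using randomisation as an instrument (Wald/CACE estimator).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
z = rng.integers(0, 2, n)                  # randomised arm (the instrument)
complier = rng.random(n) < 0.7             # 70% would take treatment if assigned
d = z & complier                           # treatment actually received
y = 0.3 * d + rng.normal(0, 1, n)          # true effect of 0.3 in the treated

itt = y[z == 1].mean() - y[z == 0].mean()        # intention-to-treat effect
uptake = d[z == 1].mean() - d[z == 0].mean()     # effect of assignment on receipt
cace = itt / uptake                              # complier average causal effect
print(f"ITT = {itt:.3f}, uptake difference = {uptake:.3f}, CACE = {cace:.3f}")
```

    The ITT estimate is diluted toward the null by the 30% who never start treatment; dividing by the uptake difference recovers the effect in compliers, under the standard instrumental-variable assumptions.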

    Modelling departure from randomised treatment in randomised controlled trials with survival outcomes

    Randomised controlled trials are considered the gold standard study design, as random treatment assignment provides balance in prognosis between treatment arms and protects against selection bias. When trials are subject to departures from randomised treatment, however, simple but naïve statistical methods that purport to estimate treatment efficacy, such as per protocol or as treated analyses, fail to respect this randomisation balance and typically introduce selection bias. This bias occurs because departure from randomised treatment is often clinically indicated, resulting in systematic differences between patients who do and do not adhere to their assigned intervention. There exist more appropriate statistical methods to adjust for departure from randomised treatment but, as demonstrated by a review of published trials, these are rarely employed, primarily due to their complexity and unfamiliarity. The focus of this research has been to explore, explain, demonstrate and compare the use of causal methodologies in the analysis of trials, in order to make the available, but somewhat technical, statistical methods for adjusting for treatment deviations more accessible and comprehensible to non-specialist analysts. An overview of such methods is presented, intended as an aid to researchers new to the field of causal inference, with an emphasis on the practical considerations necessary to ensure appropriate implementation of techniques, complemented by a number of guidance tools summarising the clinical and statistical considerations involved in carrying out such analyses. Practical demonstrations of causal analysis techniques are then presented, with existing methods extended and adapted to allow for complexities arising from the trial scenarios. A particular application from epilepsy demonstrates, for a complicated time-to-event outcome, the impact of various statistical factors when adjusting for skewed time-varying confounders and for different reasons for treatment changes, including the choice of model (pooled logistic regression versus Cox models for inverse probability of censoring weighting, compared with a rank-preserving structural failure time model), the time interval (for creating panel data for time-varying confounders and the outcome), the confidence interval estimation method (standard versus bootstrapped) and considerations regarding the use of spline variables to estimate underlying risk in pooled logistic regression. In this example, the structural failure time model is severely limited by its restriction on the types of treatment changes that can be adjusted for; as a result, the majority of treatment changes must be censored, introducing bias similar to that of a per protocol analysis. With inverse probability weighting adjustment, as more treatment changes and confounders are accounted for, estimated treatment effects move further from the null. In general, Cox models were more susceptible to changes in modelling factors (confidence interval estimation, time interval and confounder adjustment) and displayed greater fluctuations in treatment effect than the corresponding pooled logistic regression models. This apparent greater stability of logistic regression, even when subject to severe overfitting, represents a major advantage over Cox modelling in this context, countering the inherent complications of fitting spline variables.
This novel application of complex methods in a complicated trial scenario provides a useful example for discussing typical analysis issues and limitations, as it addresses challenges likely to be common in trials with nonadherence problems. Recommendations are provided for analysts considering which of these analysis methods to apply in a given trial setting.
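
    As a concrete illustration of the inverse probability of censoring weighting approach compared in this work, the sketch below fits a stabilised-weight pooled logistic regression on simulated panel data. The data-generating model, the single confounder L and all variable names are assumptions made for the example; this is not the epilepsy analysis itself.

```python
# A minimal sketch of IPCW pooled logistic regression on simulated panel data:
# patients are censored at treatment deviation, deviation depends on a
# time-varying confounder L, and stabilised weights restore comparability.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(500):
    arm = int(rng.integers(0, 2))
    for t in range(10):                              # discrete follow-up intervals
        L = rng.normal(arm * 0.5, 1.0)               # time-varying confounder
        p_dev = 1 / (1 + np.exp(3 - 0.8 * L))        # deviation more likely when L is high
        dev = rng.random() < p_dev
        p_event = 1 / (1 + np.exp(4 + 0.5 * arm - 0.6 * L))
        event = (not dev) and (rng.random() < p_event)
        rows.append(dict(pid=pid, t=t, arm=arm, L=L,
                         dev=int(dev), event=int(event)))
        if dev or event:
            break
panel = pd.DataFrame(rows)

# Denominator model: probability of deviating in an interval given L and time.
denom = smf.logit("dev ~ L + t", data=panel).fit(disp=0)
# Numerator model for stabilised weights: time only, no confounders.
numer = smf.logit("dev ~ t", data=panel).fit(disp=0)

# Interval weight = P(stay | t) / P(stay | L, t), cumulated within subject.
panel["w"] = (1 - numer.predict(panel)) / (1 - denom.predict(panel))
panel["ipcw"] = panel.groupby("pid")["w"].cumprod()

# Weighted pooled logistic regression for the outcome on uncensored intervals.
obs = panel[panel["dev"] == 0]
fit = smf.glm("event ~ arm + t", data=obs, family=sm.families.Binomial(),
              freq_weights=np.asarray(obs["ipcw"])).fit()
print(fit.params)
```

    In a full analysis the underlying risk over time would typically be modelled more flexibly (for example with spline terms in t, as discussed above) and confidence intervals obtained by bootstrapping, since the weights make the usual model-based standard errors unreliable.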

    Analysis of responder-based endpoints: improving power through utilising continuous components

    Background: Clinical trials and other studies commonly assess the effectiveness of an intervention through responder-based endpoints. These classify patients according to whether they meet a number of criteria, which often involve continuous variables categorised as being above or below a threshold. The proportion of patients who are responders is estimated and, where relevant, compared between groups. An alternative approach, the augmented binary method, keeps the definition of the endpoint the same but utilises the information contained in the continuous component to increase power considerably (equivalent to increasing the sample size by more than 30%). In this article we summarise the method and investigate the variety of clinical conditions using endpoints to which it could be applied. Methods: We reviewed a database of core outcome sets (COSs) covering physiological and mortality endpoints recommended for collection in clinical trials of different disorders, and identified responder-based endpoints for which the augmented binary method would be useful for increasing power. Results: Of the 287 COSs reviewed, we identified 67 new clinical areas using endpoints that would be more efficiently analysed with the augmented binary method. Clinical areas with particularly high numbers were rheumatology (11 clinical disorders identified), non-solid tumour oncology (10), neurology (9) and cardiovascular disease (8). Conclusions: The augmented binary method can potentially provide large benefits across a wide array of clinical areas. Further methodological development is needed to account for some types of endpoints.
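
    The following sketch conveys the core idea of the augmented binary method on simulated change scores: rather than dichotomising and comparing proportions, the continuous outcome is modelled and the response probability is derived from the fitted distribution. The threshold, data and normal model are illustrative assumptions; the full method also handles composite endpoints and uses delta-method inference.

```python
# A minimal sketch of the idea behind the augmented binary method.
# All data, the threshold tau and the normal model are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tau = 0.0                                   # response threshold on the change score
control = rng.normal(-0.2, 1.0, 80)         # continuous change scores per arm
treated = rng.normal(0.3, 1.0, 80)

# Naive binary analysis: dichotomise, then compare response proportions.
p_c, p_t = np.mean(control > tau), np.mean(treated > tau)

# Augmented analysis: estimate P(Y > tau) from a fitted normal model per arm,
# retaining the information discarded by dichotomisation.
def model_based_response(y, tau):
    mu, sd = y.mean(), y.std(ddof=1)
    return 1 - stats.norm.cdf(tau, loc=mu, scale=sd)

q_c = model_based_response(control, tau)
q_t = model_based_response(treated, tau)
print(f"binary:    response difference = {p_t - p_c:.3f}")
print(f"augmented: response difference = {q_t - q_c:.3f}")
```

    Both estimators target the same response-rate difference, but the model-based version has a markedly smaller variance in repeated sampling, which is the source of the power gain described above.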

    A systematic review describes models for recruitment prediction at the design stage of a clinical trial

    Objective: Patient recruitment in clinical trials is challenging, and failure to recruit to time and target sample size is common. This may be caused by unanticipated problems or by overestimation of the recruitment rate. This study is a systematic review of statistical models for predicting recruitment at the design stage of clinical trials. Study Design and Setting: The Online Resource for Recruitment research in Clinical triAls database was searched to identify articles published between 2008 and 2016. Articles published before 2008 were identified from a relevant systematic review, and a Google search was used to find potential methods in the grey literature. Results: Thirteen eligible articles were identified, of which 11 focused on stochastic approaches, one on deterministic models, and one included both stochastic and deterministic methods. Models varied considerably in the factors included and in their complexity. Key aspects included their ability to condition on time, whether they used average or center-specific recruitment rates, and their assumptions around center initiation rates. The lack of flexibility of some models restricts their implementation. Conclusion: Deterministic models require the specification of few parameters and are easy to implement, but are likely to be unrealistic. Stochastic models require greater parameter specification which, along with their greater complexity, may be a barrier to their implementation.
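
    As an illustration of the stochastic approaches the review describes, the sketch below simulates accrual under a Poisson-gamma model, in which each center's recruitment rate is drawn from a gamma distribution and weekly accrual is Poisson. All parameter values are illustrative assumptions.

```python
# A minimal sketch of a Poisson-gamma recruitment model: per-center weekly
# rates are gamma-distributed, and completion time is found by simulation.
# All parameters (centers, target, gamma shape/rate) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_centers, target = 20, 400              # number of centers, target sample size

def simulate_weeks_to_target(alpha=2.0, beta=2.0, n_sims=2000):
    """Simulate the number of weeks needed to reach the recruitment target."""
    weeks_needed = np.empty(n_sims)
    for i in range(n_sims):
        # Draw each center's weekly recruitment rate from Gamma(alpha, 1/beta).
        rates = rng.gamma(alpha, 1 / beta, n_centers)
        recruited, week = 0, 0
        while recruited < target:
            recruited += rng.poisson(rates).sum()   # one week of accrual
            week += 1
        weeks_needed[i] = week
    return weeks_needed

weeks = simulate_weeks_to_target()
print(f"median completion: week {np.median(weeks):.0f}")
print(f"P(target met within 26 weeks) = {(weeks <= 26).mean():.2f}")
```

    A deterministic model would instead fix a single average rate and report one completion date; the stochastic version yields a full distribution of completion times, at the cost of having to specify and justify the gamma parameters.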

    Detect and Classify -- Joint Span Detection and Classification for Health Outcomes

    A health outcome is a measurement or observation used to capture and assess the effect of a treatment. Automatic detection of health outcomes from text would undoubtedly speed up access to the evidence needed for healthcare decision making. Prior work on outcome detection has modelled the task as either (a) a sequence labelling task, where the goal is to detect which text spans describe health outcomes, or (b) a classification task, where the goal is to classify a text into a pre-defined set of categories depending on an outcome that is mentioned somewhere in it. However, this decoupling of span detection and classification is problematic from a modelling perspective and ignores the global structural correspondences between sentence-level and word-level information present in a given text. To address this, we propose a method that uses both word-level and sentence-level information to perform outcome span detection and outcome type classification simultaneously. In addition to injecting contextual information into the hidden vectors, we use label attention to appropriately weight both word-level and sentence-level information. Experimental results on several benchmark datasets for health outcome detection show that our proposed method consistently outperforms decoupled methods, achieving competitive results.
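
    A minimal sketch of such a joint architecture is given below: a shared encoder feeds both a token-level span-tagging head and a label-attention head that scores outcome types at the sentence level. The dimensions, toy inputs and BIO tagging scheme are illustrative assumptions rather than the paper's exact model.

```python
# A toy joint span-detection and type-classification model in PyTorch.
# A shared BiLSTM encoder feeds (a) a per-token B/I/O tagging head and
# (b) a label-attention head producing one score per outcome type.
import torch
import torch.nn as nn

class JointOutcomeModel(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_tags=3, n_types=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * dim, n_tags)        # word-level B/I/O tags
        self.label_queries = nn.Parameter(torch.randn(n_types, 2 * dim))
        self.type_head = nn.Linear(2 * dim, 1)            # one score per type

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))           # (B, T, 2*dim)
        tag_logits = self.tag_head(h)                     # span-detection task
        # Label attention: each outcome type attends over the word states,
        # giving a label-specific sentence representation.
        attn = torch.softmax(self.label_queries @ h.transpose(1, 2), dim=-1)
        label_repr = attn @ h                             # (B, n_types, 2*dim)
        type_logits = self.type_head(label_repr).squeeze(-1)   # sentence task
        return tag_logits, type_logits

model = JointOutcomeModel()
tokens = torch.randint(0, 1000, (2, 12))                  # toy batch of token ids
tag_logits, type_logits = model(tokens)
print(tag_logits.shape, type_logits.shape)                # (2, 12, 3) and (2, 5)
```

    Training such a model would sum a token-level cross-entropy over the span tags with a multi-label loss over the type logits, so that both tasks shape the shared encoder, which is the benefit of the joint formulation over decoupled pipelines.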