
    Assessing subgroup effects with binary data: can the use of different effect measures lead to different conclusions?

    BACKGROUND: In order to use the results of a randomised trial, it is necessary to understand whether the overall observed benefit or harm applies to all individuals, or whether some subgroups receive more benefit or harm than others. This decision is commonly guided by a statistical test for interaction. However, with binary outcomes, different effect measures yield different interaction tests. For example, the UK Hip trial explored the impact of ultrasound of infants with suspected hip dysplasia on the occurrence of subsequent hip treatment. Risk ratios were similar between subgroups defined by level of clinical suspicion (P = 0.14), but odds ratios and risk differences differed strongly between subgroups (P < 0.001). DISCUSSION: Interaction tests on different effect measures differ because they test different null hypotheses. A graphical technique demonstrates that the difference arises when the subgroup risks differ markedly. We consider that the test of interaction acts as a check on the applicability of the trial results to all included subgroups. The test of interaction should therefore be applied to the effect measure which is least likely a priori to exhibit an interaction. We give examples of how this might be done. SUMMARY: The choice of interaction test is especially important when the risk of a binary outcome varies widely between subgroups. The interaction test should be pre-specified and should be guided by clinical knowledge.
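
    The scale-dependence of the interaction test can be made concrete with a small worked example. The sketch below uses invented 2x2 counts (not the UK Hip trial data) and simple Wald tests comparing two subgroups on the log risk ratio, log odds ratio and risk difference scales; when baseline risks differ markedly, similar risk ratios can coexist with clearly different odds ratios and risk differences.

```python
import math
from statistics import NormalDist

def two_by_two(events_t, n_t, events_c, n_c):
    """Effect estimates and standard errors for one subgroup's 2x2 table."""
    p_t, p_c = events_t / n_t, events_c / n_c
    log_rr = math.log(p_t / p_c)
    se_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    log_or = math.log((events_t * (n_c - events_c)) / (events_c * (n_t - events_t)))
    se_or = math.sqrt(1/events_t + 1/(n_t - events_t) + 1/events_c + 1/(n_c - events_c))
    rd = p_t - p_c
    se_rd = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return {"log RR": (log_rr, se_rr), "log OR": (log_or, se_or), "RD": (rd, se_rd)}

def interaction_p(est_a, se_a, est_b, se_b):
    """Wald test of the null hypothesis that the effect is equal in the two subgroups."""
    z = (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical subgroups with very different baseline risks but the same risk ratio (0.5)
low_risk = two_by_two(events_t=10, n_t=200, events_c=20, n_c=200)
high_risk = two_by_two(events_t=60, n_t=200, events_c=120, n_c=200)

for measure in ("log RR", "log OR", "RD"):
    (ea, sa), (eb, sb) = low_risk[measure], high_risk[measure]
    print(f"{measure}: interaction P = {interaction_p(ea, sa, eb, sb):.3f}")
```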

    Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects.

    OBJECTIVES: This study aims to (1) elucidate whether the Hawthorne effect exists, (2) explore the conditions under which it operates, and (3) estimate the size of any such effect. STUDY DESIGN AND SETTING: This systematic review summarizes and evaluates the strength of available evidence on the Hawthorne effect. An inclusive definition was adopted: any form of research artifact on behavior reported under this label, without co-interventions. RESULTS: Nineteen purposively designed studies were included, providing quantitative data on the size of the effect in eight randomized controlled trials, five quasi-experimental studies, and six observational evaluations in which participants reported on their own behavior by answering questions, or were directly observed, while aware of being studied. Although all but one study was undertaken within the health sciences, study methods, contexts, and findings were highly heterogeneous. Most studies reported some evidence of an effect, although significant biases are judged likely because of the complexity of the evaluation object. CONCLUSION: Consequences of research participation for the behaviors being investigated do exist, although little can be securely known about the conditions under which they operate, their mechanisms of effect, or their magnitudes. New concepts are needed to guide empirical studies.

    Statistical methods for non-adherence in non-inferiority trials: useful and used? A systematic review.

    BACKGROUND: In non-inferiority trials with non-adherence to interventions (or non-compliance), intention-to-treat and per-protocol analyses are often performed; however, non-random non-adherence generally biases these estimates of efficacy. OBJECTIVE: To identify statistical methods that adjust for the impact of non-adherence and thus estimate the causal effects of experimental interventions in non-inferiority trials. DESIGN: A systematic review was conducted by searching the Ovid MEDLINE database (31 December 2020) to identify (1) randomised trials with a primary analysis for non-inferiority that applied (or planned to apply) statistical methods to account for the impact of non-adherence to interventions, and (2) methodology papers that described such statistical methods and included a non-inferiority trial application. OUTCOMES: The statistical methods identified, their impacts on non-inferiority conclusions, and their advantages/disadvantages. RESULTS: A total of 24 papers were included (4 protocols, 13 results papers and 7 methodology papers) reporting relevant methods on 26 occasions. The most common were instrumental variable approaches (n=9), including observed adherence as a covariate within a regression model (n=3), and modelling adherence as a time-varying covariate in a time-to-event analysis (n=3). Other methods included rank preserving structural failure time models and inverse-probability-of-treatment weighting. The methods identified in protocols and results papers were more commonly specified as sensitivity analyses (n=13) than primary analyses (n=3). Twelve results papers included an alternative analysis of the same outcome; conclusions regarding non-inferiority were in agreement on six occasions and could not be compared on six occasions (different measures of effect or results not provided in full). CONCLUSIONS: Available statistical methods which attempt to account for the impact of non-adherence to interventions were used infrequently. Therefore, firm inferences about their influence on non-inferiority conclusions could not be drawn. Since intention-to-treat and per-protocol analyses do not guarantee unbiased conclusions regarding non-inferiority, the methods identified should be considered for use in sensitivity analyses. PROSPERO REGISTRATION NUMBER: CRD42020177458
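
    To make the most frequently identified approach concrete, the sketch below applies a simple instrumental variable (Wald) estimator, with randomised assignment as the instrument for treatment receipt, to hypothetical summary counts on the risk-difference scale. The counts and the non-inferiority margin are illustrative assumptions and are not drawn from any trial in the review.

```python
# Hypothetical summary data for a non-inferiority trial with imperfect adherence
n_exp, n_ctl = 500, 500
events_exp, events_ctl = 60, 55          # primary-outcome events by randomised arm
adherent_exp, adherent_ctl = 430, 490    # participants who received their allocated treatment

# Intention-to-treat risk difference (experimental minus control)
itt_rd = events_exp / n_exp - events_ctl / n_ctl

# Instrumental variable (Wald) estimate: randomisation is the instrument,
# receipt of the experimental intervention is the exposure.
received_if_exp = adherent_exp / n_exp        # proportion receiving it when assigned to it
received_if_ctl = 1 - adherent_ctl / n_ctl    # proportion receiving it despite control assignment
iv_rd = itt_rd / (received_if_exp - received_if_ctl)

margin = 0.05  # illustrative non-inferiority margin on the risk-difference scale
print(f"ITT risk difference: {itt_rd:.3f}")
print(f"IV-adjusted risk difference: {iv_rd:.3f}")
print("point estimate within margin" if iv_rd < margin else "point estimate exceeds margin")
# A real analysis would judge non-inferiority against the confidence interval, not the point estimate.
```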

    Contamination in trials of educational interventions

    Objectives: To consider the effects of contamination on the magnitude and statistical significance (or precision) of the estimated effect of an educational intervention, to investigate the mechanisms of contamination, and to consider how contamination can be avoided. Data sources: Major electronic databases were searched up to May 2005. Methods: An exploratory literature search was conducted. The results of trials included in previous relevant systematic reviews were then analysed to see whether studies that avoided contamination resulted in larger effect estimates than those that did not. Experts’ opinions were elicited about factors more or less likely to lead to contamination. We simulated contamination processes to compare contamination biases between cluster and individually randomised trials. Statistical adjustment was made for contamination using Complier Average Causal Effect analytic methods, using published and simulated data. The bias and power of cluster and individually randomised trials were compared, as were Complier Average Causal Effect, intention-to-treat and per protocol methods of analysis. Results: Few relevant studies quantified contamination. Experts largely agreed on where contamination was more or less likely. Simulation of contamination processes showed that, with various combinations of timing, intensity and baseline dependence of contamination, cluster randomised trials might produce biases greater than or similar to those of individually randomised trials. Complier Average Causal Effect analyses produced results that were less biased than intention-to-treat or per protocol analyses. The simulations also showed that individually randomised trials would in most situations be more powerful than cluster randomised trials despite contamination. Conclusions: The probability, nature and process of contamination should be considered when designing and analysing controlled trials of educational interventions in health. Cluster randomisation may or may not be appropriate and should not be uncritically assumed always to be a solution. Complier Average Causal Effect models are an appropriate way to adjust for contamination if it can be measured. When conducting such trials in future, it is a priority to report the extent, nature and effects of contamination. We are grateful to the National Health Service Research and Development National Coordinating Centre for Research Methodology for funding this research.
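
    The Complier Average Causal Effect adjustment recommended above can be illustrated with a toy simulation in which a fixed proportion of control participants obtain the intervention anyway. All quantities below (effect size, contamination rate, sample size, outcome model) are illustrative assumptions rather than the report's actual simulation design.

```python
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 1.0    # effect of actually receiving the intervention (continuous outcome)
CONTAMINATION = 0.3  # proportion of controls who obtain the intervention anyway
N_PER_ARM = 5000

def simulate_arm(assigned_intervention):
    outcomes, received = [], []
    for _ in range(N_PER_ARM):
        if assigned_intervention:
            got = True  # simplifying assumption: full adherence in the intervention arm
        else:
            got = random.random() < CONTAMINATION  # contaminated control participants
        outcomes.append(TRUE_EFFECT * got + random.gauss(0, 1))
        received.append(got)
    return outcomes, received

y1, r1 = simulate_arm(True)
y0, r0 = simulate_arm(False)

itt = mean(y1) - mean(y0)                                  # diluted by contamination
receipt_gap = mean(map(float, r1)) - mean(map(float, r0))  # difference in intervention receipt
cace = itt / receipt_gap                                   # Complier Average Causal Effect estimate

# Per protocol: intervention arm versus uncontaminated controls only
# (here contamination is random, so per protocol happens to be roughly unbiased;
#  non-random contamination would bias it)
pp = mean(y1) - mean(y for y, g in zip(y0, r0) if not g)

print(f"ITT:  {itt:.2f}")
print(f"PP:   {pp:.2f}")
print(f"CACE: {cace:.2f}  (true effect of receipt = {TRUE_EFFECT})")
```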

    Marketing and clinical trials: a case study.

    BACKGROUND: Publicly funded clinical trials require a substantial commitment of time and money. To ensure that sufficient numbers of patients are recruited, it is essential that trials address important questions in a rigorous manner and are managed well, adopting effective marketing strategies. METHODS: Using methods of analysis drawn from management studies, this paper presents a structured assessment framework, or reference model, derived from a case analysis of the MRC's CRASH trial, of 12 factors that may affect the success of the marketing and sales activities associated with clinical trials. RESULTS: The case study demonstrates that trials need various categories of people to buy in; hence, to be successful, trialists must embrace marketing strategies to some extent. CONCLUSION: The performance of future clinical trials could be enhanced if trialists routinely considered these factors.