BACKGROUND:
While the randomised controlled trial (RCT) is generally regarded as the design of
choice for assessing the effects of health care, within the social sciences there is
considerable debate about the relative suitability of RCTs and non-randomised
studies (NRSs) for evaluating public policy interventions.
OBJECTIVES:
To determine whether RCTs lead to the same effect size and variance as NRSs of
similar policy interventions; and whether these findings can be explained by other
factors associated with the interventions or their evaluation.
METHODS:
Analyses of methodological studies, empirical reviews, and individual health and
social services studies investigated the relationship between randomisation and
effect size of policy interventions by:
1) Comparing controlled trials that are identical in all respects other than the use of
randomisation by 'breaking' the randomisation in a trial to create non-randomised
trials (re-sampling studies).
2) Comparing randomised and non-randomised arms of controlled trials mounted
simultaneously in the field (replication studies).
3) Comparing similar controlled trials drawn from systematic reviews that include
both randomised and non-randomised studies (structured narrative reviews and
sensitivity analyses within meta-analyses).
4) Investigating associations between randomisation and effect size using a pool of
more diverse RCTs and NRSs within broadly similar areas (meta-epidemiology).
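The re-sampling idea in step 1 can be sketched in code. The sketch below is illustrative only: it assumes each trial is represented as a pair of outcome lists, uses the standardised mean difference as the effect size, and "breaks" randomisation by pairing the intervention arm of one trial with the control arm of a different trial; all names and data structures are assumptions, not part of the review.

```python
import statistics
from itertools import permutations

def effect_size(treated, control):
    """Standardised mean difference between two outcome samples
    (pooled population SD in the denominator)."""
    pooled = treated + control
    return (statistics.mean(treated) - statistics.mean(control)) / statistics.pstdev(pooled)

def broken_randomisation_effects(trials):
    """trials: list of (treated_outcomes, control_outcomes) pairs, one per RCT.
    Returns the effect size for every cross-trial pairing of a treated arm
    from one trial with the control arm of a *different* trial, i.e. the
    non-randomised comparisons created by 'breaking' randomisation."""
    results = []
    for a, b in permutations(trials, 2):
        # a's treated arm vs b's control arm: the arms no longer share
        # a common randomisation, mimicking a non-randomised study.
        results.append(effect_size(a[0], b[1]))
    return results
```

Comparing the distribution returned by `broken_randomisation_effects` with the within-trial (randomised) effect sizes is one way to ask whether removing randomisation alone shifts the estimated effect.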
RESULTS:
Prior methodological reviews and meta-analyses of existing reviews comparing
effects from RCTs and NRSs suggested that effect sizes from the two designs
may differ in some circumstances, and that these differences may be
associated with factors confounded with design.
Re-sampling studies offer no evidence that the absence of randomisation
directly influences the effect size of policy interventions in a systematic
way. No consistent explanation was found for the association between
randomisation and changes in the effect sizes of policy interventions in
field trials.