
Randomization does not help much, comparability does

Abstract

Following Fisher, it is widely believed that randomization "relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed." In particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result. Looking for quantitative advice, we study a number of straightforward, mathematically simple models. However, they all demonstrate that the optimism with respect to randomization rests on wishful thinking rather than fact. In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, we extend the reasoning to a number of traditional arguments for and against randomization. This discussion is rather non-technical, and at times even "foundational" (Frequentist vs. Bayesian). However, its conclusion turns out to be quite similar. While randomization's contribution remains questionable, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable.
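The imbalance claim is easy to check by simulation. The sketch below is a minimal illustration, not one of the paper's models: it assumes a single standard-normal nuisance covariate, splits n units at random into two equal groups, and counts how often the absolute standardized mean difference (SMD) between the groups exceeds 0.2, a conventional threshold for noticeable imbalance.

    # Illustrative simulation (an assumption for this page, not the paper's model):
    # how often does pure randomization leave two groups noticeably imbalanced
    # on a single standard-normal covariate?
    import numpy as np

    rng = np.random.default_rng(0)

    def imbalance_rate(n, sims=10_000, threshold=0.2):
        """Fraction of random allocations whose absolute standardized mean
        difference on one N(0, 1) covariate exceeds `threshold`."""
        count = 0
        for _ in range(sims):
            x = rng.standard_normal(n)       # nuisance covariate
            idx = rng.permutation(n)         # random allocation into two halves
            g1, g2 = x[idx[: n // 2]], x[idx[n // 2:]]
            pooled_sd = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
            smd = abs(g1.mean() - g2.mean()) / pooled_sd
            count += smd > threshold
        return count / sims

    for n in (10, 20, 50, 100):
        print(f"n={n:4d}: P(|SMD| > 0.2) ~ {imbalance_rate(n):.2f}")

Under these assumptions the group mean difference has standard deviation of roughly 2/sqrt(n), so for n around 20 the majority of random allocations exceed the 0.2 threshold, and the rate falls only slowly as n grows, which is consistent with the abstract's point that imbalance in small to medium samples is the rule rather than the exception.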
