Exact testing with random permutations
When permutation methods are used in practice, a limited number of random
permutations is often used to decrease the computational burden. However,
most theoretical literature assumes that the whole permutation group is used,
and methods based on random permutations tend to be seen as approximate. The
literature on exact testing with random permutations is very limited, and only
recently was a thorough proof of exactness given. In this paper we provide an
alternative proof, viewing the test as a "conditional Monte Carlo test", as it
has been called in the literature. We also provide extensions of the result.
Importantly, our results can be used to prove properties of various multiple
testing procedures based on random permutations.
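The exact Monte Carlo construction described above can be sketched in a few lines. The key point is that the observed statistic is counted as part of the reference set, so the p-value (1 + #{permuted statistics ≥ observed}) / (B + 1) is exactly valid under the null, not merely approximate. This is an illustrative sketch, not the paper's own code; the function name and the difference-in-means statistic are choices made here.

```python
import numpy as np

def mc_permutation_test(x, y, n_perm=999, rng=None):
    """Exact Monte Carlo permutation test for a difference in means.

    Pools the two samples, draws random relabelings, and counts the
    observed statistic itself in the reference set, so the p-value
    (1 + #{T_perm >= T_obs}) / (n_perm + 1) is exactly valid under H0.
    (Illustrative sketch; statistic and names are chosen for this example.)
    """
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    n = len(x)
    t_obs = np.mean(pooled[:n]) - np.mean(pooled[n:])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        t = np.mean(perm[:n]) - np.mean(perm[n:])
        if t >= t_obs:  # one-sided comparison against the observed value
            count += 1
    return (1 + count) / (n_perm + 1)
```

Note the "+1" in numerator and denominator: dropping it yields the approximate test that the theoretical literature usually has in mind, and the p-value is then no longer exactly valid.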
On the term "randomization test"
There exists no consensus on the meaning of the term "randomization test".
Contradictory uses of the term lead to confusion, misunderstandings and
even invalid data analyses. As we point out, a main source of the confusion
is that the term was not explicitly defined when it was first used in the
1930s. Later authors made clear proposals to reach a consensus regarding the
term, resulting in some level of agreement around the 1970s. However, in
the last few decades the term has often been used in ways that contradict
these proposals. This paper provides an overview of the history of the term
itself, for the first time tracing it back to 1937. This will hopefully lead
to more agreement on terminology and less confusion about the related
fundamental concepts.
More Efficient Exact Group-Invariance Testing: using a Representative Subgroup
Non-parametric tests based on permutations, rotations or sign-flipping are
examples of group-invariance tests. Such tests assess invariance of the null
distribution under a set of transformations that has a group structure, in the
algebraic sense. These groups are often huge, which makes it computationally
infeasible to test using the entire group. Hence, it is standard practice to
test using a randomly sampled set of transformations from the group. This
random sample still needs to be substantial to obtain good power and
replicability. We improve upon this standard practice by using a well-designed
subgroup of transformations instead of a random sample. The resulting
subgroup-invariance test is still exact, as invariance under a group implies
invariance under its subgroups.
We illustrate this in a generalized location model and obtain more powerful
tests based on the same number of transformations. In particular, we show that
a subgroup-invariance test is consistent for lower signal-to-noise ratios than
a test based on a random sample. For the special case of a normal location
model and a particular design of the subgroup, we show that the power
improvement is equivalent to the power difference between a Monte Carlo t-test
and a Monte Carlo Z-test.
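The subgroup idea can be made concrete with sign-flipping. The full sign-flipping group on n observations has 2^n elements; a genuine subgroup is obtained by flipping disjoint blocks of observations, since compositions of block flips are again block flips. Invariance under the full group implies invariance under this subgroup, so the test stays exact. This is a minimal sketch with a naive block design; the paper's "representative" subgroups are constructed more carefully, and the function name is invented here.

```python
import itertools
import numpy as np

def subgroup_signflip_test(x, n_blocks=8):
    """Exact sign-flip test using a fixed subgroup instead of random flips.

    The subgroup is generated by flips of n_blocks disjoint blocks; its
    2**n_blocks elements are closed under composition, so the test is
    exact whenever the data are sign-symmetric under the full group.
    (Illustrative sketch; block flips are a naive subgroup design.)
    """
    x = np.asarray(x, dtype=float)
    blocks = np.array_split(np.arange(len(x)), n_blocks)
    t_obs = x.sum()
    count = 0
    total = 0
    # Enumerate the whole subgroup: every combination of block flips.
    for signs in itertools.product([1, -1], repeat=n_blocks):
        flipped = x.copy()
        for s, idx in zip(signs, blocks):
            flipped[idx] *= s
        total += 1
        if flipped.sum() >= t_obs:
            count += 1
    # The identity is included, so the p-value is exact and >= 1/2**n_blocks.
    return count / total
```

Because the subgroup is fixed rather than randomly sampled, the same data always yield the same p-value, which also removes the replicability issue of random transformations.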
Only Closed Testing Procedures are Admissible for Controlling False Discovery Proportions
We consider the class of all multiple testing methods controlling tail
probabilities of the false discovery proportion, either for one random set or
simultaneously for many such sets. This class encompasses methods controlling
familywise error rate, generalized familywise error rate, false discovery
exceedance, joint error rate, simultaneous control of all false discovery
proportions, and others, as well as seemingly unrelated methods such as gene
set testing in genomics and cluster inference methods in neuroimaging. We show
that all such methods are either equivalent to a closed testing method, or are
uniformly improved by one. Moreover, we show that a closed testing method is
admissible as a method controlling tail probabilities of false discovery
proportions if and only if all its local tests are admissible. This implies
that, when designing such methods, it is sufficient to restrict attention to
closed testing methods only. We demonstrate the practical usefulness of this
design principle by constructing a uniform improvement of a recently proposed
method.
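The closed testing principle referred to above is simple to state: an elementary hypothesis is rejected if and only if every intersection hypothesis containing it is rejected by its local test. The sketch below uses Bonferroni local tests, in which case closed testing is known to coincide with Holm's method; it enumerates all 2^m − 1 intersections, so it is only practical for small m. The function name and the choice of local test are made up for this illustration.

```python
from itertools import combinations

def closed_testing(pvals, alpha=0.05):
    """Closed testing with Bonferroni local tests (a minimal sketch).

    H_i is rejected iff every intersection hypothesis containing it is
    rejected by its local test. With Bonferroni local tests this
    coincides with Holm's step-down method. Exponential in len(pvals),
    so only suitable for small problems.
    """
    m = len(pvals)
    idx = range(m)

    def local_reject(S):
        # Bonferroni local test for the intersection of {H_i : i in S}.
        return min(pvals[i] for i in S) <= alpha / len(S)

    rejected = []
    for i in idx:
        supersets = (S for r in range(1, m + 1)
                     for S in combinations(idx, r) if i in S)
        if all(local_reject(S) for S in supersets):
            rejected.append(i)
    return rejected
```

Swapping in a more powerful local test (e.g. Simes) yields a different, possibly uniformly better procedure, which is exactly the design freedom the admissibility result above is about.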
Robust testing in generalized linear models by sign-flipping score contributions
Generalized linear models are often misspecified due to overdispersion,
heteroscedasticity and ignored nuisance variables. Existing quasi-likelihood
methods for testing in misspecified models often do not provide satisfactory
type-I error rate control. We provide a novel semi-parametric test, based on
sign-flipping individual score contributions. The tested parameter is allowed
to be multi-dimensional and even high-dimensional. Our test is often robust
against the mentioned forms of misspecification and provides better type-I
error control than its competitors. When nuisance parameters are estimated, our
basic test becomes conservative. We show how to take nuisance estimation into
account to obtain an asymptotically exact test. Our proposed test is
asymptotically equivalent to its parametric counterpart.
Comment: To appear in Journal of the Royal Statistical Society: Series B
(Methodology). Early view version (2020).
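The core mechanism can be illustrated in the simplest case: testing whether a covariate has any effect, with an intercept-only null model. The score contribution of observation i for the slope is s_i = x_i (y_i − ȳ); under the null these are (asymptotically) sign-symmetric, so the observed |Σ s_i| is compared to its distribution under random sign flips. This sketch omits the paper's correction for nuisance-parameter estimation, without which the basic test is conservative; the function name is chosen here.

```python
import numpy as np

def signflip_score_test(x, y, n_flips=999, rng=None):
    """Sign-flipping score test sketch for H0: no effect of x on y.

    Score contributions under an intercept-only null model are
    s_i = x_i * (y_i - mean(y)). Compare |sum s_i| with its
    distribution under random sign flips of the s_i.
    (Simplified sketch: ignores the nuisance-estimation correction.)
    """
    rng = np.random.default_rng(rng)
    s = np.asarray(x, float) * (np.asarray(y, float) - np.mean(y))
    t_obs = abs(s.sum())
    count = 0
    for _ in range(n_flips):
        signs = rng.choice([-1, 1], size=len(s))
        if abs((signs * s).sum()) >= t_obs:
            count += 1
    return (1 + count) / (n_flips + 1)
```

Because only the signs of individual score contributions are randomized, the procedure does not need the full likelihood to be correctly specified, which is the source of the robustness to overdispersion and heteroscedasticity described above.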
More efficient exact group invariance testing: using a representative subgroup
We consider testing invariance of a distribution under an algebraic group of transformations, such as permutations or sign flips. As such groups are typically huge, tests based on the full group are often computationally infeasible. Hence, it is standard practice to use a random subset of transformations. We improve upon this by replacing the random subset with a strategically chosen, fixed subgroup of transformations. In a generalized location model, we show that the resulting tests are often consistent for lower signal-to-noise ratios. Moreover, we establish an analogy between the power improvement and switching from a t-test to a Z-test under normality. Importantly, in permutation-based multiple testing, the efficiency gain with our approach can be huge, since we attain the same power with many fewer permutations.