Recent advances in Post-Selection Inference have shown that conditional
testing is relevant and tractable in high dimensions. In the Gaussian linear
model, subsequent works have derived unconditional test statistics such as the
Kac-Rice pivot for general penalized problems. A prominent offspring of this
breakthrough for testing the global null is the spacing test, which accounts
for the relative separation between the first two knots of the celebrated
least-angle regression (LARS) algorithm. However, no results have been shown
regarding the distribution of these test statistics under the alternative. For
the first time, this paper addresses this important issue for the spacing test
and shows that it is unconditionally unbiased. Furthermore, we provide the
first extension of the spacing test to the setting of unknown noise variance.
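(For context, a minimal sketch of the statistic in question, under the standard
assumptions of unit-norm predictors and known unit noise variance; the notation
λ₁ ≥ λ₂ for the first two LARS knots is introduced here for illustration only.
Under the global null, the pivot

    S = Φ̄(λ₁) / Φ̄(λ₂),

where Φ̄ denotes the standard normal survival function, is uniformly
distributed on [0, 1], so the spacing test rejects at level α whenever S ≤ α.)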
More precisely, we investigate the power of the spacing test for LARS and
prove that it is unbiased: its power is always greater than or equal to the
significance level α. In particular, we describe the power of this test
under various scenarios: we prove that its rejection region is optimal when the
predictors are orthogonal; as the level α goes to zero, we show that the
probability of getting a true positive is much greater than α; and we
give a detailed description of its power in the case of two predictors.
Moreover, we numerically compare the spacing test for LARS with Pearson's
chi-squared goodness-of-fit test.