Penalized regression models such as the Lasso have proved useful for variable
selection in many fields, especially in high-dimensional settings where the
number of predictors far exceeds the number of observations. These
methods identify and rank important variables but generally provide no
inference for the selected variables. Thus, the selected variables may be
the "most important" yet need not be statistically significant. We propose a
significance test for the selection made by the Lasso: a procedure that
provides inference and p-values for the features the Lasso chooses. The method
rephrases the null hypothesis and uses a randomization approach that ensures
the error rate is controlled even for small samples. We demonstrate the
ability of the algorithm to compute p-values of the expected magnitude with
simulated data across a range of scenarios involving various effect
strengths and correlations between predictors. The algorithm is also applied to
a prostate cancer dataset that has been analyzed in recent papers on the
subject. The proposed method is found to provide a powerful way to make
inference for feature selection even for small samples and when the number of
predictors is several orders of magnitude larger than the number of
observations. The algorithm is implemented in the MESS package in R and is
freely available.
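
The abstract does not spell out the package interface, so the following R sketch is only a rough illustration of the general idea, a randomization (permutation) null for features selected by the Lasso. It uses glmnet, a permuted response, and a max-coefficient null statistic; the simulated data, the choice of statistic, and the per-feature p-value calculation are assumptions for illustration and do not reproduce the MESS implementation or the paper's exact null hypothesis.

    ## Illustrative sketch only (not the MESS interface): a generic
    ## permutation-based check of Lasso selection. The response is permuted,
    ## the Lasso is refit, and each selected feature's observed coefficient is
    ## compared with the null distribution of the largest absolute coefficient.
    library(glmnet)

    set.seed(1)
    n <- 50; p <- 500
    x <- matrix(rnorm(n * p), n, p)
    y <- 1.5 * x[, 1] + rnorm(n)                       # only predictor 1 carries signal

    fit <- cv.glmnet(x, y)
    beta_obs <- abs(coef(fit, s = "lambda.min")[-1])   # drop the intercept
    selected <- which(beta_obs > 0)

    ## Null distribution of the largest absolute coefficient under permuted y
    B <- 200                                           # permutations (slow but runnable)
    beta_null <- replicate(B, {
      fit_perm <- cv.glmnet(x, sample(y))
      max(abs(coef(fit_perm, s = "lambda.min")[-1]))
    })

    ## Illustrative per-feature p-values for the selected features
    p_values <- sapply(selected, function(j) mean(beta_null >= beta_obs[j]))
    cbind(feature = selected, p_value = p_values)

In this toy setting the truly associated predictor should receive a small p-value while spuriously selected predictors should not; the actual procedure in the paper and its MESS implementation should be consulted for the properly controlled test.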