High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso
The goal of supervised feature selection is to find a subset of input
features that are responsible for predicting output values. The least absolute
shrinkage and selection operator (Lasso) allows computationally efficient
feature selection based on linear dependency between input features and output
values. In this paper, we consider a feature-wise kernelized Lasso for
capturing non-linear input-output dependency. We first show that, with
particular choices of kernel functions, non-redundant features with strong
statistical dependence on output values can be found in terms of kernel-based
independence measures. We then show that the globally optimal solution can be
efficiently computed; this makes the approach scalable to high-dimensional
problems. The effectiveness of the proposed method is demonstrated through
feature selection experiments with thousands of features.
Comment: 18 pages
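The core computation here is a non-negative Lasso over feature-wise kernel Gram matrices. Below is a minimal sketch in that spirit, assuming Gaussian kernels for both inputs and outputs; the helper names (gram, hsic_lasso) and the standard-deviation bandwidth choice are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of feature-wise kernelized Lasso selection (assumptions:
# Gaussian kernels, bandwidth = std of each variable, scikit-learn solver).
import numpy as np
from sklearn.linear_model import Lasso

def gram(v, sigma):
    """Centered, Frobenius-normalized Gaussian Gram matrix of a 1-d vector."""
    d2 = (v[:, None] - v[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma ** 2))
    n = len(v)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = H @ K @ H
    return Kc / np.linalg.norm(Kc)             # Frobenius normalization

def hsic_lasso(X, y, lam=1e-3):
    d = X.shape[1]
    L = gram(y, np.std(y) + 1e-12).ravel()     # vectorized output Gram
    Ks = np.column_stack([gram(X[:, k], np.std(X[:, k]) + 1e-12).ravel()
                          for k in range(d)])  # one column per input feature
    # Non-negative Lasso over Gram matrices: positive weights mark
    # non-redundant features with strong dependence on the output.
    model = Lasso(alpha=lam, positive=True, fit_intercept=False)
    model.fit(Ks, L)
    return np.flatnonzero(model.coef_ > 0), model.coef_

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=100)
print("selected:", hsic_lasso(X, y)[0])        # features 0 and 1 expected
```

Since the problem is a convex non-negative Lasso, the selected set corresponds to a globally optimal solution, which is what makes the approach scale to thousands of features.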
Localized Lasso for High-Dimensional Regression
We introduce the localized Lasso, which is suited for learning models that
are both interpretable and have high predictive power in problems with high
dimensionality and small sample size. More specifically, we consider a
function defined by local sparse models, one at each data point. We introduce
sample-wise network regularization to borrow strength across the models, and
sample-wise exclusive group sparsity (a.k.a., the $\ell_{1,2}$ norm) to introduce
diversity into the choice of feature sets in the local models. The local models
are interpretable in terms of similarity of their sparsity patterns. The cost
function is convex, and thus has a globally optimal solution. Moreover, we
propose a simple yet efficient iterative least-squares based optimization
procedure for the localized Lasso, which does not need a tuning parameter, and
is guaranteed to converge to a globally optimal solution. The solution is
empirically shown to outperform alternatives for both simulated and genomic
personalized medicine data.
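To make the three terms of such a cost function concrete, here is a hedged sketch of a squared-loss objective of this form. The symbols are illustrative assumptions, not the paper's notation: W stacks one local weight vector per sample, R is the sample network, and lam_net/lam_exc weight the two penalties.

```python
# Sketch of a localized-Lasso-style cost (assumed squared loss; W, R,
# lam_net, lam_exc are our names, and the paper's exact objective and
# optimizer may differ in details).
import numpy as np

def localized_lasso_cost(W, X, y, R, lam_net, lam_exc):
    """W: (n, d), one local sparse model per sample; R: (n, n) sample graph."""
    fit = np.sum((y - np.sum(W * X, axis=1)) ** 2)   # per-sample squared loss
    # Network regularization: borrows strength by pulling together the
    # models of samples that are linked in R.
    diff = W[:, None, :] - W[None, :, :]             # (n, n, d) pairwise gaps
    net = np.sum(R * np.linalg.norm(diff, axis=2))
    # Exclusive group sparsity (squared l1 per local model): encourages
    # different samples to pick different, sparse feature sets.
    exc = np.sum(np.sum(np.abs(W), axis=1) ** 2)
    return fit + lam_net * net + lam_exc * exc
```

Because each term is convex in W, a cost of this shape has a globally optimal solution, consistent with the convergence guarantee claimed for the iterative least-squares procedure.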
Inference for feature selection using the Lasso with high-dimensional data
Penalized regression models such as the Lasso have proved useful for variable
selection in many fields - especially for situations with high-dimensional data
where the number of predictors far exceeds the number of observations. These
methods identify and rank variables of importance but do not generally provide
any inference for the selected variables. Thus, the variables selected may be
the "most important" yet need not be statistically significant. We propose a
significance test for the selection made by the Lasso: a procedure that
computes p-values for the features it chooses. This method
rephrases the null hypothesis and uses a randomization approach which ensures
that the error rate is controlled even for small samples. We demonstrate the
ability of the algorithm to compute p-values of the expected magnitude with
simulated data across a multitude of scenarios involving various effect
strengths and degrees of correlation between predictors. The algorithm is also applied to
a prostate cancer dataset that has been analyzed in recent papers on the
subject. The proposed method is found to provide a powerful way to make
inference for feature selection even for small samples and when the number of
predictors is several orders of magnitude larger than the number of
observations. The algorithm is implemented in the MESS package in R and is
freely available.
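As an illustration of a randomization-style test, the sketch below permutes the response to simulate the null hypothesis and re-runs the Lasso selection. This is a generic permutation scheme written for clarity; it is not the paper's rephrased null hypothesis and not the MESS implementation (which is in R), and the function name selection_pvalue is our own.

```python
# Illustrative permutation-style significance test for a Lasso-selected
# feature (a sketch of the general idea, not the MESS/R implementation).
import numpy as np
from sklearn.linear_model import LassoCV

def selection_pvalue(X, y, feature, n_perm=100, seed=0):
    """Fraction of null permutations in which `feature` is selected with a
    coefficient at least as large as on the original data."""
    rng = np.random.default_rng(seed)
    ref = np.abs(LassoCV(cv=5).fit(X, y).coef_[feature])
    hits = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)      # breaks the X-y dependence (null)
        coef = np.abs(LassoCV(cv=5).fit(X, y_perm).coef_[feature])
        hits += coef >= ref
    return (1 + hits) / (1 + n_perm)     # add-one correction for validity

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))           # p >> n, as in the setting above
y = 2.0 * X[:, 0] + rng.normal(size=50)
print("p-value for feature 0:", selection_pvalue(X, y, feature=0))
```

Because the null distribution is generated by randomization rather than asymptotic theory, the error rate is controlled even at small sample sizes, which is the property the abstract emphasizes.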