Measurement Error in Lasso: Impact and Correction
Regression with the lasso penalty is a popular tool for performing dimension
reduction when the number of covariates is large. In many applications of the
lasso, like in genomics, covariates are subject to measurement error. We study
the impact of measurement error on linear regression with the lasso penalty,
both analytically and in simulation experiments. A simple method of correction
for measurement error in the lasso is then considered. In the large sample
limit, the corrected lasso yields sign consistent covariate selection under
conditions very similar to the lasso with perfect measurements, whereas the
uncorrected lasso requires much more stringent conditions on the covariance
structure of the data. Finally, we suggest methods to correct for measurement
error in generalized linear models with the lasso penalty, which we study
empirically in simulation experiments with logistic regression, and also apply
to a classification problem with microarray data. We see that the corrected
lasso selects fewer false positives than the standard lasso, at a similar level
of true positives. The corrected lasso can therefore be used to obtain more
conservative covariate selection in genomic analysis.
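To make the correction concrete, the following is a minimal sketch of one measurement-error-corrected lasso fit by proximal gradient descent, assuming the measurement-error covariance matrix (sigma_uu below) is known; the function names, step-size rule, and iteration count are illustrative rather than the authors' implementation, and no extra constraint is imposed even though the corrected objective can be non-convex.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def corrected_lasso(W, y, sigma_uu, lam, n_iter=500):
    """Sketch of a corrected lasso for noisy covariates W = X + U.

    The naive Gram matrix W'W/n estimates Sigma_xx + Sigma_uu, so the
    correction subtracts the (assumed known) error covariance sigma_uu
    before running proximal gradient descent with soft-thresholding.
    """
    n, p = W.shape
    gram = W.T @ W / n - sigma_uu          # corrected, possibly indefinite
    cross = W.T @ y / n
    step = 1.0 / max(np.linalg.eigvalsh(gram).max(), 1e-8)
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = gram @ beta - cross
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```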
Least squares after model selection in high-dimensional sparse models
In this article we study post-model selection estimators that apply ordinary
least squares (OLS) to the model selected by first-step penalized estimators,
typically Lasso. It is well known that Lasso can estimate the nonparametric
regression function at nearly the oracle rate, and is thus hard to improve
upon. We show that the OLS post-Lasso estimator performs at least as well as
Lasso in terms of the rate of convergence, and has the advantage of a smaller
bias. Remarkably, this performance occurs even if the Lasso-based model
selection "fails" in the sense of missing some components of the "true"
regression model. By the "true" model, we mean the best s-dimensional
approximation to the nonparametric regression function chosen by the oracle.
Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso,
in the sense of a strictly faster rate of convergence, if the Lasso-based model
selection correctly includes all components of the "true" model as a subset and
also achieves sufficient sparsity. In the extreme case, when Lasso perfectly
selects the "true" model, the OLS post-Lasso estimator becomes the oracle
estimator. An important ingredient in our analysis is a new sparsity bound on
the dimension of the model selected by Lasso, which guarantees that this
dimension is at most of the same order as the dimension of the "true" model.
Our rate results are nonasymptotic and hold in both parametric and
nonparametric models. Moreover, our analysis is not limited to the Lasso
estimator acting as a selector in the first step, but also applies to any other
estimator, for example, various forms of thresholded Lasso, with good rates and
good sparsity properties. Our analysis covers both traditional thresholding and
a new practical, data-driven thresholding scheme that induces additional
sparsity subject to maintaining a certain goodness of fit. The latter scheme
has theoretical guarantees similar to those of Lasso or OLS post-Lasso, but it
dominates those procedures as well as traditional thresholding in a wide
variety of experiments.
Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the
International Statistical Institute/Bernoulli Society
(http://isi.cbs.nl/BS/bshome.htm), DOI: http://dx.doi.org/10.3150/11-BEJ410.
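For concreteness, a minimal sketch of the OLS post-Lasso estimator using scikit-learn is given below; the cross-validated penalty choice is a placeholder, and the paper's analysis applies equally to other first-step selectors with good sparsity properties.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def ols_post_lasso(X, y):
    """First step: Lasso selects a model. Second step: refit the selected
    covariates by ordinary least squares to reduce shrinkage bias."""
    lasso = LassoCV(cv=5).fit(X, y)
    support = np.flatnonzero(lasso.coef_)      # covariates selected by Lasso
    beta = np.zeros(X.shape[1])
    if support.size > 0:
        beta[support] = LinearRegression().fit(X[:, support], y).coef_
    return beta, support
```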
A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models
Constructing confidence intervals for the coefficients of high-dimensional
sparse linear models remains a challenge, mainly because of the complicated
limiting distributions of the widely used estimators, such as the lasso.
Several methods have been developed for constructing such intervals. Bootstrap
lasso+ols is notable for its technical simplicity, good interpretability, and
performance that is comparable with that of other more complicated methods.
However, bootstrap lasso+ols depends on the beta-min assumption, a theoretical
criterion that is often violated in practice. Thus, we introduce a new method,
called bootstrap lasso+partial ridge, to relax this assumption. Lasso+partial
ridge is a two-stage estimator. First, the lasso is used to select features.
Then, the partial ridge is used to refit the coefficients. Simulation results
show that bootstrap lasso+partial ridge outperforms bootstrap lasso+ols when
there exist small but nonzero coefficients, a common situation that violates
the beta-min assumption. For such coefficients, the confidence intervals
constructed using bootstrap lasso+partial ridge have, on average, larger
coverage probabilities than those of bootstrap lasso+ols. Bootstrap
lasso+partial ridge also has, on average, shorter confidence interval
lengths than those of the de-sparsified lasso methods, regardless of whether
the linear models are misspecified. Additionally, we provide theoretical
guarantees for bootstrap lasso+partial ridge under appropriate conditions, and
implement it in the R package "HDCI".
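An illustrative reimplementation of the two-stage estimator (not the HDCI package itself) is sketched below: the partial ridge refits all coefficients while applying the ridge penalty only to covariates the lasso did not select, and percentile intervals are taken over bootstrap replications; the penalty level and number of bootstrap draws are placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.utils import resample

def lasso_partial_ridge(X, y, lam_ridge=1.0):
    """Stage 1: lasso selects features. Stage 2: refit all coefficients,
    shrinking only the features the lasso did not select."""
    n, p = X.shape
    selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
    penalty = np.ones(p)
    penalty[selected] = 0.0                    # no ridge penalty on selected features
    return np.linalg.solve(X.T @ X + n * lam_ridge * np.diag(penalty), X.T @ y)

def bootstrap_ci(X, y, level=0.95, n_boot=200):
    """Percentile bootstrap confidence intervals for each coefficient."""
    draws = np.array([lasso_partial_ridge(*resample(X, y)) for _ in range(n_boot)])
    alpha = (1.0 - level) / 2.0
    return np.quantile(draws, [alpha, 1.0 - alpha], axis=0)
```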
Omitted variable bias of Lasso-based inference methods: A finite sample analysis
We study the finite sample behavior of Lasso-based inference methods such as
post double Lasso and debiased Lasso. We show that these methods can exhibit
substantial omitted variable biases (OVBs) due to Lasso not selecting relevant
controls. This phenomenon can occur even when the coefficients are sparse and
the sample size is large and larger than the number of controls. Therefore,
relying on the existing asymptotic inference theory can be problematic in
empirical applications. We compare the Lasso-based inference methods to modern
high-dimensional OLS-based methods and provide practical guidance.
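As a reference point for the methods being analyzed, a minimal sketch of post double Lasso is shown below (tuning choices are placeholders): lasso the outcome and the treatment on the controls separately, take the union of the selected controls, and regress the outcome on the treatment plus that union by OLS. The omitted variable bias discussed above arises when relevant controls are dropped in both selection steps.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def post_double_lasso(y, d, Z):
    """Estimate the effect of treatment d on outcome y, with controls Z.
    Controls selected in either lasso step enter the final OLS."""
    sel_y = np.flatnonzero(LassoCV(cv=5).fit(Z, y).coef_)   # controls predicting y
    sel_d = np.flatnonzero(LassoCV(cv=5).fit(Z, d).coef_)   # controls predicting d
    union = np.union1d(sel_y, sel_d)
    X = np.column_stack([d] + ([Z[:, union]] if union.size else []))
    return LinearRegression().fit(X, y).coef_[0]            # coefficient on d
```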
Random lasso
We propose a computationally intensive method, the random lasso method, for
variable selection in linear models. The method consists of two major steps. In
step 1, the lasso method is applied to many bootstrap samples, each using a set
of randomly selected covariates. This step yields an importance measure for
each covariate. In step 2, a procedure similar to the first step is
implemented with the exception that for each bootstrap sample, a subset of
covariates is randomly selected with unequal selection probabilities determined
by the covariates' importance. Adaptive lasso may be used in the second step
with weights determined by the importance measures. The final set of covariates
and their coefficients are determined by averaging bootstrap results obtained
from step 2. The proposed method alleviates some of the limitations of lasso,
elastic-net and related methods noted especially in the context of microarray
data analysis: it tends to remove highly correlated variables altogether or
select them all, and maintains maximal flexibility in estimating their
coefficients, particularly with different signs; the number of selected
variables is no longer limited by the sample size; and the resulting prediction
accuracy is competitive or superior compared to the alternatives. We illustrate
the proposed method by extensive simulation studies. The proposed method is
also applied to a Glioblastoma microarray data analysis.
Comment: Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics
(http://www.imstat.org), DOI: http://dx.doi.org/10.1214/10-AOAS377.
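A condensed sketch of the two-step procedure is given below; the numbers of bootstrap samples and of candidate covariates per sample (q1 and q2) are tuning parameters, the lasso penalty is fixed for brevity, and the plain lasso stands in for the adaptive-lasso variant of step 2.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.utils import resample

def random_lasso(X, y, q1, q2, n_boot=200, lam=0.1, seed=0):
    """Two-step random lasso: step-1 importance measures determine the
    covariate sampling probabilities in step 2; the final coefficients
    are averages over the step-2 bootstrap fits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def one_fit(q, probs):
        cols = rng.choice(p, size=q, replace=False, p=probs)
        Xb, yb = resample(X, y)                 # bootstrap sample of the rows
        beta = np.zeros(p)
        beta[cols] = Lasso(alpha=lam).fit(Xb[:, cols], yb).coef_
        return beta

    # Step 1: covariates drawn with equal probability -> importance measures
    step1 = np.mean([one_fit(q1, None) for _ in range(n_boot)], axis=0)
    importance = np.abs(step1) + 1e-12          # small floor keeps probabilities valid
    probs = importance / importance.sum()

    # Step 2: covariates drawn with probability proportional to importance
    return np.mean([one_fit(q2, probs) for _ in range(n_boot)], axis=0)
```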
