
    Estimation in high-dimensional linear models with deterministic design matrices

    Because of advances in technology, modern statistical studies often encounter linear models with the number of explanatory variables much larger than the sample size. Estimation and variable selection in these high-dimensional problems with deterministic design points are very different from those in the case of random covariates, because the high-dimensional regression parameter vector is in general not identifiable. We show that a reasonable approach is to focus on the projection of the regression parameter vector onto the linear space generated by the design matrix. In this work, we consider the ridge regression estimator of the projection vector and propose to threshold the ridge regression estimator when the projection vector is sparse in the sense that many of its components are small. The proposed estimator has an explicit form and is easy to use in applications. Asymptotic properties such as the consistency of variable selection and estimation and the convergence rate of the prediction mean squared error are established under some sparsity conditions on the projection vector. A simulation study is also conducted to examine the performance of the proposed estimator.
    Comment: Published at http://dx.doi.org/10.1214/12-AOS982 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
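    A minimal numerical sketch of the recipe described above (a ridge regression estimate followed by hard thresholding of its small components) is given below, assuming a deterministic design matrix with many more columns than rows. The function name `thresholded_ridge` and the tuning values `ridge_lambda` and `threshold` are placeholders for illustration, not the data-driven choices analyzed in the paper.

```python
import numpy as np

def thresholded_ridge(X, y, ridge_lambda=1.0, threshold=0.1):
    """Ridge estimate of the (projected) regression vector, then hard thresholding.

    Illustrative sketch: ridge_lambda and threshold are placeholder tuning
    parameters, not the calibrated choices studied in the paper.
    """
    n, p = X.shape
    # Ridge solution (X'X + lambda*I_p)^{-1} X'y, computed via the dual form
    # X'(XX' + lambda*I_n)^{-1} y, which is cheaper when p >> n.
    beta_ridge = X.T @ np.linalg.solve(X @ X.T + ridge_lambda * np.eye(n), y)
    # Hard-threshold small components to obtain a sparse estimate.
    return np.where(np.abs(beta_ridge) > threshold, beta_ridge, 0.0)

# Example with synthetic data where p is much larger than n.
rng = np.random.default_rng(0)
n, p = 50, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(n)
beta_hat = thresholded_ridge(X, y, ridge_lambda=1.0, threshold=0.5)
print("selected variables:", np.nonzero(beta_hat)[0])
```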

    The Smooth-Lasso and other $\ell_1+\ell_2$-penalized methods

    We consider a linear regression problem in a high-dimensional setting where the number of covariates $p$ can be much larger than the sample size $n$. In such a situation, one often assumes sparsity of the regression vector, i.e., that the regression vector contains many zero components. We propose a Lasso-type estimator $\hat{\beta}^{Quad}$ (where 'Quad' stands for quadratic) which is based on two penalty terms. The first is the $\ell_1$ norm of the regression coefficients, used to exploit the sparsity of the regression as done by the Lasso estimator, whereas the second is a quadratic penalty term introduced to capture some additional information on the setting of the problem. We detail two special cases: the Elastic-Net $\hat{\beta}^{EN}$, which deals with sparse problems where correlations between variables may exist, and the Smooth-Lasso $\hat{\beta}^{SL}$, which responds to sparse problems where successive regression coefficients are known to vary slowly (in some situations, this can also be interpreted in terms of correlations between successive variables). From a theoretical point of view, we establish variable selection consistency results and show that $\hat{\beta}^{Quad}$ achieves a Sparsity Inequality, i.e., a bound in terms of the number of non-zero components of the 'true' regression vector. These results are provided under a weaker assumption on the Gram matrix than the one used by the Lasso; in some situations this guarantees a significant improvement over the Lasso. Furthermore, a simulation study is conducted and shows that the S-Lasso $\hat{\beta}^{SL}$ performs better than known methods such as the Lasso, the Elastic-Net $\hat{\beta}^{EN}$, and the Fused-Lasso with respect to estimation accuracy. This is especially the case when the regression vector is 'smooth', i.e., when the variations between successive coefficients of the unknown regression parameter are small. The study also reveals that the theoretical calibration of the tuning parameters and the one based on 10-fold cross-validation yield two S-Lasso solutions with close performance.
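    For the Smooth-Lasso special case, one standard way to compute an $\ell_1+\ell_2$-penalized estimate of this kind is to fold the quadratic term into an augmented least-squares problem and then run an ordinary Lasso, in the same spirit as the naive Elastic-Net construction. The sketch below assumes the criterion $\|y - X\beta\|^2 + \lambda_1\|\beta\|_1 + \lambda_2\sum_j(\beta_{j+1}-\beta_j)^2$; the function name and the penalty levels `lam1`, `lam2` are illustrative placeholders, not the theoretical calibration studied in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def smooth_lasso(X, y, lam1=0.1, lam2=1.0):
    """Smooth-Lasso-style estimate via the augmented-data trick.

    Minimizes ||y - X b||^2 + lam1 * ||b||_1 + lam2 * sum_j (b_{j+1} - b_j)^2
    by stacking sqrt(lam2) * D (the first-difference matrix) under X and
    zeros under y, then solving an ordinary Lasso on the augmented data.
    lam1 and lam2 are illustrative placeholders.
    """
    n, p = X.shape
    # First-difference matrix D: (p-1) x p, row j has -1 at column j and +1 at j+1.
    D = np.diff(np.eye(p), axis=0)
    X_aug = np.vstack([X, np.sqrt(lam2) * D])
    y_aug = np.concatenate([y, np.zeros(p - 1)])
    # sklearn's Lasso minimizes (1/(2m))||y - Xb||^2 + alpha*||b||_1,
    # so rescale lam1 to match the unnormalized criterion above.
    m = X_aug.shape[0]
    model = Lasso(alpha=lam1 / (2 * m), fit_intercept=False, max_iter=10000)
    model.fit(X_aug, y_aug)
    return model.coef_
```

    The same augmentation handles any quadratic penalty of the form $\beta^\top M^\top M\beta$ by stacking $M$ under the design matrix, which is also how the naive Elastic-Net reduces to a Lasso.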

    Partial Consistency with Sparse Incidental Parameters

    The penalized estimation principle is fundamental to high-dimensional problems. In the literature, it has been extensively and successfully applied to various models with only structural parameters. In contrast, in this paper we apply this penalization principle to a linear regression model with a finite-dimensional vector of structural parameters and a high-dimensional vector of sparse incidental parameters. For the estimators of the structural parameters, we derive their consistency and asymptotic normality, which reveals an oracle property. However, the penalized estimators of the incidental parameters possess only partial selection consistency, not full consistency. This is an interesting partial consistency phenomenon: the structural parameters are consistently estimated while the incidental ones cannot be. For the structural parameters, we also consider an alternative two-step penalized estimator, which has fewer possible asymptotic distributions and is thus more suitable for statistical inference. We further extend the methods and results to the case where the dimension of the structural parameter vector diverges with, but more slowly than, the sample size. A data-driven approach for selecting the penalty regularization parameter is provided. The finite-sample performance of the penalized estimators of the structural parameters is evaluated by simulations, and a real data set is analyzed.
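    The abstract does not spell out the model, but a common formulation of this setup is $y_i = x_i^\top\beta + \gamma_i + \varepsilon_i$, where $\beta$ is the low-dimensional structural parameter and $\gamma$ collects one sparse incidental parameter per observation, with the penalty applied only to $\gamma$. The sketch below uses an $\ell_1$ penalty and a simple alternating scheme purely for illustration; the paper's penalty function, algorithm, and tuning may well differ.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: componentwise argmin_g (g - z)^2 + 2*t*|g|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_structural_with_sparse_incidental(X, y, lam=1.0, n_iter=200):
    """Alternating minimization for
        min_{beta, gamma} ||y - X beta - gamma||^2 + lam * ||gamma||_1,
    where beta is the low-dimensional structural parameter and gamma holds
    one sparse incidental parameter per observation.  Hypothetical sketch:
    the l1 penalty, the algorithm, and lam are illustrative assumptions.
    """
    n, p = X.shape
    gamma = np.zeros(n)
    X_pinv = np.linalg.pinv(X)          # least-squares solver for beta
    beta = X_pinv @ y
    for _ in range(n_iter):
        beta = X_pinv @ (y - gamma)     # OLS on incidental-adjusted responses
        gamma = soft_threshold(y - X @ beta, lam / 2.0)  # sparse incidental part
    return beta, gamma
```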