
    On the Distribution of the Adaptive LASSO Estimator

    We study the distribution of the adaptive LASSO estimator (Zou, 2006) in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection and for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly non-normal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained and is shown to be slower than $n^{-1/2}$ when the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the 'oracle' property of the adaptive LASSO estimator established in Zou (2006). Moreover, we also provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator. The theoretical results, which are obtained for a regression model with orthogonal design, are complemented by a Monte Carlo study using non-orthogonal regressors. Comment: revised version; minor changes and some material added
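    The non-normality is easy to see by simulation. Below is a minimal Monte Carlo sketch for the simplest orthogonal-design case, a Gaussian location model; the penalty normalization, sample size, and tuning value are illustrative assumptions, not choices taken from the paper.

```python
# Sketch: finite-sample distribution of the adaptive LASSO in a Gaussian
# location model y_i = theta + eps_i (the simplest orthogonal design).
import numpy as np

rng = np.random.default_rng(0)

def adaptive_lasso_location(y, lam):
    """Closed-form adaptive LASSO (weight 1/|OLS|) in the location model:
    argmin_t (t - ybar)^2 + lam * |t| / |ybar|."""
    ybar = y.mean()
    if ybar == 0.0:
        return 0.0
    return np.sign(ybar) * max(abs(ybar) - lam / (2.0 * abs(ybar)), 0.0)

n, theta, reps = 100, 0.1, 20000
lam = 1.0 / n     # illustrative "conservative" tuning level (assumption)

est = np.array([adaptive_lasso_location(theta + rng.standard_normal(n), lam)
                for _ in range(reps)])
scaled = np.sqrt(n) * (est - theta)

# The law of sqrt(n)*(est - theta) mixes an atom (whenever the estimator
# is exactly zero) with a continuous part -- visibly non-normal.
print("P(estimator == 0) =", (est == 0.0).mean())
print("mean / sd of scaled estimator:", scaled.mean(), scaled.std())
```

    The point mass the estimator places exactly at zero is what keeps the centered and scaled distribution away from normality for every choice of the tuning parameter.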

    Asymptotic Confidence Regions Based on the Adaptive Lasso with Partial Consistent Tuning

    We construct confidence sets based on an adaptive Lasso estimator with componentwise tuning in the framework of a low-dimensional linear regression model. We consider the case where at least one of the components is penalized at the rate of consistent model selection and where certain components may not be penalized at all. We perform a detailed study of the consistency properties and the asymptotic distribution that includes the effects of componentwise tuning within a so-called moving-parameter framework. These results enable us to explicitly provide a set $\mathcal{M}$ such that every open superset acts as a confidence set with uniform asymptotic coverage equal to 1, whereas every proper closed subset with non-empty interior is a confidence set with uniform asymptotic coverage equal to 0. The shape of the set $\mathcal{M}$ depends on the regressor matrix as well as the deviations within the componentwise tuning parameters. Our findings can be viewed as a generalization of Pötscher & Schneider (2010), who considered confidence intervals based on components of the adaptive Lasso estimator for the case of orthogonal regressors.
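    A small numerical sketch may help fix ideas: componentwise tuning simply means a separate penalty level per coefficient, which a plain coordinate-descent solver handles directly. The adaptive weights 1/|OLS_j|, the data-generating process, and the per-component tuning levels below are illustrative assumptions, not the paper's.

```python
# Sketch: adaptive Lasso with componentwise tuning in a low-dimensional
# linear model, solved by coordinate descent.
import numpy as np

def adaptive_lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + sum_j lam_j * |b_j|, lam_j >= 0
    (lam_j = 0 leaves component j unpenalized)."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ss[j] * b[j]
            new = np.sign(rho) * max(abs(rho) - lam[j], 0.0) / col_ss[j]
            r += X[:, j] * (b[j] - new)   # keep the residual in sync
            b[j] = new
    return b

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, 0.5]) + rng.standard_normal(n)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
# Componentwise tuning: one component pushed toward consistent selection
# (larger value), one tuned conservatively, one not penalized at all.
base = np.array([2.0 / np.sqrt(n), 0.5 / np.sqrt(n), 0.0])
lam = base / np.abs(ols)               # adaptive weights 1/|OLS_j|
print("adaptive Lasso:", adaptive_lasso_cd(X, y, lam))
```

    Setting a component's tuning value to zero leaves that component unpenalized, matching the case the paper allows where certain components are not penalized at all.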

    Variable Selection in General Multinomial Logit Models

    The use of the multinomial logit model is typically restricted to applications with few predictors, because in high-dimensional settings maximum likelihood estimates tend to deteriorate. In this paper we propose a sparsity-inducing penalty that accounts for the special structure of multinomial models. In contrast to existing methods, it penalizes the parameters that are linked to one variable in a grouped way and thus yields variable selection instead of parameter selection. We develop a proximal gradient method that efficiently computes stable estimates. In addition, the penalization is extended to the important case of predictors that vary across response categories. We apply our estimator to modeling the party choice of voters in Germany, including voter-specific variables such as age and gender as well as party-specific features such as stance on nuclear energy and immigration.
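    A minimal sketch of the grouped-penalty idea: collect all category-specific parameters of one predictor into a row of a coefficient matrix and apply a row-wise soft-thresholding proximal step, so whole variables enter or leave the model together. This is a bare-bones proximal gradient loop under assumed data, step size, and penalty level, not the paper's estimator (which, among other things, also covers category-specific predictors).

```python
# Sketch: group-penalized multinomial logit via proximal gradient.
# Each row of B (one predictor, all K categories) is one penalty group.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def group_prox(B, t):
    """Row-wise group soft thresholding: shrink each variable's whole
    coefficient vector across categories toward zero."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    return np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0) * B

def fit(X, y, K, lam=0.05, step=0.1, n_iter=500):
    n, p = X.shape
    Y = np.eye(K)[y]                         # one-hot targets
    B = np.zeros((p, K))
    for _ in range(n_iter):
        G = X.T @ (softmax(X @ B) - Y) / n   # gradient of mean neg. log-lik.
        B = group_prox(B - step * G, step * lam)
    return B

rng = np.random.default_rng(2)
n, p, K = 500, 6, 3
X = rng.standard_normal((n, p))
true_B = np.zeros((p, K)); true_B[:2] = rng.standard_normal((2, K))
y = np.array([rng.choice(K, p=pr) for pr in softmax(X @ true_B)])
B = fit(X, y, K)
print("selected variables:", np.nonzero(np.linalg.norm(B, axis=1) > 1e-8)[0])
```

    Because the threshold acts on the norm of a whole row, a predictor is either dropped for all response categories at once or kept with all its parameters, which is exactly the variable-selection (rather than parameter-selection) behaviour the abstract describes.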

    Active Inference for Integrated State-Estimation, Control, and Learning

    This work presents an approach to control, state estimation, and learning of model (hyper)parameters for robotic manipulators. It is based on the active inference framework, prominent in computational neuroscience as a theory of the brain, in which behaviour arises from minimizing variational free energy. The robotic manipulator shows adaptive and robust behaviour compared to state-of-the-art methods. Additionally, we show the exact relationship to classic methods such as PID control. Finally, we show that by learning a temporal parameter and model variances, our approach can deal with unmodelled dynamics, damps oscillations, and is robust against disturbances and poor initial parameters. The approach is validated on the 'Franka Emika Panda' 7-DoF manipulator. Comment: 7 pages, 6 figures, accepted for presentation at the International Conference on Robotics and Automation (ICRA) 202
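    To make the free-energy mechanics concrete, here is a heavily simplified one-joint sketch (not the paper's implementation): both the state belief and the action descend the gradient of a quadratic free energy built from a sensory prediction error and a goal-biased dynamics prediction error. The plant, gains, variances, and goal encoding are all illustrative assumptions.

```python
# Sketch: active-inference loop for one joint.  Free energy (up to
# constants): F = (y - mu)^2 / (2*sig_y) + (goal - mu)^2 / (2*sig_w).
# Perception (mu) and action (u) both do gradient descent on F.
import numpy as np

dt, T = 0.01, 1000
sig_y, sig_w = 1.0, 1.0        # sensory / process variances (learnable in the paper)
k_mu, k_u = 5.0, 10.0          # gradient-descent gains (assumptions)
goal = 1.0                     # desired joint position

x, mu, u = 0.0, 0.0, 0.0       # true state, belief, action
rng = np.random.default_rng(3)
for _ in range(T):
    y = x + 0.01 * rng.standard_normal()   # noisy position sensor
    eps_y = (y - mu) / sig_y               # sensory prediction error
    eps_w = (goal - mu) / sig_w            # dynamics error: the belief is
                                           # "expected" to move to the goal
    mu += dt * k_mu * (eps_y + eps_w)      # perception: mu_dot = -k dF/dmu
    u  -= dt * k_u * eps_y                 # action: u_dot = -k dF/du, with
                                           # dy/du > 0 assumed for the plant
    x  += dt * (-x + u)                    # simple first-order plant
print(f"final position: {x:.3f}  belief: {mu:.3f}  action: {u:.3f}")
```

    Note how the action integrates the sensory prediction error over time; this integral-like behaviour is one place where a relationship to classic PID control, as mentioned in the abstract, can surface.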

    Scalable Sparse Cox's Regression for Large-Scale Survival Data via Broken Adaptive Ridge

    This paper develops a new scalable sparse Cox regression tool for sparse high-dimensional massive sample size (sHDMSS) survival data. The method performs local $L_0$-penalized Cox regression by repeatedly solving reweighted $L_2$-penalized Cox regressions. We show that the resulting estimator enjoys the best of $L_0$- and $L_2$-penalized Cox regression while overcoming their limitations. Specifically, the estimator is selection consistent, oracle for parameter estimation, and possesses a grouping property for highly correlated covariates. Simulation results suggest that when the sample size is large, the proposed method with pre-specified tuning parameters performs comparably to or better than some popular penalized regression methods. More importantly, because the method naturally enables adaptation of efficient algorithms for massive $L_2$-penalized optimization and does not require costly data-driven tuning parameter selection, it has a significant computational advantage for sHDMSS data, offering an average 5-fold speedup over its closest competitor in empirical studies.
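    The iterative scheme the abstract describes is easy to sketch: each step solves a ridge-penalized Cox fit reweighted by 1/beta_j^2 from the previous iterate, so coefficients driven to zero stay at zero while the rest approach the unpenalized fit. Below is a minimal sketch under assumptions: simulated exponential survival data, the Breslow partial likelihood without ties, scipy's L-BFGS-B in place of the paper's large-scale solvers, and illustrative penalty levels.

```python
# Sketch: broken adaptive ridge (BAR) for Cox regression via iteratively
# reweighted L2-penalized fits approximating an L0 penalty.
import numpy as np
from scipy.optimize import minimize

def cox_nll_grad(beta, X, time, event):
    """Negative Breslow partial log-likelihood and gradient (no tied times).
    After sorting by descending time, every risk set is a prefix, so the
    risk-set sums become cumulative sums."""
    order = np.argsort(-time)
    Xs, d = X[order], event[order].astype(bool)
    eta = Xs @ beta
    m = eta.max()
    w = np.exp(eta - m)
    cw = np.cumsum(w)                          # sum of exp(eta - m) over risk set
    cwx = np.cumsum(w[:, None] * Xs, axis=0)   # weighted covariate sums
    nll = -(eta[d].sum() - (np.log(cw[d]) + m).sum())
    grad = -(Xs[d].sum(axis=0) - (cwx[d] / cw[d, None]).sum(axis=0))
    return nll, grad

def bar_cox(X, time, event, lam=1.0, xi=1.0, n_iter=30, tol=1e-6):
    """BAR: iterate ridge fits with componentwise weights lam / beta_j^2."""
    def obj(b, w):
        nll, g = cox_nll_grad(b, X, time, event)
        return nll + (w * b ** 2).sum(), g + 2.0 * w * b

    p = X.shape[1]
    beta = minimize(obj, np.zeros(p), args=(np.full(p, xi),),
                    jac=True, method="L-BFGS-B").x   # initial plain ridge fit
    for _ in range(n_iter):
        w = lam / np.maximum(beta ** 2, 1e-10)       # reweighting ~ L0 surrogate
        new = minimize(obj, beta, args=(w,), jac=True, method="L-BFGS-B").x
        done = np.max(np.abs(new - beta)) < tol
        beta = new
        if done:
            break
    beta[np.abs(beta) < 1e-5] = 0.0                  # zero out collapsed coefficients
    return beta

rng = np.random.default_rng(4)
n, p = 500, 10
X = rng.standard_normal((n, p))
true = np.zeros(p); true[:3] = [1.0, -1.0, 0.5]
t = rng.exponential(np.exp(-X @ true))        # proportional-hazards event times
c = rng.exponential(2.0, size=n)              # independent censoring
time, event = np.minimum(t, c), (t <= c).astype(float)
print("BAR estimate:", np.round(bar_cox(X, time, event), 2))
```

    Because each step is a smooth $L_2$-penalized fit, the scheme can reuse whatever efficient large-scale ridge-Cox machinery is available, which is the source of the computational advantage the abstract reports.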