
    Finite sample performance of linear least squares estimators under sub-Gaussian martingale difference noise

    Linear least squares is a well-known technique for parameter estimation, used even when it is sub-optimal because of its very low computational requirements and because exact knowledge of the noise statistics is not required. Surprisingly, bounding the probability of large errors with finitely many samples has been left open, especially when dealing with correlated noise with unknown covariance. In this paper we analyze the finite sample performance of the linear least squares estimator under sub-Gaussian martingale difference noise. To analyze this question we apply concentration of measure bounds, obtaining tight bounds on the tail of the estimator's distribution. We show that the probability of exceeding a given accuracy decays exponentially fast in the number of samples, and we provide probability tail bounds on the estimation error's norm. Our analysis method is simple, relying on $L_\infty$-type bounds on the estimation error, and the tightness of the bounds is tested through simulation. The proposed bounds make it possible to predict the number of samples required for least squares estimation even when least squares is sub-optimal and used for computational simplicity. The finite sample analysis of least squares under this general noise model is novel.
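    As a rough illustration of the quantity these bounds control, the following self-contained Python sketch estimates the tail probability of the $L_\infty$ estimation error by Monte Carlo. It is not the paper's method; the dimension, accuracy level, trial count and the particular bounded martingale-difference noise recipe are all illustrative assumptions.

```python
# Monte Carlo sketch (illustrative, not the paper's method) of the tail
# P(||theta_hat - theta||_inf > eps) for least squares under a bounded
# (hence sub-Gaussian) martingale difference noise sequence.
import numpy as np

rng = np.random.default_rng(0)
d, eps, n_trials = 3, 0.05, 500          # illustrative choices
theta = np.ones(d)

def mds_noise(n):
    """e_t = s_t * z_t with z_t zero-mean, bounded, independent of the past,
    and a predictable scale s_t, so E[e_t | past] = 0 (martingale difference)."""
    e = np.zeros(n)
    for t in range(1, n):
        s_t = 0.5 + 0.5 * min(abs(e[t - 1]), 1.0)  # scale depends on the past
        e[t] = s_t * rng.uniform(-1.0, 1.0)        # bounded, zero conditional mean
    return e

for n in (100, 200, 400, 800):
    hits = 0
    for _ in range(n_trials):
        X = rng.standard_normal((n, d))
        y = X @ theta + mds_noise(n)
        theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        hits += np.max(np.abs(theta_hat - theta)) > eps
    print(f"n={n:4d}  P(||error||_inf > {eps}) ~ {hits / n_trials:.3f}")
```

    With this setup the empirical tail shrinks rapidly as n grows, which is the exponential decay the abstract's bounds quantify.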

    Pivotal estimation via square-root Lasso in nonparametric regression

    We propose a self-tuning $\sqrt{\mathrm{Lasso}}$ method that simultaneously resolves three important practical problems in high-dimensional regression analysis, namely, it handles the unknown scale, heteroscedasticity and (drastic) non-Gaussianity of the noise. In addition, our analysis allows for badly behaved designs, for example, perfectly collinear regressors, and generates sharp bounds even in extreme cases, such as the infinite variance case and the noiseless case, in contrast to Lasso. We establish various nonasymptotic bounds for $\sqrt{\mathrm{Lasso}}$, including prediction norm rate and sparsity. Our analysis is based on new impact factors that are tailored for bounding the prediction norm. In order to cover heteroscedastic non-Gaussian noise, we rely on moderate deviation theory for self-normalized sums to achieve Gaussian-like results under weak conditions. Moreover, we derive bounds on the performance of ordinary least squares (ols) applied to the model selected by $\sqrt{\mathrm{Lasso}}$, accounting for possible misspecification of the selected model. Under mild conditions, the rate of convergence of ols post $\sqrt{\mathrm{Lasso}}$ is as good as $\sqrt{\mathrm{Lasso}}$'s rate. As an application, we consider the use of $\sqrt{\mathrm{Lasso}}$ and ols post $\sqrt{\mathrm{Lasso}}$ as estimators of nuisance parameters in a generic semiparametric problem (nonlinear moment condition or $Z$-problem), resulting in a construction of $\sqrt{n}$-consistent and asymptotically normal estimators of the main parameters.
    Comment: Published at http://dx.doi.org/10.1214/14-AOS1204 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
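    For readers who want to experiment, here is a minimal sketch of the square-root Lasso objective and the ols-post refit step described above. It assumes cvxpy as the solver and uses an illustrative pivotal-style penalty level; neither the solver nor this exact penalty is prescribed by the paper.

```python
# Sketch (not the authors' code) of the square-root Lasso
#   minimize ||y - X b||_2 / sqrt(n) + lam * ||b||_1
# and of "ols post sqrt-Lasso" (refit unpenalized OLS on the selected support).
# cvxpy is an assumed dependency; lam is an illustrative choice that,
# notably, does not involve the unknown noise scale.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + rng.standard_t(df=3, size=n)  # heavy-tailed non-Gaussian noise

lam = 1.1 * np.sqrt(2.0 * np.log(p) / n)  # illustrative scale-free penalty level
b = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm(y - X @ b, 2) / np.sqrt(n)
                       + lam * cp.norm1(b))).solve()
beta_sqrt_lasso = b.value

# ols post sqrt-Lasso: refit least squares on the selected support.
support = np.flatnonzero(np.abs(beta_sqrt_lasso) > 1e-6)
beta_post = np.zeros(p)
coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
beta_post[support] = coef

print("selected support:", support)
print("sqrt-Lasso error:", np.linalg.norm(beta_sqrt_lasso - beta_true))
print("post-ols   error:", np.linalg.norm(beta_post - beta_true))
```

    The scale-free penalty is the practical point of the method: Lasso's penalty level must track the unknown noise scale, whereas here lam depends only on n and p.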