A Risk Comparison of Ordinary Least Squares vs Ridge Regression
We compare the risk of ridge regression to a simple variant of ordinary least
squares, in which one simply projects the data onto a finite dimensional
subspace (as specified by a Principal Component Analysis) and then performs an
ordinary (un-regularized) least squares regression in this subspace. This note
shows that the risk of this ordinary least squares method is within a constant
factor (namely 4) of the risk of ridge regression.
Comment: Appearing in JMLR 14, June 201
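The comparison in this abstract can be sketched numerically. A minimal NumPy illustration, assuming a simulated correlated Gaussian design, an illustrative ridge penalty lam = 1, and an assumed PCA dimension k (none of these values come from the note itself):

```python
# Sketch: ridge regression vs. OLS restricted to the top-k PCA subspace.
# The design, penalty, and k are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 30, 10                       # samples, features, assumed PCA dim
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated design
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

# Ridge: beta_ridge = (X'X + lam*I)^{-1} X'y
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# PCA-OLS: project onto the top-k right singular vectors, then plain OLS
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                               # (p, k) basis of the PCA subspace
Z = X @ Vk                                  # projected design
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_pca = Vk @ gamma                       # lift back to original coordinates

# In-sample prediction risk of each estimator
risk_ridge = np.mean((X @ (beta_ridge - beta)) ** 2)
risk_pca = np.mean((X @ (beta_pca - beta)) ** 2)
print("ridge risk:", risk_ridge)
print("pca-ols risk:", risk_pca)
```

The note's constant-factor guarantee concerns suitably matched choices of k and the ridge penalty; this sketch only shows how the two estimators are formed.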
A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares
We consider statistical as well as algorithmic aspects of solving large-scale
least-squares (LS) problems using randomized sketching algorithms. For a LS
problem with input data (X, y), with X ∈ ℝ^{n×p} and y ∈ ℝ^n, sketching
algorithms use a sketching matrix S ∈ ℝ^{r×n}, with r ≪ n. Then, rather than
solving the LS problem using the full data (X, y), sketching algorithms solve
the LS problem using only the sketched data (SX, Sy). Prior work has typically
adopted an algorithmic perspective, in that it has made no statistical
assumptions on the input X and y, and instead it has been assumed that the
data are fixed and worst-case (WC). Prior results show that, when using
sketching matrices such as random projections and leverage-score sampling
algorithms, with p < r ≪ n, the WC error is the same as solving the original
problem, up to a small constant. From a statistical perspective, we typically
consider the mean-squared error performance of randomized sketching
algorithms when data are generated according to a statistical model
y = Xβ + ε, where ε is a noise process. We provide a rigorous
comparison of both perspectives leading to insights on how they differ. To do
this, we first develop a framework for assessing algorithmic and statistical
aspects of randomized sketching methods. We then consider the statistical
prediction efficiency (PE) and the statistical residual efficiency (RE) of the
sketched LS estimator; and we use our framework to provide upper bounds for
several types of random projection and random sampling sketching algorithms.
Among other results, we show that the RE can be upper bounded when the sketch
size r is much smaller than n, while the PE typically requires the sample size
r to be substantially larger. Lower bounds developed in subsequent results
show that our upper bounds on PE cannot be improved.
Comment: 27 pages, 5 figures
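The sketched-LS setup the abstract describes can be illustrated with a Gaussian random-projection sketch, one of the sketch types it mentions. The problem sizes, noise model, and sketch dimension r below are assumptions for illustration:

```python
# Sketch: solve min ||S X b - S y|| with a Gaussian sketching matrix S,
# and compare the residual against the full-data OLS solution.
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 5000, 20, 200                    # full size n, features p, p < r << n
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)          # model y = X beta + noise

beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)       # full-data OLS

S = rng.normal(size=(r, n)) / np.sqrt(r)   # Gaussian random projection
beta_sk, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None) # sketched OLS

# Residual efficiency (RE): sketched residual norm over full residual norm;
# it is >= 1 by optimality of the full-data solution.
re = np.linalg.norm(y - X @ beta_sk) / np.linalg.norm(y - X @ beta_full)
print("RE:", re)
```

With r well above p, the RE comes out close to 1, in line with the worst-case guarantees the abstract summarizes; the PE would instead compare errors against the true mean X beta.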
Stein-Rule Estimation under an Extended Balanced Loss Function
This paper extends the balanced loss function to a more general set-up.
The ordinary least squares and Stein-rule estimators are exposed to this
general loss function with quadratic loss structure in a linear regression
model. Their risks are derived when the disturbances in the linear regression
model are not necessarily normally distributed. The dominance of the ordinary
least squares and Stein-rule estimators over each other, and the effect of
departure from the normality assumption of the disturbances on their risk
properties, are studied.
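The Stein-rule shrinkage of OLS that the abstract refers to can be sketched as follows; the shrinkage constant c is one classical choice, and the simulated data are illustrative, not the paper's set-up:

```python
# Sketch: classical Stein-rule estimator that shrinks the OLS estimator
# toward the origin by a data-dependent factor.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 8                               # illustrative sizes, p > 2
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ b_ols) ** 2)          # residual sum of squares

c = (p - 2) / (n - p + 2)                   # one common shrinkage constant
shrink = 1.0 - c * rss / (b_ols @ X.T @ X @ b_ols)
b_stein = shrink * b_ols                    # Stein-rule estimator
print("shrinkage factor:", shrink)
```

The risk comparison in the paper evaluates estimators of this form under the extended balanced loss; the sketch only shows the estimator's construction.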
Temporal Aggregation and Ordinary Least Squares Estimation of Cointegrating Regressions
The paper derives the asymptotic distribution of the ordinary least squares estimator of cointegrating vectors with temporally aggregated time series. It is shown that temporal aggregation reduces the bias and variance of the estimator for average sampling (temporal aggregation of flow series) and does not affect the limiting distribution for systematic sampling (temporal aggregation of stock series). A Monte Carlo experiment shows the consistency of the finite-sample results with the asymptotic theory.
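A toy Monte Carlo harness in the spirit of the experiment the abstract mentions; the data-generating process, the aggregation step m, and the block-averaging implementation of "average sampling" are all assumptions for illustration, not the paper's design:

```python
# Sketch: OLS slope in a cointegrating regression, estimated from raw data
# and from series averaged over blocks of m periods (flow aggregation).
import numpy as np

rng = np.random.default_rng(3)
m, T, reps, theta = 4, 400, 200, 1.0       # step, length, replications, true slope

errs_raw, errs_agg = [], []
for _ in range(reps):
    x = np.cumsum(rng.normal(size=T))      # I(1) regressor (random walk)
    y = theta * x + rng.normal(size=T)     # cointegrated with x
    b_raw = (x @ y) / (x @ x)              # OLS slope on raw data
    xa = x.reshape(-1, m).mean(axis=1)     # average sampling of x
    ya = y.reshape(-1, m).mean(axis=1)     # average sampling of y
    b_agg = (xa @ ya) / (xa @ xa)          # OLS slope on aggregated data
    errs_raw.append(b_raw - theta)
    errs_agg.append(b_agg - theta)

print("raw  mean error / sd:", np.mean(errs_raw), np.std(errs_raw))
print("agg  mean error / sd:", np.mean(errs_agg), np.std(errs_agg))
```

Both estimators are superconsistent here, so the errors are tiny; the paper's bias results concern richer data-generating processes than this exogenous-error toy.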
Simultaneous Prediction of Actual and Average Values of Study Variable Using Stein-rule Estimators
The simultaneous prediction of average and actual values of the study variable in a linear regression model is considered in this paper. Generally, either the ordinary least squares estimator or a Stein-rule estimator is employed to construct predictors for simultaneous prediction. This paper instead utilizes a linear combination of the ordinary least squares and Stein-rule predictors to construct improved predictors. Their efficiency properties are derived using small-disturbance asymptotic theory, and dominance conditions for the superiority of the predictors over each other are analyzed.
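The combined predictor the abstract describes can be sketched as a convex combination of the OLS and Stein-rule predictors; the weight w and the Stein-rule shrinkage constant below are assumed illustrative values, not the paper's optimal choices:

```python
# Sketch: combine the OLS and Stein-rule predictors with an assumed weight w.
import numpy as np

rng = np.random.default_rng(4)
n, p, w = 100, 8, 0.5                       # illustrative sizes and weight
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ b_ols) ** 2)
c = (p - 2) / (n - p + 2)                   # classical shrinkage constant
b_stein = (1.0 - c * rss / (b_ols @ X.T @ X @ b_ols)) * b_ols

pred_ols = X @ b_ols                        # OLS predictor
pred_stein = X @ b_stein                    # Stein-rule predictor
pred_comb = w * pred_ols + (1 - w) * pred_stein  # combined predictor
print("combined predictor MSE vs X beta:",
      np.mean((pred_comb - X @ beta) ** 2))
```

The paper derives which weights make such a combination dominate either component predictor; this sketch only fixes w for illustration.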