Finite sample approximation results for principal component analysis: a matrix perturbation approach
Principal component analysis (PCA) is a standard tool for dimensional
reduction of a set of $n$ observations (samples), each with $p$ variables. In
this paper, using a matrix perturbation approach, we study the nonasymptotic
relation between the eigenvalues and eigenvectors of PCA computed on a finite
sample of size $n$, and those of the limiting population PCA as $n \to \infty$.
As in machine learning, we present a finite sample theorem which holds with
high probability for the closeness between the leading eigenvalue and
eigenvector of sample PCA and population PCA under a spiked covariance model.
In addition, we also consider the relation between finite sample PCA and the
asymptotic results in the joint limit $p, n \to \infty$, with $p/n = c$. We present
a matrix perturbation view of the "phase transition phenomenon," and a simple
linear-algebra based derivation of the eigenvalue and eigenvector overlap in
this asymptotic limit. Moreover, our analysis also applies for finite $p, n$
where we show that although there is no sharp phase transition as in the
infinite case, either as a function of noise level or as a function of sample
size $n$, the eigenvector of sample PCA may exhibit a sharp "loss of tracking,"
suddenly losing its relation to the (true) eigenvector of the population PCA
matrix. This occurs due to a crossover between the eigenvalue due to the signal
and the largest eigenvalue due to noise, whose eigenvector points in a random
direction.
Comment: Published at http://dx.doi.org/10.1214/08-AOS618 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
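A minimal numerical sketch (not from the paper) of the phenomenon the abstract describes, under illustrative assumptions: data are drawn from a rank-one spiked covariance model $\Sigma = \sigma^2 I + \lambda v v^\top$, and the overlap $|\langle \hat{v}, v \rangle|$ between the leading eigenvectors of sample and population PCA is measured as the spike strength $\lambda$ crosses the critical value $\sigma^2 \sqrt{p/n}$. The function name, dimensions, and spike values are arbitrary choices for illustration.

    # Simulation sketch: eigenvector overlap in a rank-one spiked covariance
    # model. All parameter values are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def spiked_overlap(p, n, spike, sigma=1.0):
        """Overlap |<v_hat, v>| of leading sample and population eigenvectors."""
        v = np.zeros(p)
        v[0] = 1.0                                # population spike direction
        # Each row of X has covariance sigma^2 * I + spike * v v^T.
        X = sigma * rng.standard_normal((n, p)) \
            + np.sqrt(spike) * rng.standard_normal((n, 1)) * v
        S = X.T @ X / n                           # sample covariance matrix
        _, eigvecs = np.linalg.eigh(S)            # eigenvalues ascending
        return abs(eigvecs[:, -1] @ v)            # leading eigenvector vs v

    # Below the critical spike strength sigma^2 * sqrt(p/n) (about 1.41 here)
    # the overlap is near zero; above it the sample eigenvector tracks v.
    p, n = 400, 200
    for spike in [0.5, 1.0, 2.0, 4.0, 8.0]:
        print(f"spike={spike:4.1f}  overlap={spiked_overlap(p, n, spike):.3f}")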
On the Optimality of Averaging in Distributed Statistical Learning
A common approach to statistical learning with big-data is to randomly split
it among $m$ machines and learn the parameter of interest by averaging the $m$
individual estimates. In this paper, focusing on empirical risk minimization,
or equivalently M-estimation, we study the statistical error incurred by this
strategy. We consider two large-sample settings: First, a classical setting
where the number of parameters $p$ is fixed, and the number of samples per
machine $n \to \infty$. Second, a high-dimensional regime where both
$p, n \to \infty$ with $p/n \to \kappa \in (0, 1)$. For both regimes and under
suitable assumptions, we present asymptotically exact expressions for this
estimation error. In the fixed-$p$ setting, we
prove that to leading order averaging is as accurate as the centralized
solution. We also derive the second order error terms, and show that these can
be non-negligible, notably for non-linear models. The high-dimensional setting,
in contrast, exhibits a qualitatively different behavior: data splitting incurs
a first-order accuracy loss, which to leading order increases linearly with the
number of machines. The dependence of our error approximations on the number of
machines traces an interesting accuracy-complexity tradeoff, allowing the
practitioner an informed choice on the number of machines to deploy. Finally,
we confirm our theoretical analysis with several simulations.
Comment: Major changes from the previous version, particularly on the
second-order error approximation and its implications.
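A minimal sketch (not the authors' code) of the split-and-average strategy for one concrete M-estimator, ordinary least squares: the data are split across m machines, each machine computes its own estimate, and the m estimates are averaged and compared with the centralized solution. All problem sizes and the machine count below are illustrative.

    # Split-and-average sketch for least squares; sizes are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    p, n_total, m = 50, 20000, 10          # parameters, total samples, machines
    theta = rng.standard_normal(p)         # true parameter vector

    X = rng.standard_normal((n_total, p))
    y = X @ theta + rng.standard_normal(n_total)

    def ols(X_shard, y_shard):
        """Least-squares M-estimate on one machine's shard."""
        return np.linalg.lstsq(X_shard, y_shard, rcond=None)[0]

    theta_central = ols(X, y)              # centralized estimate on all data

    # Split into m shards, estimate on each, then average the m estimates.
    shards = np.array_split(np.arange(n_total), m)
    theta_avg = np.mean([ols(X[idx], y[idx]) for idx in shards], axis=0)

    print("centralized error:", np.linalg.norm(theta_central - theta))
    print("averaged error:   ", np.linalg.norm(theta_avg - theta))
    # With p fixed and n/m large, the two errors agree to leading order; when
    # p is comparable to n/m, averaging pays a penalty that grows with m.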
- …