Modern statistical analysis often encounters extremely large datasets. For such
datasets, conventional estimation methods can hardly be applied directly,
because practitioners typically have limited computing resources and, in most
cases, no access to distributed computing platforms (e.g., Hadoop or Spark).
How to analyze large datasets practically with limited computing resources thus
becomes a problem of great importance. To solve this problem, we
propose here a novel subsampling-based method with jackknifing. The key idea is
to treat the whole sample data as if they were the population. Then, multiple
subsamples with greatly reduced sizes are obtained by the method of simple
random sampling with replacement. Notably, we do not recommend sampling without
replacement, because doing so would incur a significant cost for data
processing on the hard drive. Such a cost does not arise if the
data are processed in memory. Because subsampled data have relatively small
sizes, they can be comfortably read into computer memory as a whole and then
processed easily. Based on subsampled datasets, jackknife-debiased estimators
can be obtained for the target parameter. The resulting estimators are
statistically consistent, with an extremely small bias. Finally, the
jackknife-debiased estimators from different subsamples are averaged together
to form the final estimator. We theoretically show that the final estimator is
consistent and asymptotically normal. Its asymptotic statistical efficiency can
be as good as that of the whole-sample estimator under very mild conditions.
The proposed method is simple enough to be implemented easily on most practical
computer systems and should therefore enjoy very wide applicability.
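
To make the procedure concrete, the following Python sketch implements the
three steps described above: draw subsamples by simple random sampling with
replacement, apply the standard leave-one-out jackknife bias correction on each
subsample, and average the debiased estimates. The function names, the toy
target parameter (E[X])^2, the Gaussian data, and the tuning constants (number
of subsamples, subsample size) are illustrative assumptions, not specifications
taken from the paper.

```python
import numpy as np


def jackknife_debias(sample, estimator):
    """Leave-one-out jackknife bias correction applied to one subsample."""
    n = len(sample)
    theta_hat = estimator(sample)                      # estimate on the full subsample
    loo = np.array([estimator(np.delete(sample, i))    # n leave-one-out estimates
                    for i in range(n)])
    # Standard jackknife-debiased estimator: n*theta_hat - (n-1)*mean(loo)
    return n * theta_hat - (n - 1) * loo.mean()


def subsample_jackknife_estimate(data, estimator, n_subsamples, subsample_size, seed=None):
    """Average jackknife-debiased estimates over subsamples drawn by
    simple random sampling with replacement from the full dataset."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_subsamples):
        # Simple random sampling with replacement from the whole sample
        idx = rng.integers(0, len(data), size=subsample_size)
        estimates.append(jackknife_debias(data[idx], estimator))
    # Final estimator: average of the debiased subsample estimates
    return float(np.mean(estimates))


if __name__ == "__main__":
    # Toy target parameter: theta = (E[X])^2. Its plug-in estimator
    # (sample mean squared) is biased upward by Var(X)/n, so the effect
    # of the jackknife correction is visible.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=3.0, size=100_000)  # true theta = 4

    def estimator(x):
        return x.mean() ** 2

    theta = subsample_jackknife_estimate(data, estimator,
                                         n_subsamples=200,
                                         subsample_size=500,
                                         seed=1)
    print(f"subsampling + jackknife estimate of theta: {theta:.4f}")
```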