2 research outputs found

    Information-Based Optimal Subdata Selection for Big Data Linear Regression

    Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable to extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling error, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward, and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.
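    The abstract does not spell out the selection rule, so the following is a minimal sketch, assuming the D-optimality-motivated rule the paper develops: for each covariate in turn, keep the rows with the smallest and the largest values among those not yet selected, so the subdata covers the extremes of every axis. The function name iboss_select and the parameter names are illustrative, not taken from the paper.

        import numpy as np

        def iboss_select(X, k):
            """Select k rows of X by taking, for each covariate in turn,
            the r = k/(2p) smallest and r largest values among the rows
            not yet chosen (a sketch of the D-optimality-motivated rule)."""
            n, p = X.shape
            r = k // (2 * p)                    # points per tail per covariate
            available = np.ones(n, dtype=bool)  # rows still eligible
            chosen = []
            for j in range(p):
                idx = np.flatnonzero(available)
                order = np.argsort(X[idx, j])   # full sort, kept simple here
                picked = idx[np.concatenate([order[:r], order[-r:]])]
                chosen.extend(picked.tolist())
                available[picked] = False
            return np.array(chosen)

        # Example: a subsample of size 200 from n = 10**5 rows, p = 5 covariates
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10**5, 5))
        subdata = X[iboss_select(X, 200)]

    The sketch sorts each remaining column for clarity; a partition-based selection (e.g., np.partition) would find the two tails in linear time per covariate, which is consistent with the speed and parallelizability the abstract claims.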

    Independence-Encouraging Subsampling for Nonparametric Additive Models

    The additive model is a popular nonparametric regression method due to its ability to retain modeling flexibility while avoiding the curse of dimensionality. The backfitting algorithm is an intuitive and widely used numerical approach for fitting additive models, but applying it to large datasets may incur a high computational cost, making it infeasible in practice. To address this problem, we propose a novel approach called independence-encouraging subsampling (IES) to select a subsample from big data for training additive models. Inspired by the minimax optimality of an orthogonal array (OA), which stems from its pairwise independent predictors and uniform coverage of the range of each predictor, the IES approach selects a subsample that approximates an OA to achieve this minimax optimality. Our asymptotic analyses demonstrate that an IES subsample converges to an OA and that the backfitting algorithm over the subsample converges to a unique solution even if the predictors are highly dependent in the full data. The proposed IES method is shown to be numerically appealing via simulations and a real data application. Theoretical proofs, R code, and supplementary numerical results are accessible online as supplementary materials.
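    As an illustration of the pairwise-balance idea, the sketch below greedily builds a subsample whose rank-discretized predictor pairs fill their grid of level combinations as evenly as possible, mimicking the structure of an orthogonal array. The greedy rule, the name ies_subsample, and the level count q are assumptions made for illustration; they are not the paper's IES algorithm.

        import numpy as np
        from itertools import combinations

        def ies_subsample(X, k, q=4, seed=0):
            """Greedily pick k rows whose pairs of rank-discretized
            predictors cover the q-by-q grid of level combinations as
            evenly as possible (an orthogonal-array-like subsample)."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            # Rank-based discretization: level in {0, ..., q-1} per predictor.
            ranks = X.argsort(axis=0).argsort(axis=0)
            levels = (ranks * q) // n
            pairs = list(combinations(range(p), 2))
            counts = {pr: np.zeros((q, q)) for pr in pairs}  # pairwise tallies
            chosen, available = [], np.ones(n, dtype=bool)
            for _ in range(k):
                cand = np.flatnonzero(available)
                # A candidate scores low if its level pairs are still rare,
                # so picking the minimizer fills under-represented cells.
                score = np.zeros(cand.size)
                for a, b in pairs:
                    score += counts[(a, b)][levels[cand, a], levels[cand, b]]
                best = cand[np.argmin(score + 1e-9 * rng.random(cand.size))]
                for a, b in pairs:
                    counts[(a, b)][levels[best, a], levels[best, b]] += 1
                chosen.append(best)
                available[best] = False
            return np.array(chosen)

        # Example: 96 points from strongly dependent predictors
        rng = np.random.default_rng(1)
        Z = rng.normal(size=(5000, 3))
        X = Z @ np.array([[1.0, 0.8, 0.8], [0.0, 0.6, 0.5], [0.0, 0.0, 0.3]])
        subsample = X[ies_subsample(X, 96)]

    Rank-based levels make the marginal coverage uniform by construction; the greedy step then pushes the joint coverage of each predictor pair toward uniformity, the pairwise-independence property of an orthogonal array that the abstract credits for the method's minimax motivation.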