    Optimal choice of k for k-nearest neighbor regression

    The k-nearest neighbor algorithm (k-NN) is a widely used non-parametric method for classification and regression. We study the mean squared error of the k-NN estimator when k is chosen by leave-one-out cross-validation (LOOCV). Although this choice of k was known to be asymptotically consistent, it was not previously known to be optimal. We show that, with high probability, the mean squared error of this estimator is close to the minimum mean squared error attainable by a k-NN estimate, where the minimum is taken over all choices of k.
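    The abstract does not spell out how LOOCV picks k, but the idea is standard: for each candidate k, predict every training point from its k nearest neighbors among the remaining points and keep the k with the smallest mean squared error. The sketch below is a minimal Python/NumPy illustration; the brute-force distance computation, the candidate range k_max, and the toy data are illustrative choices, not the paper's setup.

import numpy as np

def loocv_knn_mse(X, y, k):
    """Leave-one-out CV estimate of the MSE of k-NN regression.

    Each training point is predicted by averaging the responses of its
    k nearest neighbors among the remaining n - 1 points.
    """
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    np.fill_diagonal(d, np.inf)          # a point is never its own neighbor (the "leave one out")
    idx = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors of each point
    preds = y[idx].mean(axis=1)
    return ((preds - y) ** 2).mean()

def choose_k(X, y, k_max):
    """Return the k in {1, ..., k_max} minimizing the LOOCV error."""
    errors = {k: loocv_knn_mse(X, y, k) for k in range(1, k_max + 1)}
    return min(errors, key=errors.get)

# toy usage on noisy one-dimensional data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.1, size=200)
print(choose_k(X, y, k_max=50))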

    A Locally Adaptive Interpretable Regression

    Machine learning models with both good predictability and high interpretability are crucial for decision support systems. Linear regression is one of the most interpretable prediction models. However, the linearity of simple linear regression limits its predictive power. In this work, we introduce a locally adaptive interpretable regression (LoAIR). In LoAIR, a metamodel parameterized by neural networks predicts the percentiles of a Gaussian distribution over the regression coefficients, enabling rapid adaptation. Our experimental results on public benchmark datasets show that our model not only achieves predictive performance comparable to or better than the other state-of-the-art baselines but also discovers interesting relationships between input and target variables, such as a parabolic relationship between CO2 emissions and Gross National Product (GNP). LoAIR is therefore a step towards bridging the gap between econometrics, statistics, and machine learning by improving the predictive ability of linear regression without sacrificing its interpretability.
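    The abstract describes the metamodel only at a high level, so the following is a rough PyTorch sketch of what a locally adaptive linear model of this kind could look like: a small network maps each input to one percentile per coefficient, each percentile is pushed through the inverse CDF of a learnable Gaussian to obtain local coefficients beta(x), and the prediction is the usual linear form. The layer sizes, the class name LoAIRSketch, and the training snippet are assumptions for illustration, not the authors' architecture.

import math
import torch
import torch.nn as nn

class LoAIRSketch(nn.Module):
    """Rough sketch of a locally adaptive linear model (not the paper's architecture).

    A small neural metamodel maps an input x to one percentile per regression
    coefficient; each percentile is pushed through the inverse CDF of a
    learnable Gaussian, giving local coefficients beta(x), and the prediction
    is the linear form x . beta(x) + bias.
    """
    def __init__(self, d, hidden=32):
        super().__init__()
        self.metamodel = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, d), nn.Sigmoid(),   # percentiles in (0, 1), one per coefficient
        )
        self.mu = nn.Parameter(torch.zeros(d))         # Gaussian mean of each coefficient
        self.log_sigma = nn.Parameter(torch.zeros(d))  # log of the Gaussian scale
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        p = self.metamodel(x).clamp(1e-4, 1 - 1e-4)
        z = math.sqrt(2.0) * torch.erfinv(2 * p - 1)   # inverse standard normal CDF of p
        beta = self.mu + self.log_sigma.exp() * z      # locally adaptive coefficients beta(x)
        return (x * beta).sum(dim=-1) + self.bias

# illustrative fit step on toy data
model = LoAIRSketch(d=3)
x = torch.randn(64, 3)
y = x[:, 0] ** 2 + 0.1 * torch.randn(64)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()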

    Minimax Rate Optimal Adaptive Nearest Neighbor Classification and Regression

    The k nearest neighbor (kNN) method is a simple and popular statistical method for classification and regression. For both classification and regression problems, existing works have shown that, if the distribution of the feature vector has bounded support and the probability density function is bounded away from zero on its support, the convergence rate of the standard kNN method, in which k is the same for all test samples, is minimax optimal. In contrast, if the distribution has unbounded support, we show that there is a gap between the convergence rate achieved by the standard kNN method and the minimax bound. To close this gap, we propose an adaptive kNN method, in which a different k is selected for different samples. Our selection rule does not require precise knowledge of the underlying distribution of the features. The proposed method significantly outperforms the standard one. We characterize the convergence rate of the proposed adaptive method and show that it matches the minimax lower bound.
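    The abstract does not state the per-sample selection rule, so the sketch below uses an illustrative stand-in: k is chosen from the number of training points inside a fixed-radius ball around the query, so denser regions get a larger k and sparse tail regions a smaller one. The rule k = max(1, floor(A * m**q)), the constants, and the heavy-tailed toy data are assumptions, not necessarily the paper's exact procedure.

import numpy as np

def adaptive_knn_predict(X_train, y_train, X_test, radius=0.5, A=1.0, q=0.75):
    """Sketch of kNN regression with a per-query choice of k.

    For each test point, k is derived from the number m of training points
    that fall inside a fixed-radius ball around it, so k adapts to the local
    sample density without requiring knowledge of the feature distribution.
    """
    preds = np.empty(len(X_test))
    for i, x in enumerate(X_test):
        dist = np.linalg.norm(X_train - x, axis=1)
        m = int((dist <= radius).sum())          # local sample count around the query
        k = max(1, int(A * m ** q))              # illustrative stand-in selection rule
        k = min(k, len(X_train))
        nearest = np.argsort(dist)[:k]
        preds[i] = y_train[nearest].mean()
    return preds

# toy usage with a heavy-tailed (unbounded support) feature distribution
rng = np.random.default_rng(1)
X_train = rng.standard_t(df=3, size=(500, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=500)
X_test = np.linspace(-5, 5, 11)[:, None]
print(adaptive_knn_predict(X_train, y_train, X_test))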