
    A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers

    This paper establishes a precise high-dimensional asymptotic theory for boosting on separable data, taking statistical and computational perspectives. We consider a high-dimensional setting where the number of features (weak learners) $p$ scales with the sample size $n$, in an overparametrized regime. Under a class of statistical models, we provide an exact analysis of the generalization error of boosting when the algorithm interpolates the training data and maximizes the empirical $\ell_1$-margin. Further, we explicitly pin down the relation between the boosting test error and the optimal Bayes error, as well as the proportion of active features at interpolation (with zero initialization). In turn, these precise characterizations answer certain questions raised in \cite{breiman1999prediction, schapire1998boosting} surrounding boosting, under assumed data generating processes. At the heart of our theory lies an in-depth study of the maximum-$\ell_1$-margin, which can be accurately described by a new system of non-linear equations; to analyze this margin, we rely on Gaussian comparison techniques and develop a novel uniform deviation argument. Our statistical and computational arguments can handle (1) any finite-rank spiked covariance model for the feature distribution and (2) variants of boosting corresponding to general $\ell_q$-geometry, $q \in [1, 2]$. As a final component, via the Lindeberg principle, we establish a universality result showcasing that the scaled $\ell_1$-margin (asymptotically) remains the same, whether the covariates used for boosting arise from a non-linear random feature model or an appropriately linearized model with matching moments.
    Comment: 68 pages, 4 figures
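
    For concreteness, the empirical $\ell_1$-margin at the heart of this analysis can be written, in standard notation adopted here for illustration (not quoted from the paper), for training pairs $(x_i, y_i)$ as
    \[
    \gamma_n \;=\; \max_{\|\theta\|_1 \le 1} \; \min_{1 \le i \le n} \; y_i \, x_i^\top \theta ,
    \]
    and the abstract's statement that boosting "maximizes the empirical $\ell_1$-margin" refers to the interpolating solution attaining this value; the separable regime considered above is exactly the one where $\gamma_n > 0$.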

    Tight bounds for maximum $\ell_1$-margin classifiers

    Popular iterative algorithms such as boosting methods and coordinate descent on linear models converge to the maximum $\ell_1$-margin classifier, a.k.a. the sparse hard-margin SVM, in high-dimensional regimes where the data is linearly separable. Previous works consistently show that many estimators relying on the $\ell_1$-norm achieve improved statistical rates for hard sparse ground truths. We show that, surprisingly, this adaptivity does not apply to the maximum $\ell_1$-margin classifier in a standard discriminative setting. In particular, for the noiseless setting, we prove tight upper and lower bounds for the prediction error that match existing rates of order $\frac{\|w^*\|_1^{2/3}}{n^{1/3}}$ for general ground truths. To complete the picture, we show that when interpolating noisy observations, the error vanishes at a rate of order $\frac{1}{\sqrt{\log(d/n)}}$. We are therefore the first to show benign overfitting for the maximum $\ell_1$-margin classifier.
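
    As a point of reference, on linearly separable data the maximum $\ell_1$-margin classifier referenced above is equivalent to the following minimum-$\ell_1$-norm hard-margin program (a standard reformulation, stated in our own notation rather than the paper's):
    \[
    \hat{w} \;\in\; \arg\min_{w \in \mathbb{R}^d} \; \|w\|_1 \quad \text{subject to} \quad y_i \, x_i^\top w \ge 1, \quad i = 1, \dots, n,
    \]
    which is why it is also called the sparse hard-margin SVM.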

    Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation

    Full text link
    The growing literature on "benign overfitting" in overparameterized models has been mostly restricted to regression or binary classification settings; however, most success stories of modern machine learning have been recorded in multiclass settings. Motivated by this discrepancy, we study benign overfitting in multiclass linear classification. Specifically, we consider the following popular training algorithms on separable data: (i) empirical risk minimization (ERM) with cross-entropy loss, which converges to the multiclass support vector machine (SVM) solution; (ii) ERM with least-squares loss, which converges to the min-norm interpolating (MNI) solution; and (iii) the one-vs-all SVM classifier. First, we provide a simple sufficient condition under which all three algorithms lead to classifiers that interpolate the training data and have equal accuracy. When the data is generated from Gaussian mixtures or a multinomial logistic model, this condition holds under high enough effective overparameterization. Second, we derive novel error bounds on the accuracy of the MNI classifier, thereby showing that all three training algorithms lead to benign overfitting under sufficient overparameterization. Ultimately, our analysis shows that good generalization is possible for SVM solutions beyond the realm in which typical margin-based bounds apply.
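
    To make the min-norm interpolating (MNI) solution in item (ii) above concrete, here is a minimal Python sketch, assuming a synthetic overparameterized Gaussian-mixture dataset with one-hot labels (the dimensions and data-generating choices below are ours, for illustration only):

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, k = 40, 400, 3                         # samples, features (d >> n), classes
        means = rng.normal(size=(k, d))              # class means of a Gaussian mixture
        labels = rng.integers(0, k, size=n)
        X = means[labels] + rng.normal(size=(n, d))  # n x d design matrix
        Y = np.eye(k)[labels]                        # n x k one-hot label matrix

        # Min-norm interpolating solution: W = pinv(X) @ Y, the limit of gradient
        # descent on the least-squares loss from zero initialization.
        W = np.linalg.pinv(X) @ Y

        print("interpolates labels:", np.allclose(X @ W, Y, atol=1e-8))
        print("train accuracy:", np.mean((X @ W).argmax(axis=1) == labels))

    The sketch only illustrates that exact interpolation of one-hot labels is easy to attain when $d$ is much larger than $n$; the results above concern when such interpolating solutions also generalize well.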