150 research outputs found
A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
This paper establishes a precise high-dimensional asymptotic theory for
boosting on separable data, taking statistical and computational perspectives.
We consider a high-dimensional setting where the number of features (weak
learners) $p$ scales with the sample size $n$, in an overparametrized regime.
Under a class of statistical models, we provide an exact analysis of the
generalization error of boosting when the algorithm interpolates the training
data and maximizes the empirical $\ell_1$-margin. Further, we explicitly pin
down the relation between the boosting test error and the optimal Bayes error,
as well as the proportion of active features at interpolation (with zero
initialization). In turn, these precise characterizations answer certain
questions raised in \cite{breiman1999prediction, schapire1998boosting}
surrounding boosting, under assumed data generating processes. At the heart of
our theory lies an in-depth study of the maximum-$\ell_1$-margin, which can be
accurately described by a new system of non-linear equations; to analyze this
margin, we rely on Gaussian comparison techniques and develop a novel uniform
deviation argument. Our statistical and computational arguments can handle (1)
any finite-rank spiked covariance model for the feature distribution and (2)
variants of boosting corresponding to general $\ell_q$-geometry, $q \in [1, 2]$. As a final component, via the Lindeberg principle, we establish a
universality result showcasing that the scaled $\ell_1$-margin (asymptotically)
remains the same, whether the covariates used for boosting arise from a
non-linear random feature model or an appropriately linearized model with
matching moments. Comment: 68 pages, 4 figures
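The maximum-$\ell_1$-margin classifier that boosting converges to can be computed directly as a linear program: maximize the margin $\gamma$ subject to $y_i \langle x_i, w \rangle \ge \gamma$ and $\|w\|_1 \le 1$. A minimal sketch follows; the Gaussian design and sparse ground truth are illustrative assumptions, not the spiked-covariance model analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def max_l1_margin(X, y):
    """Max-l1-margin classifier: maximize gamma s.t. y_i <x_i, w> >= gamma,
    ||w||_1 <= 1. Cast as an LP in (w_plus, w_minus, gamma), w = w_plus - w_minus."""
    n, p = X.shape
    c = np.zeros(2 * p + 1)
    c[-1] = -1.0                               # minimize -gamma <=> maximize gamma
    # Margin constraints: -y_i x_i^T w_plus + y_i x_i^T w_minus + gamma <= 0.
    Yx = y[:, None] * X
    A_margin = np.hstack([-Yx, Yx, np.ones((n, 1))])
    # l1 budget: sum(w_plus) + sum(w_minus) <= 1.
    A_l1 = np.hstack([np.ones((1, 2 * p)), np.zeros((1, 1))])
    A_ub = np.vstack([A_margin, A_l1])
    b_ub = np.concatenate([np.zeros(n), [1.0]])
    bounds = [(0, None)] * (2 * p) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p] - res.x[p:2 * p], res.x[-1]

rng = np.random.default_rng(0)
n, p = 40, 200                                 # overparametrized: p >> n, separable a.s.
X = rng.standard_normal((n, p))
w_star = np.zeros(p); w_star[:3] = 1.0         # hypothetical sparse ground truth
y = np.sign(X @ w_star)
w_hat, gamma = max_l1_margin(X, y)
print(gamma > 0, np.all(y * (X @ w_hat) >= gamma - 1e-6))   # margin positive, data interpolated
```

Separability holds almost surely here because $p > n$, so the LP attains a strictly positive margin and every training point is classified correctly, mirroring the interpolation regime the abstract studies.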
Tight bounds for maximum $\ell_1$-margin classifiers
Popular iterative algorithms such as boosting methods and coordinate descent
on linear models converge to the maximum $\ell_1$-margin classifier, a.k.a.
sparse hard-margin SVM, in high dimensional regimes where the data is linearly
separable. Previous works consistently show that many estimators relying on the
$\ell_1$-norm achieve improved statistical rates for hard sparse ground truths.
We show that, surprisingly, this adaptivity does not apply to the maximum
$\ell_1$-margin classifier for a standard discriminative setting. In
particular, for the noiseless setting, we prove tight upper and lower bounds
for the prediction error that match existing rates of order
$\|w^*\|_1^{2/3} / n^{1/3}$ for general ground truths. To complete the
picture, we show that when interpolating noisy observations, the error vanishes
at a rate of order $1/\sqrt{\log(d/n)}$. We are therefore the first to show
benign overfitting for the maximum $\ell_1$-margin classifier.
Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation
The growing literature on "benign overfitting" in overparameterized models
has been mostly restricted to regression or binary classification settings;
however, most success stories of modern machine learning have been recorded in
multiclass settings. Motivated by this discrepancy, we study benign overfitting
in multiclass linear classification. Specifically, we consider the following
popular training algorithms on separable data: (i) empirical risk minimization
(ERM) with cross-entropy loss, which converges to the multiclass support vector
machine (SVM) solution; (ii) ERM with least-squares loss, which converges to
the min-norm interpolating (MNI) solution; and, (iii) the one-vs-all SVM
classifier. First, we provide a simple sufficient condition under which all
three algorithms lead to classifiers that interpolate the training data and
have equal accuracy. When the data is generated from Gaussian mixtures or a
multinomial logistic model, this condition holds under high enough effective
overparameterization. Second, we derive novel error bounds on the accuracy of
the MNI classifier, thereby showing that all three training algorithms lead to
benign overfitting under sufficient overparameterization. Ultimately, our
analysis shows that good generalization is possible for SVM solutions beyond
the realm in which typical margin-based bounds apply.
Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria
Covariate distribution shifts and adversarial perturbations present robustness challenges to the conventional statistical learning framework: mild shifts in the test covariate distribution can significantly affect the performance of the statistical model learned based on the training distribution. The model performance typically deteriorates when extrapolation happens: namely, covariates shift to a region where the training distribution is scarce, and naturally, the learned model has little information. For robustness and regularization considerations, adversarial perturbation techniques are proposed as a remedy; however, careful study is needed of which extrapolation region an adversarial covariate shift will focus on, given a learned model. This paper precisely characterizes the extrapolation region, examining both regression and classification in an infinite-dimensional setting. We study the implications of adversarial covariate shifts for subsequent learning of the equilibrium---the Bayes optimal model---in a sequential game framework. We exploit the dynamics of the adversarial learning game and reveal the curious effects of the covariate shift on equilibrium learning and experimental design. In particular, we establish two directional convergence results that exhibit distinctive phenomena: (1) a blessing in regression, the adversarial covariate shifts at an exponential rate to an optimal experimental design for rapid subsequent learning; (2) a curse in classification, the adversarial covariate shifts at a subquadratic rate to the hardest experimental design trapping subsequent learning.
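One round of the sequential game described above can be sketched in a toy finite-dimensional form; the ground truth `f_star`, the polynomial learner, and the candidate grid are illustrative assumptions, far from the paper's infinite-dimensional analysis. The adversary shifts covariates into the extrapolation region where the current model errs most, and the learner refits on the shifted distribution.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(2)
f_star = np.sin                          # hypothetical stand-in for the ground truth
grid = np.linspace(-3.0, 3.0, 601)       # candidate covariate region

def learner(xs, ys, deg=5):
    """Learner's move: ERM, here a least-squares polynomial fit."""
    return Polynomial.fit(xs, ys, deg)

xs = rng.uniform(-1.0, 1.0, 15)          # training covariates concentrated in [-1, 1]
ys = f_star(xs)
model = learner(xs, ys)

shifts = []
for _ in range(4):
    # Adversary's move: shift mass to where the current fit errs most --
    # the extrapolation region, scarce under the training distribution.
    err = (model(grid) - f_star(grid)) ** 2
    x_adv = grid[err.argmax()]
    shifts.append(x_adv)
    # Learner's move: refit on the shifted data.
    xs, ys = np.append(xs, x_adv), np.append(ys, f_star(x_adv))
    model = learner(xs, ys)

print(shifts[0])                         # the first shift lands outside [-1, 1]
```

The first adversarial covariate falls outside the training support, illustrating the abstract's point that the adversary concentrates on the extrapolation region where the learned model has little information.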
- …