Improved Generalization Bounds for Robust Learning
We consider a model of robust learning in an adversarial environment. The
learner receives uncorrupted training data, together with access to the
possible corruptions that the adversary may apply during testing. The
learner's goal is to build a robust classifier that will be evaluated on
future adversarial examples. We use a zero-sum game between the learner and
the adversary as our game-theoretic framework. The adversary is limited to
$k$ possible corruptions for each input.
Our model is closely related to the adversarial examples model of Schmidt et
al. (2018); Madry et al. (2017).
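To make the model concrete, here is a minimal sketch of the robust empirical loss it induces (our own Python illustration, not code from the paper; `corruptions` is a hypothetical helper and `h` any classifier callable): for each input, the adversary picks whichever of the $k$ allowed corruptions maximizes the learner's loss.

```python
def robust_loss(h, X, y, corruptions):
    """Empirical robust 0-1 loss: for each example, the adversary picks
    the corruption on which the classifier h errs, if any exists.

    corruptions(x) -> list of the k allowed corruptions of input x
    (by convention including x itself).
    """
    errors = sum(
        max(int(h(z) != label) for z in corruptions(x))
        for x, label in zip(X, y)
    )
    return errors / len(X)
```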
Our main results consist of generalization bounds for binary and multi-class
classification, as well as for the real-valued case (regression). For
the binary classification setting, we both tighten the generalization bound of
Feige, Mansour, and Schapire (2015) and handle an infinite hypothesis class
$\mathcal{H}$. The sample complexity is improved from
$\mathcal{O}\left(\frac{1}{\epsilon^4}\log\frac{|\mathcal{H}|}{\delta}\right)$
to
$\mathcal{O}\left(\frac{1}{\epsilon^2}\left(k\log(k)\,\mathrm{VC}(\mathcal{H})+\log\frac{1}{\delta}\right)\right)$.
Additionally, we extend the algorithm and generalization bound from the binary to the multi-class
and real-valued cases. Along the way, we obtain results on fat-shattering
dimension and Rademacher complexity of $k$-fold maxima over function classes;
these may be of independent interest.
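For readers outside this literature, the $k$-fold maximum of function classes referred to above is the following standard construction (stated in our own notation, which need not match the paper's):

```latex
% k-fold maximum of real-valued function classes F_1, ..., F_k
% (standard definition; the notation here is ours, not the paper's).
\[
  \max(\mathcal{F}_1, \ldots, \mathcal{F}_k)
  \;=\;
  \bigl\{\, x \mapsto \max_{1 \le j \le k} f_j(x)
      \;:\; f_1 \in \mathcal{F}_1, \ldots, f_k \in \mathcal{F}_k \,\bigr\}
\]
```

Since the adversary chooses among $k$ corruptions per input, the robust loss of a single function is a maximum over $k$ function values, which is presumably where the complexity of such maxima classes enters the bounds above.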
For binary classification, the algorithm of Feige et al. (2015) uses a regret
minimization algorithm and an ERM oracle as a black box; we adapt it for the
multi-class and regression settings. The algorithm provides us with
near-optimal policies for the players on a given training sample.
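As an illustration of this kind of reduction, below is a rough sketch of how a regret-minimization algorithm can be combined with an ERM oracle (our own Python sketch under assumed interfaces; `corruptions` and `erm_oracle` are hypothetical helpers, and this is not the paper's implementation): the adversary runs multiplicative weights over the $k$ corruptions of each training example, while the learner best-responds each round by calling the oracle on the corrupted sample.

```python
import numpy as np

def regret_min_with_erm(X, y, corruptions, erm_oracle, T=100, eta=0.5, seed=0):
    """Sketch of a regret-minimization / ERM-oracle reduction.

    corruptions(x) -> list of the k allowed corruptions of x (hypothetical).
    erm_oracle(Xc, y) -> classifier minimizing empirical error (black box).
    Returns a list of hypotheses; predict with a uniform mixture over them.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    k = len(corruptions(X[0]))
    W = np.ones((n, k))                       # adversary's MW weights
    hypotheses = []
    for _ in range(T):
        P = W / W.sum(axis=1, keepdims=True)  # adversary's mixed strategy
        # Sample one corruption per example from the adversary's strategy.
        Xc = [corruptions(X[i])[rng.choice(k, p=P[i])] for i in range(n)]
        # Learner best-responds on the corrupted sample via the ERM oracle.
        h = erm_oracle(Xc, y)
        hypotheses.append(h)
        # Adversary's reward is the learner's loss on each corruption.
        for i in range(n):
            losses = np.array([float(h(z) != y[i]) for z in corruptions(X[i])])
            W[i] *= np.exp(eta * losses)      # multiplicative-weights update
    return hypotheses
```

Averaging both players' strategies over the rounds is, at a high level, what yields the near-optimal policies on the training sample mentioned above.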
Comment: Appearing at the 30th International Conference on Algorithmic Learning Theory (ALT 2019).