We study the problem of learning from multiple untrusted data sources, a
scenario of increasing practical relevance given the recent emergence of
crowdsourcing and collaborative learning paradigms. Specifically, we analyze
the situation in which a learning system obtains datasets from multiple
sources, some of which might be biased or even adversarially perturbed. It is
known that in the single-source case, an adversary with the power to corrupt a
fixed fraction of the training data can prevent PAC-learnability: even
in the limit of infinite training data, no learning system can approach
the optimal test error. In this work we show that, surprisingly, the same is
not true in the multi-source setting, where the adversary can arbitrarily
corrupt a fixed fraction of the data sources. Our main results are a
generalization bound that provides finite-sample guarantees for this learning
setting, as well as corresponding lower bounds. Besides establishing
PAC-learnability, our results also show that in a cooperative learning setting,
sharing data with other parties has provable benefits, even if some
participants are malicious.

Comment: International Conference on Machine Learning (ICML) 2020:
Camera-ready. Strengthened the definition of adversarial PAC-learnability,
added explicit bounds on sample complexity.
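To give intuition for why a corrupted fraction of *sources* is more benign than the same corrupted fraction of *pooled data*, here is a minimal sketch (not the paper's algorithm) of a toy mean-estimation task: naively pooling all samples is dragged arbitrarily far by the adversary, while aggregating at the source level (median of per-source means, a standard robust-statistics device) recovers the true mean as long as corrupted sources remain a minority. All names and parameters here are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

def make_sources(n_sources, n_samples, true_mean, corrupt_frac, corrupt_value):
    """Simulate datasets from multiple sources; a fixed fraction is adversarial."""
    n_bad = int(corrupt_frac * n_sources)
    sources = []
    for i in range(n_sources):
        if i < n_bad:
            # Adversarial source: reports arbitrary, far-off values.
            sources.append([corrupt_value] * n_samples)
        else:
            # Honest source: i.i.d. samples around the true mean.
            sources.append([random.gauss(true_mean, 1.0) for _ in range(n_samples)])
    return sources

sources = make_sources(n_sources=50, n_samples=200, true_mean=3.0,
                       corrupt_frac=0.2, corrupt_value=100.0)

# Naive pooling: a fixed fraction of corrupted *data* biases the estimate
# by an amount the adversary controls, no matter how much data arrives.
pooled_mean = statistics.fmean(x for s in sources for x in s)

# Source-level aggregation: the median of per-source means tolerates any
# minority of corrupted *sources*, illustrating the multi-source advantage.
robust_est = statistics.median(statistics.fmean(s) for s in sources)

print(f"pooled mean:     {pooled_mean:.2f}")   # pulled far from 3.0
print(f"robust estimate: {robust_est:.2f}")    # close to 3.0
```

The design point mirrors the abstract's message: the adversary's power is bounded per source, so estimators that first summarize each source and then aggregate robustly can neutralize a malicious minority, whereas pooling erases the source boundaries the guarantee relies on.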