Gradient-based learning algorithms have an implicit simplicity bias which, in effect, can limit the diversity of predictors sampled by the learning procedure. This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features, present in the training data but absent from the test data, and (ii) leveraging only a small subset of predictive features. Such an effect is especially magnified
when the test distribution does not exactly match the training distribution, a setting referred to as the out-of-distribution (OOD) generalization problem. However, given only the training data, it is not always possible to assess a priori whether a given feature is spurious or transferable. Instead, we advocate for learning an
ensemble of models which capture a diverse set of predictive features. Towards
this, we propose a new algorithm, D-BAT (Diversity-By-disAgreement Training),
which enforces agreement among the models on the training data, but
disagreement on the OOD data. We show how D-BAT naturally emerges from the notion of generalized discrepancy, and demonstrate in multiple experiments that the proposed method can mitigate shortcut learning, improve uncertainty estimation and OOD detection, and enhance transferability.
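
To make the agree-on-train / disagree-on-OOD objective concrete, the following PyTorch-style sketch shows one update for a newly added ensemble member. The function name, the exact form of the disagreement term, and the weighting hyperparameter alpha are illustrative assumptions, not the paper's precise formulation.

    import torch
    import torch.nn.functional as F

    def dbat_step(model_new, frozen_models, x_train, y_train, x_ood, alpha=1.0):
        """One loss computation for a new ensemble member (illustrative sketch).

        model_new:     the member currently being trained
        frozen_models: previously trained members, kept fixed
        x_ood:         unlabeled inputs from the OOD / target distribution
        alpha:         disagreement weight (assumed hyperparameter)
        """
        # (i) Agreement with the labels on the training distribution.
        task_loss = F.cross_entropy(model_new(x_train), y_train)

        # (ii) Disagreement with earlier members on OOD inputs: penalize the
        # new model for placing probability mass on the same classes.
        p_new = torch.softmax(model_new(x_ood), dim=-1)
        dis_loss = 0.0
        for m in frozen_models:
            with torch.no_grad():
                p_old = torch.softmax(m(x_ood), dim=-1)
            # Probability that the two members predict the same class,
            # treating their predictions as independent draws.
            agreement = (p_new * p_old).sum(dim=-1)
            dis_loss = dis_loss - torch.log(1.0 - agreement + 1e-8).mean()

        return task_loss + alpha * dis_loss

Minimizing -log(1 - agreement) pushes the co-prediction probability on OOD inputs toward zero, while the cross-entropy term keeps the new member consistent with the labels (and thus with earlier members) on the training distribution.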