A wide range of machine learning algorithms iteratively add data to the
training sample. Examples include semi-supervised learning, active learning,
multi-armed bandits, and Bayesian optimization. We embed this kind of data
addition into decision theory by framing data selection as a decision problem.
This paves the way for finding Bayes-optimal selections of data. For the
illustrative case of self-training in semi-supervised learning, we derive the
respective Bayes criterion. We further show empirically that deploying this
criterion mitigates confirmation bias, assessing our method with generalized
linear models, semi-parametric generalized additive models, and Bayesian
neural networks on simulated and real-world data.
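The paper's exact criterion is not reproduced here. As a rough illustration of the decision-theoretic framing, the Python sketch below casts one self-training step as an expected-utility maximization: each candidate in the unlabeled pool is scored by the posterior predictive probability of its pseudo-label (an expected 0-1 utility), and the maximizer is added to the training sample. The toy grid-posterior logistic model and the names log_posterior and posterior_predictive are illustrative assumptions, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy 1-D Bayesian logistic regression with a grid posterior over the
    # weight w (an assumption for illustration, not the paper's model).
    w_grid = np.linspace(-5, 5, 501)

    def log_posterior(w, X, y):
        # Standard-normal prior plus Bernoulli log-likelihood.
        p = sigmoid(w * X)
        ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        return ll - 0.5 * w ** 2

    def posterior_predictive(x, X, y):
        # p(y=1 | x, D) = sum over the grid of sigmoid(w x) p(w | D).
        logp = np.array([log_posterior(w, X, y) for w in w_grid])
        post = np.exp(logp - logp.max())
        post /= post.sum()
        return np.sum(sigmoid(w_grid * x) * post)

    # Labeled seed set and an unlabeled pool (simulated).
    X_lab = np.array([-2.0, -1.0, 1.5, 2.5])
    y_lab = np.array([0, 0, 1, 1])
    X_pool = list(rng.uniform(-3, 3, size=10))

    # Self-training loop: pseudo-label every candidate and add the one whose
    # pseudo-label has maximal expected 0-1 utility under the posterior
    # predictive, i.e., maximal predictive confidence.
    for _ in range(3):
        scores = [posterior_predictive(x, X_lab, y_lab) for x in X_pool]
        utils = [max(s, 1 - s) for s in scores]   # expected utility of best label
        best = int(np.argmax(utils))
        x_new = X_pool.pop(best)
        y_new = int(scores[best] >= 0.5)          # pseudo-label
        X_lab = np.append(X_lab, x_new)
        y_lab = np.append(y_lab, y_new)
        print(f"added x={x_new:+.2f}, pseudo-label {y_new}, utility {utils[best]:.3f}")

Under a different utility (e.g., one rewarding expected generalization gain rather than confidence), the same loop would select different points; the criterion derived in the paper corresponds to one such Bayes-optimal choice.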