Masked pre-training removes random input dimensions and learns a model that
can predict the missing values. Empirical results indicate that this intuitive
form of self-supervised learning yields models that generalize very well to new
domains. A theoretical understanding is, however, lacking. This paper shows
that masked pre-training with a suitable cumulative scoring function
corresponds to maximizing the model's marginal likelihood, which is de facto
the Bayesian model selection measure of generalization. Beyond shedding light
on the success of masked pre-training, this insight also suggests that Bayesian
models can be trained with appropriately designed self-supervision.
Empirically, we confirm the developed theory and explore the main learning
principles of masked pre-training in large language models.
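
As a minimal illustration of the claimed correspondence (a sketch, not the paper's exact derivation or scoring function), the chain rule factorizes the log marginal likelihood of data $x_{1:N}$ under a model $\mathcal{M}$ into a sum of log predictive scores for each point given the previously revealed ones; the ordering $\sigma$ and this notation are assumed here for exposition:
\[
\log p(x_{1:N} \mid \mathcal{M})
  = \sum_{i=1}^{N} \log p\!\left(x_{\sigma(i)} \mid x_{\sigma(1)}, \dots, x_{\sigma(i-1)}, \mathcal{M}\right)
  \quad \text{for any ordering } \sigma .
\]
Each summand scores the prediction of a held-out ("masked") element from the unmasked ones, so accumulating such masked-prediction scores over all mask sizes, in expectation over random orderings, targets the same quantity as the marginal likelihood.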