CLIMB: Curriculum Learning for Infant-inspired Model Building
We describe our team's contribution to the STRICT-SMALL track of the BabyLM
Challenge. The challenge requires training a language model from scratch using
only a relatively small training dataset of ten million words. We experiment
with three variants of cognitively-motivated curriculum learning and analyze
their effect on the performance of the model on linguistic evaluation tasks. In
the vocabulary curriculum, we analyze methods for constraining the vocabulary
in the early stages of training to simulate cognitively more plausible learning
curves. In the data curriculum experiments, we vary the order of the training
instances based on i) infant-inspired expectations and ii) the learning
behavior of the model. In the objective curriculum, we explore different
variations of combining the conventional masked language modeling task with a
more coarse-grained word class prediction task to reinforce linguistic
generalization capabilities. Our results did not yield consistent improvements
over our own non-curriculum-learning baseline across a range of linguistic
benchmarks; however, we do find marginal gains on select tasks. Our analysis
highlights key takeaways for the specific combinations of tasks and settings that
benefit from our proposed curricula. Moreover, we determine that careful
selection of the model architecture and training hyper-parameters yields
substantial improvements over the default baselines provided by the BabyLM
Challenge.
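
To make the vocabulary curriculum concrete, the sketch below shows one possible way to restrict a model's effective vocabulary early in training and expand it gradually. The function names, the linear growth schedule, and the assumption that token ids are ordered by corpus frequency are illustrative choices, not the implementation used in the paper.

```python
# Minimal sketch of a vocabulary curriculum (illustrative only, not the
# authors' implementation): token ids outside a growing "allowed" vocabulary
# are replaced with an <unk> id, so early training sees a restricted lexicon.

import torch


def allowed_vocab_size(step: int, total_steps: int, start: int = 2_000, full: int = 8_192) -> int:
    """Linearly grow the allowed vocabulary from `start` to `full` ids over training."""
    frac = min(step / max(total_steps, 1), 1.0)
    return int(start + frac * (full - start))


def apply_vocab_curriculum(input_ids: torch.Tensor, step: int, total_steps: int, unk_id: int = 0) -> torch.Tensor:
    """Map token ids above the current cutoff to `unk_id`.

    Assumes ids are ordered by corpus frequency (special/frequent tokens first),
    which is a common but not universal tokenizer convention.
    """
    cutoff = allowed_vocab_size(step, total_steps)
    return torch.where(input_ids < cutoff, input_ids, torch.full_like(input_ids, unk_id))


# Hypothetical usage inside a masked-language-modeling training loop:
# batch["input_ids"] = apply_vocab_curriculum(batch["input_ids"], step, total_steps)
```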