How do learners acquire languages from the limited data available to them?
This process must involve some inductive biases - factors that affect how a
learner generalizes - but it is unclear which inductive biases can explain
observed patterns in language acquisition. To facilitate computational modeling
aimed at addressing this question, we introduce a framework for giving
particular linguistic inductive biases to a neural network model; such a model
can then be used to empirically explore the effects of those inductive biases.
This framework disentangles universal inductive biases, which are encoded in
the initial values of a neural network's parameters, from non-universal
factors, which the neural network must learn from data in a given language. The
initial state that encodes the inductive biases is found with meta-learning, a
technique through which a model discovers how to acquire new languages more
easily via exposure to many possible languages. By controlling the properties
of the languages that are used during meta-learning, we can control the
inductive biases that meta-learning imparts. We demonstrate this framework with
a case study based on syllable structure. First, we specify the inductive
biases that we intend to give our model, and then we translate those inductive
biases into a space of languages from which a model can meta-learn. Finally,
using existing analysis techniques, we verify that our approach has imparted
the linguistic inductive biases that it was intended to impart.

Comment: To appear in the Proceedings of the 42nd Annual Conference of the Cognitive Science Society.
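As an illustration of the idea summarized above, the sketch below shows one way a shared initialization could be meta-learned, using a Reptile-style first-order update (one common family of meta-learning algorithms; the paper's actual algorithm, model architecture, and language distribution may differ). The helper sample_language and its toy "languages" (random linear mappings) are placeholder assumptions introduced only to make the example self-contained and runnable.

import copy
import torch
import torch.nn as nn

def sample_language():
    # Placeholder "language": a random linear mapping from which batches are drawn.
    w = torch.randn(8, 8)
    def make_batch(n=32):
        x = torch.randn(n, 8)
        return x, x @ w.T
    return make_batch

# Shared initialization whose final values will encode the inductive biases.
model = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 8))
loss_fn = nn.MSELoss()
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for episode in range(1000):
    make_batch = sample_language()        # one sampled language per episode
    learner = copy.deepcopy(model)        # start from the current shared initialization
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):          # inner loop: acquire this particular language
        x, y = make_batch()
        opt.zero_grad()
        loss_fn(learner(x), y).backward()
        opt.step()
    with torch.no_grad():                 # outer loop: nudge the initialization
        for p, q in zip(model.parameters(), learner.parameters()):
            p += meta_lr * (q - p)        # toward the language-adapted weights

The design point mirrored here is that only the initialization is shared across episodes: whatever the inner loop learns is specific to one language, while the distribution of languages sampled during meta-learning determines the inductive biases encoded in that shared starting point.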