Constraining generalisation in artificial language learning: children are rational too
Abstract
Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. Examples occur throughout language. For instance, English native speakers know that certain noun-adjective combinations are impermissible (e.g. strong winds, high winds, strong breezes, but *high breezes). Another example is the restrictions imposed by verb subcategorization (e.g. I gave/sent/threw the ball to him; I gave/sent/threw him the ball; I donated/carried/pushed the ball to him; *I donated/carried/pushed him the ball). Such lexical exceptions have been considered problematic for acquisition: if learners generalize abstract patterns to new words, how do they learn that certain specific combinations are restricted (Baker, 1979)?
Some researchers have proposed domain-specific learning procedures (e.g. Pinker, 1989, resolves verb subcategorization in terms of subtle semantic distinctions). An alternative approach holds that learners are sensitive to distributional statistics and use this information to infer when generalization is appropriate (Braine, 1971).
A series of artificial language learning experiments has demonstrated that adult learners can use statistical information in a rational manner when determining constraints on verb argument-structure generalization (Wonnacott, Newport & Tanenhaus, 2008). The current work extends these findings to children in a different linguistic domain (learning relationships between nouns and particles). We also demonstrate computationally that these results are consistent with the predictions of a domain-general hierarchical Bayesian model (cf. Kemp, Perfors & Tenenbaum, 2007).
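To make the hierarchical Bayesian idea concrete, the following is a minimal illustrative sketch (not the authors' actual model) of how a learner could infer, from distributional statistics alone, whether a language licenses generalization or demands lexical conservatism. It assumes a hypothetical beta-binomial hierarchy in the spirit of Kemp, Perfors & Tenenbaum's overhypotheses: each word takes construction A with probability theta drawn from Beta(alpha*mu, alpha*(1-mu)), and the learner infers the overhypothesis (mu, alpha) by grid approximation. All function names, grids, and counts here are invented for illustration.

```python
from math import lgamma, exp

def betaln(a, b):
    # log of the Beta function, via log-gamma (stdlib only)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# Hypothetical grids over the overhypothesis parameters:
# mu    = expected proportion of construction A across the lexicon
# alpha = how consistent individual words are (low alpha -> each word
#         is near-deterministic; high alpha -> every word is variable)
MUS = [i / 20 for i in range(1, 20)]                 # 0.05 .. 0.95
ALPHAS = [0.1 * 10 ** (i / 4) for i in range(13)]    # 0.1 .. 100

def overhypothesis_posterior(counts, mus=MUS, alphas=ALPHAS):
    """Grid posterior over (mu, alpha) from per-word (n_A, n_B) counts,
    with a uniform prior over the grid."""
    log_post = {}
    for mu in mus:
        for a in alphas:
            a1, a0 = a * mu, a * (1 - mu)
            ll = 0.0
            for nA, nB in counts:
                # beta-binomial marginal likelihood for one word
                ll += betaln(a1 + nA, a0 + nB) - betaln(a1, a0)
            log_post[(mu, a)] = ll
    m = max(log_post.values())
    post = {k: exp(v - m) for k, v in log_post.items()}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

def predict_novel(post, nA=1, nB=0):
    """P(next use is construction A) for a novel word observed nA/nB
    times, averaging over the learner's overhypothesis posterior."""
    num = den = 0.0
    for (mu, a), w in post.items():
        a1, a0 = a * mu, a * (1 - mu)
        like = exp(betaln(a1 + nA, a0 + nB) - betaln(a1, a0))
        num += w * like * (a1 + nA) / (a + nA + nB)
        den += w * like
    return num / den
```

With a "lexicalist" input language (each word always appears in one construction, e.g. counts like (5, 0) or (0, 5)) the posterior favors low alpha, so a single observation of a novel word licenses a strong word-specific inference; with a variable language (counts like (3, 2)) the same single observation supports much weaker conclusions, mirroring the rational pattern of generalization described above.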