Modeling Substitution Errors in Spanish Morphology Learning
In early stages of language acquisition, children often make inflectional errors on regular verbs, e.g., Spanish-speaking children produce –a (present-tense 3rd person singular) when other inflections are expected. Most previous models of morphology learning have focused on later stages of learning relating to the production of irregular verbs. We propose a computational model of Spanish inflection learning to examine the earlier stages of learning and present a novel data set of gold-standard inflectional annotations for Spanish verbs. Our model replicates data from Spanish-learning children, capturing the acquisition order of different inflections and correctly predicting the substitution errors they make. Analyses show that the learning trajectory can be explained as a result of the gradual acquisition of inflection-meaning associations. Ours is the first computational model to provide an explanation for this acquisition trajectory in Spanish, and represents a theoretical advance more generally in explaining substitution errors in early morphology learning.
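The idea of gradually acquired inflection-meaning associations can be illustrated with a toy sketch (this is a hypothetical illustration, not the paper's actual model; the inflections, meaning features, and counts are invented). Early on, a high-frequency inflection like –a accumulates strong associations across many meaning features, so it can outscore the correct inflection for a less frequent meaning, producing a substitution error:

```python
# Hypothetical sketch of inflection-meaning association learning.
# Not the paper's model: features, inflections, and counts are illustrative.
from collections import defaultdict

# counts[feature][inflection] = co-occurrence count observed so far
counts = defaultdict(lambda: defaultdict(int))

def observe(inflection, meaning_features):
    """Update associations from one verb form heard in context."""
    for f in meaning_features:
        counts[f][inflection] += 1

def produce(meaning_features):
    """Pick the inflection with the strongest total association
    to the intended meaning features."""
    scores = defaultdict(int)
    for f in meaning_features:
        for infl, c in counts[f].items():
            scores[infl] += c
    return max(scores, key=scores.get) if scores else None

# Input skewed toward 3rd-person-singular present forms, as in
# child-directed Spanish.
for _ in range(8):
    observe("-a", {"present", "3sg"})
observe("-o", {"present", "1sg"})

# Intended meaning is 1st-person-singular present, but the frequent
# -a association via "present" swamps the weaker -o association,
# yielding a substitution error.
print(produce({"present", "1sg"}))  # → -a
```

With more exposure to -o in 1sg contexts, the correct inflection eventually wins, mirroring the gradual trajectory described in the abstract.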
Language Models as Informative Goal Priors in a Bayesian Theory of Mind
Bayesian models of theory of mind (ToM) have been successful in explaining how humans infer goals from the actions of other agents. However, they have typically been limited to small and fixed sets of possible goals specified in advance by the modeler, leaving open the question of how spontaneous goal inference occurs in rich and complex environments. To address this question, we posit that people are guided by informative, context-specific goal priors and proposals when making inferences about others. As proxies for these informed priors, we make use of context-conditioned large language models (LLMs) and integrate them into a Bayesian inverse planning framework. We find that LLMs can serve as usefully informative priors and proposals over goals compared to a structural baseline prior, allowing them to be used as models of the learned statistical knowledge that humans bring to bear in their inferences about others' goals.
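The core combination described here, an informed prior over goals multiplied by an action likelihood, can be sketched in a few lines (a minimal hypothetical illustration, not the paper's implementation; the goal names, probabilities, and the stubbed LLM prior are all invented, and a real system would query a context-conditioned LLM for the prior):

```python
# Hypothetical sketch of Bayesian goal inference with an informed prior.
# The LLM prior is stubbed with fixed numbers; all values are illustrative.

def llm_goal_prior(context):
    """Stand-in for a context-conditioned LLM scoring candidate goals."""
    return {"get_key": 0.6, "open_door": 0.3, "wander": 0.1}

def likelihood(actions, goal):
    """Toy action likelihood: how consistent each observed action is
    with pursuing the goal (a planner would supply this in practice)."""
    consistency = {"get_key": 0.8, "open_door": 0.5, "wander": 0.2}
    return consistency[goal] ** len(actions)

def posterior(actions, context):
    """posterior(goal) ∝ prior(goal | context) * likelihood(actions | goal)"""
    prior = llm_goal_prior(context)
    unnorm = {g: prior[g] * likelihood(actions, g) for g in prior}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

post = posterior(["move_right", "move_right"],
                 "agent in a grid world near a key")
print(max(post, key=post.get))  # → get_key
```

The informed prior concentrates inference on contextually plausible goals, which is what lets the framework scale beyond a small, hand-specified goal set.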