Variability, negative evidence, and the acquisition of verb argument constructions
We present a hierarchical Bayesian framework for modeling the acquisition of verb argument constructions. It embodies a domain-general approach to learning higher-level knowledge in the form of inductive constraints (or overhypotheses), and has been used to explain other aspects of language development such as the shape bias in learning object names. Here, we demonstrate that the same model captures several phenomena in the acquisition of verb constructions. Our model, like adults in a series of artificial language learning experiments, makes inferences about the distributional statistics of verbs on several levels of abstraction simultaneously. It also produces the qualitative learning patterns displayed by children over the time course of acquisition. These results suggest that the patterns of generalization observed in both children and adults could emerge from basic assumptions about the nature of learning. They also provide an example of a broad class of computational approaches that can resolve Baker's Paradox.
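The model itself is not reproduced in the abstract, but the overhypothesis idea it describes can be sketched as a hierarchical Beta-Binomial: each verb has its own rate of appearing in construction A versus B, and those rates are tied together by class-level hyperparameters that the learner infers at the same time as the verb-specific rates. Below is a minimal sketch of that inference, assuming hypothetical corpus counts and grid ranges; none of these numbers come from the paper.

```python
import numpy as np
from scipy.stats import betabinom  # Beta-Binomial marginal likelihood

# Hypothetical counts: times each verb appeared in construction A vs. B.
# A language where all verbs alternate freely vs. one with lexical
# restrictions would produce very different count profiles here.
counts = {"verb1": (9, 1), "verb2": (8, 2), "verb3": (1, 9)}

# Grid over the overhypothesis: mu = overall bias toward construction A,
# alpha = how uniform verbs are (high alpha -> all verbs behave alike).
mus = np.linspace(0.01, 0.99, 99)
alphas = np.logspace(-1, 2, 60)

log_post = np.zeros((len(alphas), len(mus)))
for i, a in enumerate(alphas):
    for j, m in enumerate(mus):
        # Each verb's counts are Beta-Binomial given (alpha, mu); the
        # verb-specific rate theta_v is integrated out analytically.
        for n_A, n_B in counts.values():
            log_post[i, j] += betabinom.logpmf(
                n_A, n_A + n_B, a * m, a * (1 - m)
            )

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Predictive probability that a brand-new verb takes construction A:
# the posterior mean of mu -- generalization at the class level.
p_novel_A = (post * mus[None, :]).sum()
print(f"P(construction A | novel verb) = {p_novel_A:.3f}")
```

Because the verb-level rates are integrated out, the learner simultaneously tracks individual verbs (through their counts) and the verb class as a whole (through the posterior over alpha and mu), matching the multi-level inference the abstract describes.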
Higher order inference in verb argument structure acquisition
Successful language learning combines generalization and the acquisition of lexical constraints. The conflict is particularly clear for verb argument structures, which may generalize to new verbs (John gorped the ball to Bill -> John gorped Bill the ball), yet resist generalization with certain lexical items (John carried the ball to Bill -> *John carried Bill the ball). The resulting learnability “paradox” (Baker 1979) has received considerable attention in the acquisition literature. Wonnacott, Newport & Tanenhaus (2008) demonstrated that adult learners acquire both general and verb-specific patterns when acquiring an artificial language with two competing argument structures, and that these same constraints are reflected in real-time processing. The current work follows up and extends this program of research in two new experiments. We demonstrate that the results are consistent with a hierarchical Bayesian model, originally developed by Kemp, Perfors & Tenenbaum (2007) to capture the emergence of feature biases in word learning.
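Given hyperparameters inferred as in the sketch above, the same hierarchy shows how lexical conservatism and generalization coexist, which is the heart of the response to Baker's paradox: a verb's predictions smooth its own evidence toward the class-level rate, so heavily attested verbs resist generalization while novel verbs inherit it. A small sketch with hypothetical MAP values (the numbers are illustrative only, not from either paper):

```python
# Posterior-predictive sketch at two levels of abstraction, assuming
# hypothetical MAP hyperparameters (ALPHA, MU) from grid inference as
# sketched above.
ALPHA, MU = 2.0, 0.5

def p_construction_A(n_A, n_B, alpha=ALPHA, mu=MU):
    """P(next use of this verb takes construction A): a conjugate
    Beta-Binomial update that smooths verb-specific counts toward the
    class-level rate mu, with alpha setting the strength of the tie."""
    return (n_A + alpha * mu) / (n_A + n_B + alpha)

print(p_construction_A(0, 0))    # novel verb: class-level rate, 0.50
print(p_construction_A(3, 0))    # lightly attested: 0.80, still flexible
print(p_construction_A(30, 0))   # entrenched: ~0.97 -- construction B is
                                 # effectively ruled out, with no negative
                                 # evidence ever observed
```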
Modeling Human Understanding of Complex Intentional Action with a Bayesian Nonparametric Subgoal Model
Most human behaviors consist of multiple parts, steps, or subtasks. These structures guide our action planning and execution, but when we observe others, the latent structure of their actions is typically unobservable, and must be inferred in order to learn new skills by demonstration, or to assist others in completing their tasks. For example, an assistant who has learned the subgoal structure of a colleague's task can more rapidly recognize and support their actions as they unfold. Here we model how humans infer subgoals from observations of complex action sequences using a nonparametric Bayesian model, which assumes that observed actions are generated by approximately rational planning over unknown subgoal sequences. We test this model with a behavioral experiment in which humans observed different series of goal-directed actions, and inferred both the number and composition of the subgoal sequences associated with each goal. The Bayesian model predicts human subgoal inferences with high accuracy, and significantly better than several alternative models and straightforward heuristics. Motivated by this result, we simulate how learning and inference of subgoals can improve performance in an artificial user assistance task. The Bayesian model learns the correct subgoals from fewer observations, and better assists users by more rapidly and accurately inferring the goal of their actions than alternative approaches.

Comment: Accepted at AAAI 1
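The paper's model is not reproduced here, but its two ingredients, a prior over subgoal sequences and an approximately rational (softmax) action likelihood, can be sketched in a toy one-dimensional world. This is a minimal illustration rather than the authors' implementation: the trajectory, the geometric prior (a simple stand-in for the nonparametric prior over subgoal sequences), and the rationality parameter BETA are all hypothetical.

```python
import numpy as np

# Toy 1-D world: positions 0..5; actions are -1 (left) or +1 (right).
# Hypothetical observed trajectory: the agent walks right to 5, then
# doubles back and stops at 3 (so 3 is taken as the final goal).
states  = [0, 1, 2, 3, 4, 5, 4]
actions = [+1, +1, +1, +1, +1, -1, -1]

BETA   = 3.0   # softmax rationality: higher = closer to optimal
P_STOP = 0.5   # geometric prior on sequence length -- a stand-in for
               # the paper's nonparametric prior over subgoal sequences

def action_loglik(s, a, subgoal):
    """Log P(a | s, subgoal) for a Boltzmann-rational planner: actions
    that reduce distance to the current subgoal have higher utility."""
    utils = {d: -abs((s + d) - subgoal) for d in (-1, +1)}
    z = np.logaddexp(BETA * utils[-1], BETA * utils[+1])
    return BETA * utils[a] - z

def seq_logpost(subgoals, states, actions):
    """Unnormalized log posterior of one candidate subgoal sequence.
    The agent pursues subgoals in order, advancing on arrival."""
    lp = len(subgoals) * np.log(1 - P_STOP) + np.log(P_STOP)  # prior
    k = 0
    for s, a in zip(states, actions):
        if k < len(subgoals) - 1 and s == subgoals[k]:
            k += 1                      # subgoal reached; pursue the next
        lp += action_loglik(s, a, subgoals[k])
    return lp

# Candidate sequences all end at the final goal, position 3.
candidates = [(3,)] + [(m, 3) for m in range(1, 6) if m != 3]
scores = np.array([seq_logpost(c, states, actions) for c in candidates])
post = np.exp(scores - scores.max())
post /= post.sum()
for c, p in sorted(zip(candidates, post), key=lambda x: -x[1])[:3]:
    print(c, round(float(p), 3))
```

Because the detour past position 3 is only rational en route to an intermediate subgoal at 5, the sequence (5, 3) dominates the posterior despite the prior's preference for fewer subgoals, which is the qualitative trade-off the abstract describes.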
Language & Common Sense: Integrating across psychology, linguistics, and computer science
Understanding “almost”: Empirical and computational studies of near misses
When did something almost happen? In this paper, we investigate what brings counterfactual worlds close. In Experiments 1 and 2, we find that participants' judgments about whether something almost happened are determined by the causal proximity of the alternative outcome. Something almost happened when a small perturbation to the relevant causal event would have been sufficient to bring it about. In contrast to previous work that has argued that prior expectations are neglected when judging the closeness of counterfactual worlds (Kahneman & Varey, 1990), we show in Experiment 3 that participants are more likely to say something almost happened when they did not expect it. Both prior expectations and causal distance influence judgments of “almost”. In Experiment 4, we show how both causal proximity and beliefs about what would have happened in the absence of the cause jointly explain judgments of “almost caused” and “almost prevented”.
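The abstract's notion of causal proximity suggests a simple simulation-based reading: something almost happened if a small perturbation to the causal event would flip the outcome. The sketch below is a guess at that idea under stated assumptions, not the authors' model; the threshold, speeds, and noise scale are hypothetical, and it covers only the causal-proximity component, not the prior-expectation effect from Experiment 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical causal setup: a ball clears a wall iff its speed >= 10.
THRESHOLD = 10.0

def outcome(speed):
    """Did the ball clear the wall? (Works elementwise on arrays.)"""
    return speed >= THRESHOLD

def p_almost(speed, noise_sd=0.3, n=100_000):
    """Causal proximity: the probability that a small perturbation of
    the causal event (here, throw speed) would flip the outcome."""
    perturbed = speed + rng.normal(0.0, noise_sd, size=n)
    return float(np.mean(outcome(perturbed) != outcome(speed)))

print(f"speed 9.8 (near miss):  P(almost) = {p_almost(9.8):.3f}")
print(f"speed 7.0 (clear miss): P(almost) = {p_almost(7.0):.3f}")
```

A near-miss throw yields a substantial flip probability while a clear miss yields essentially zero, matching the intuition that “almost” tracks how little the causal event would need to change.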