
    Variability, negative evidence, and the acquisition of verb argument constructions

    We present a hierarchical Bayesian framework for modeling the acquisition of verb argument constructions. It embodies a domain-general approach to learning higher-level knowledge in the form of inductive constraints (or overhypotheses), and has been used to explain other aspects of language development such as the shape bias in learning object names. Here, we demonstrate that the same model captures several phenomena in the acquisition of verb constructions. Our model, like adults in a series of artificial language learning experiments, makes inferences about the distributional statistics of verbs on several levels of abstraction simultaneously. It also produces the qualitative learning patterns displayed by children over the time course of acquisition. These results suggest that the patterns of generalization observed in both children and adults could emerge from basic assumptions about the nature of learning. They also provide an example of a broad class of computational approaches that can resolve Baker's Paradox.
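    The overhypothesis idea the abstract describes can be illustrated with a minimal sketch. The beta-binomial hierarchy, the toy verb counts, and the grid search below are all illustrative assumptions on my part, not the authors' actual model: each verb's preference between two competing constructions is drawn from a shared Beta prior, and the learner jointly infers the verb-level preferences and the higher-level prior (its mean and concentration) from usage counts.

```python
import math

def log_betabinom(k, n, a, b):
    """Log marginal likelihood of k successes in n trials under a Beta(a, b)
    prior on the success probability (beta-binomial distribution)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# Hypothetical input: per-verb counts of (uses in construction A, total uses).
# Two verbs strongly prefer A, two strongly prefer B.
verbs = [(9, 10), (8, 10), (1, 10), (0, 10)]

# Grid search over the overhypothesis: the mean mu and concentration c of the
# shared Beta(mu*c, (1-mu)*c) prior on each verb's construction preference.
best = None
for mu in [i / 20 for i in range(1, 20)]:
    for c in [0.5, 1, 2, 5, 10, 50]:
        a, b = mu * c, (1 - mu) * c
        ll = sum(log_betabinom(k, n, a, b) for k, n in verbs)
        if best is None or ll > best[0]:
            best = (ll, mu, c)

_, mu_hat, c_hat = best
print(mu_hat, c_hat)
```

    With bimodal data like this, the inferred concentration comes out low: verbs behave idiosyncratically, so the learner should respect lexical (verb-specific) constraints rather than overgeneralize. Uniform data would instead yield a high concentration, licensing generalization to novel verbs.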

    Higher order inference in verb argument structure acquisition

    Successful language learning combines generalization and the acquisition of lexical constraints. The conflict is particularly clear for verb argument structures, which may generalize to new verbs (John gorped the ball to Bill -> John gorped Bill the ball), yet resist generalization with certain lexical items (John carried the ball to Bill -> *John carried Bill the ball). The resulting learnability “paradox” (Baker, 1979) has received great attention in the acquisition literature. Wonnacott, Newport & Tanenhaus (2008) demonstrated that adult learners acquire both general and verb-specific patterns when acquiring an artificial language with two competing argument structures, and that these same constraints are reflected in real-time processing. The current work follows up and extends this program of research in two new experiments. We demonstrate that the results are consistent with a hierarchical Bayesian model, originally developed by Kemp, Perfors & Tenenbaum (2007) to capture the emergence of feature biases in word learning.

    Modeling Human Understanding of Complex Intentional Action with a Bayesian Nonparametric Subgoal Model

    Most human behaviors consist of multiple parts, steps, or subtasks. These structures guide our action planning and execution, but when we observe others, the latent structure of their actions is typically unobservable, and must be inferred in order to learn new skills by demonstration, or to assist others in completing their tasks. For example, an assistant who has learned the subgoal structure of a colleague's task can more rapidly recognize and support their actions as they unfold. Here we model how humans infer subgoals from observations of complex action sequences using a nonparametric Bayesian model, which assumes that observed actions are generated by approximately rational planning over unknown subgoal sequences. We test this model with a behavioral experiment in which humans observed different series of goal-directed actions, and inferred both the number and composition of the subgoal sequences associated with each goal. The Bayesian model predicts human subgoal inferences with high accuracy, and significantly better than several alternative models and straightforward heuristics. Motivated by this result, we simulate how learning and inference of subgoals can improve performance in an artificial user assistance task. The Bayesian model learns the correct subgoals from fewer observations, and better assists users by more rapidly and accurately inferring the goal of their actions than alternative approaches.
    Comment: Accepted at AAAI 1
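    The core inference described in the abstract, recovering latent subgoals from an action sequence under an assumption of approximately rational planning, can be sketched in a toy 1-D setting. The segmentation enumeration, the geometric complexity prior, and the step likelihood below are illustrative assumptions, not the paper's nonparametric model: each candidate segmentation treats the last state of each segment as a subgoal, and a rational agent is assumed to step toward its current subgoal with high probability.

```python
import math
from itertools import combinations

def segment_loglik(seq, p=0.9):
    """Log-likelihood that seq was produced by an agent heading to seq[-1]:
    each step toward that endpoint has probability p, each step away 1 - p."""
    goal = seq[-1]
    ll = 0.0
    for prev, cur in zip(seq, seq[1:]):
        toward = abs(cur - goal) < abs(prev - goal)
        ll += math.log(p if toward else 1 - p)
    return ll

def best_segmentation(traj, alpha=0.5):
    """Enumerate all segmentations of a short 1-D trajectory and return the
    MAP subgoal list under a geometric prior alpha**(num_segments) that
    penalizes overly complex subgoal structures."""
    n = len(traj)
    best = None
    for k in range(n - 1):  # number of internal breakpoints
        for cuts in combinations(range(1, n - 1), k):
            bounds = [0, *cuts, n - 1]
            segs = [traj[bounds[i]:bounds[i + 1] + 1]
                    for i in range(len(bounds) - 1)]
            score = (math.log(alpha) * len(segs)
                     + sum(segment_loglik(s) for s in segs))
            if best is None or score > best[0]:
                best = (score, [s[-1] for s in segs])
    return best[1]  # inferred subgoal states

# The agent walks right to position 5, then back left to 0: the MAP
# explanation is two subgoals (reach 5, then reach 0), not a single goal.
subgoals = best_segmentation([0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0])
print(subgoals)
```

    Exhaustive enumeration is only feasible for short trajectories; the point of the sketch is how the likelihood term rewards segmentations whose segments look goal-directed, while the prior discourages positing a new subgoal for every step.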

    Physical Primitive Decomposition

    Objects are made of parts, each with distinct geometry, physics, functionality, and affordances. Developing such a distributed, physical, interpretable representation of objects will help intelligent agents better explore and interact with the world. In this paper, we study physical primitive decomposition---understanding an object through its components, each with physical and geometric attributes. As annotated data for object parts and physics are rare, we propose a novel formulation that learns physical primitives by explaining both an object's appearance and its behaviors in physical events. Our model performs well on block towers and tools in both synthetic and real scenarios; we also demonstrate that visual and physical observations often provide complementary signals. We further present ablation and behavioral studies to better understand our model and contrast it with human performance.
    Comment: ECCV 2018. Project page: http://ppd.csail.mit.edu