12 research outputs found

    Learning a theory of causality

    The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned, an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion. James S. McDonnell Foundation (Causal Learning Collaborative Initiative); United States. Office of Naval Research (Grant N00014-09-0124); United States. Air Force Office of Scientific Research (Grant FA9550-07-1-0075); United States. Army Research Office (Grant W911NF-08-1-0242)
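The "blessing of abstraction" can be illustrated with a minimal hierarchical Bayesian sketch. This is a hypothetical toy, not the paper's logical theory language: two abstract hypotheses (H1: system parameters cluster high; H0: uniform) sit above several causal systems, and with only one observation per system the posterior over the abstract hypothesis is already sharper than the posterior over any single system's parameter.

```python
RATES = [0.1, 0.3, 0.5, 0.7, 0.9]

# Abstract hypotheses (invented for illustration): H1 says each system's
# rate clusters near 0.9; H0 says rates are uniform over the grid.
PRIOR_RATE = {
    "H1": {r: (0.7 if r == 0.9 else 0.075) for r in RATES},
    "H0": {r: 0.2 for r in RATES},
}
PRIOR_H = {"H1": 0.5, "H0": 0.5}

def system_likelihood(h, successes, trials):
    """P(one system's data | abstract hypothesis h), marginalizing its rate."""
    return sum(p * r ** successes * (1 - r) ** (trials - successes)
               for r, p in PRIOR_RATE[h].items())

def posterior_H(data):
    """Posterior over the abstract hypothesis given all systems' data."""
    joint = dict(PRIOR_H)
    for s, n in data:
        for h in joint:
            joint[h] *= system_likelihood(h, s, n)
    z = sum(joint.values())
    return {h: v / z for h, v in joint.items()}

def posterior_rate(datum, post_h):
    """Marginal posterior over one system's rate, mixing over hypotheses."""
    s, n = datum
    out = {r: 0.0 for r in RATES}
    for h, ph in post_h.items():
        lik = system_likelihood(h, s, n)
        for r in RATES:
            out[r] += ph * PRIOR_RATE[h][r] * r ** s * (1 - r) ** (n - s) / lik
    return out

data = [(1, 1)] * 5          # one observation per system: sparse evidence
post_h = posterior_H(data)
post_r = posterior_rate(data[0], post_h)
print(post_h["H1"], post_r[0.9])
```

Because evidence from all five systems pools at the abstract level, belief in H1 (about 0.88 here) outruns belief in any one system's specific rate (about 0.78), mirroring the abstract-before-specific dynamic described in the abstract.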

    Theory Acquisition as Stochastic Search

    We present an algorithmic model for the development of children's intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. Our algorithm performs stochastic search at two levels of abstraction, an outer loop in the space of theories and an inner loop in the space of explanations or models generated by each theory given a particular dataset, in order to discover the theory that best explains the observed data. We show that this model is capable of learning correct theories in several everyday domains, and discuss the dynamics of learning in the context of children's cognitive development. United States. Air Force Office of Scientific Research (AFOSR FA9550-07-1-0075); United States. Office of Naval Research (ONR N00014-09-0124); James S. McDonnell Foundation (Causal Learning Collaborative Initiative)

    Ten-month-old infants infer the value of goals from the costs of actions

    Infants understand that people pursue goals, but how do they learn which goals people prefer? We tested whether infants solve this problem by inverting a mental model of action planning, trading off the costs of acting against the rewards actions bring. After seeing an agent attain two goals equally often at varying costs, infants expected the agent to prefer the goal it attained through costlier actions. These expectations held across three experiments that conveyed cost through different physical path features (height, width, and incline angle), suggesting that an abstract variable, such as "force," "work," or "effort," supported infants' inferences. We modeled infants' expectations as Bayesian inferences over utility-theoretic calculations, providing a bridge to recent quantitative accounts of action understanding in older children and adults.
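The cost-reward inversion described here can be sketched in a few lines. This is a hedged toy, not the paper's model: the agent is assumed softmax-rational in utility = reward - cost, and the numbers (costs, noise parameter BETA, reward grid) are invented. Observing the agent choose goal A even when A is costlier shifts the posterior toward A having the higher reward.

```python
import math
from itertools import product

BETA = 2.0                  # assumed decision-noise parameter
REWARDS = range(6)          # grid of candidate reward values (assumption)

def p_choose_a(r_a, r_b, cost_a, cost_b):
    """Softmax choice probability for goal A under utility = reward - cost."""
    ua, ub = BETA * (r_a - cost_a), BETA * (r_b - cost_b)
    m = max(ua, ub)
    return math.exp(ua - m) / (math.exp(ua - m) + math.exp(ub - m))

# Observations: the agent repeatedly picks goal A even when A costs more.
observations = [(3.0, 1.0, "A"), (3.0, 1.0, "A"), (2.0, 0.0, "A")]

posterior = {}
for r_a, r_b in product(REWARDS, REWARDS):
    lik = 1.0
    for cost_a, cost_b, choice in observations:
        pa = p_choose_a(r_a, r_b, cost_a, cost_b)
        lik *= pa if choice == "A" else 1 - pa
    posterior[(r_a, r_b)] = lik       # uniform prior over the reward grid
z = sum(posterior.values())

e_ra = sum(r_a * p for (r_a, _), p in posterior.items()) / z
e_rb = sum(r_b * p for (_, r_b), p in posterior.items()) / z
print(round(e_ra, 2), round(e_rb, 2))
```

The expected reward inferred for the costlier-attained goal A comes out higher than for B, matching the infants' expectation pattern reported in the abstract.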

    A Compositional Object-Based Approach to Learning Physical Dynamics

    We present the Neural Physics Engine (NPE), an object-based neural network architecture for learning predictive models of intuitive physics. We propose a factorization of a physical scene into composable object-based representations and also the NPE architecture whose compositional structure factorizes object dynamics into pairwise interactions. Our approach draws on the strengths of both symbolic and neural approaches: like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions, but as a neural network it can also be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that our model's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize to different numbers of objects, and infer latent properties of objects such as mass. National Science Foundation (U.S.) (Award CCF-1231216); United States. Office of Naval Research (Grant N00014-16-1-2007)
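The pairwise factorization at the heart of the NPE can be shown structurally. In this sketch, hand-written placeholder functions stand in for the trained pair encoder and decoder (an assumption; the real NPE learns these as neural networks): the effect on a focus object is a sum of pairwise terms, which is why the same model applies unchanged to any number of objects.

```python
def pairwise_effect(focus, neighbor):
    """Placeholder for the learned pair encoder: a spring-like pull."""
    return [0.1 * (n - f) for f, n in zip(focus["pos"], neighbor["pos"])]

def predict_velocity(focus, others):
    """Placeholder decoder: new velocity = old velocity + summed pair effects."""
    total = [0.0, 0.0]
    for other in others:
        effect = pairwise_effect(focus, other)
        total = [t + e for t, e in zip(total, effect)]
    return [v + t for v, t in zip(focus["vel"], total)]

objs = [
    {"pos": [0.0, 0.0], "vel": [1.0, 0.0]},
    {"pos": [2.0, 0.0], "vel": [0.0, 0.0]},
    {"pos": [0.0, 2.0], "vel": [0.0, 0.0]},
]
# The sum over neighbors is what lets one model handle 2, 3, or N objects.
v = predict_velocity(objs[0], objs[1:])
print(v)
```

Generalization to different object counts falls out of the architecture rather than the training data, which is the compositional advantage the abstract claims over less structured baselines.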

    The mentalistic basis of core social cognition: experiments in preverbal infants and a computational model

    Evaluating individuals based on their pro- and anti-social behaviors is fundamental to successful human interaction. Recent research suggests that even preverbal infants engage in social evaluation; however, it remains an open question whether infants' judgments are driven uniquely by an analysis of the mental states that motivate others' helpful and unhelpful actions, or whether non-mentalistic inferences are at play. Here we present evidence from 10-month-olds, motivated and supported by a Bayesian computational model, for mentalistic social evaluation in the first year of life. A video abstract of this article can be viewed at http://youtu.be/rD_Ry5oqCY

    Help or hinder: Bayesian models of social goal inference

    Everyday social interactions are heavily influenced by our snap judgments about others' goals. Even young infants can infer the goals of intentional agents from observing how they interact with objects and other agents in their environment: e.g., that one agent is 'helping' or 'hindering' another's attempt to get up a hill or open a box. We propose a model for how people can infer these social goals from actions, based on inverse planning in multiagent Markov decision problems (MDPs). The model infers the goal most likely to be driving an agent's behavior by assuming the agent acts approximately rationally given environmental constraints and its model of other agents present. We also present behavioral evidence in support of this model over a simpler, perceptual cue-based alternative. United States. Army Research Office (ARO MURI Grant W911NF-08-1-0242); United States. Air Force Office of Scientific Research (MURI Grant FA9550-07-1-0075); National Science Foundation (U.S.) (Graduate Research Fellowship); James S. McDonnell Foundation (Collaborative Interdisciplinary Grant on Causal Reasoning)
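The inverse-planning idea generalizes a single-agent case that is easy to sketch. This toy (a 1-D world with two candidate goals and a softmax-rational mover; all names and numbers are assumptions, and it omits the paper's multiagent structure) computes P(goal | actions) by Bayes' rule with an approximately rational action likelihood.

```python
import math

BETA = 1.5                      # assumed rationality parameter
GOALS = {"left": 0, "right": 10}

def move_prob(pos, move, goal_pos):
    """Softmax over moves, scored by how much each shortens the path to goal."""
    def gain(m):
        return abs(pos - goal_pos) - abs(pos + m - goal_pos)
    return math.exp(BETA * gain(move)) / sum(
        math.exp(BETA * gain(m)) for m in (-1, +1))

def goal_posterior(start, moves):
    """Bayes' rule: P(goal | moves) from a uniform prior over goals."""
    post = {g: 1.0 / len(GOALS) for g in GOALS}
    pos = start
    for m in moves:
        for g in post:
            post[g] *= move_prob(pos, m, GOALS[g])
        pos += m
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

# Three observed steps to the right from the middle of the corridor.
post = goal_posterior(start=5, moves=[+1, +1, +1])
print(post)
```

After only three consistent moves, nearly all posterior mass sits on the "right" goal; the paper's model applies the same logic to social goals like helping and hindering, where the "goal" is defined over another agent's outcomes.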

    Developmental and computational perspectives on infant social cognition

    Adults effortlessly and automatically infer complex patterns of goals, beliefs, and other mental states as the causes of others' actions. Yet before the last decade little was known about the developmental origins of these abilities in early infancy. Our understanding of infant social cognition has now improved dramatically: even preverbal infants appear to perceive goals, preferences (Kushnir, Xu, & Wellman, in press), and even beliefs from sparse observations of intentional agents' behavior. Furthermore, they use these inferences to predict others' behavior in novel contexts and to make social evaluations (Hamlin, Wynn, & Bloom, 2007). Keywords: Social cognition; Cognitive Development; Computational Modeling; Theory of Mind

    On the nature and origin of intuitive theories : learning, physics and psychology

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 221-236). This thesis develops formal computational models of intuitive theories, in particular intuitive physics and intuitive psychology, which form the basis of commonsense reasoning. The overarching formal framework is that of hierarchical Bayesian models, which see the mind as having domain-specific hypotheses about how the world works. The work first extends models of intuitive psychology to include higher-level social utilities, arguing against a pure 'classifier' view. Second, the work extends models of intuitive physics by introducing an ontological hierarchy of physics concepts and examining how well people can reason about novel dynamic displays. I then examine the question of learning intuitive theories in general, arguing that an algorithmic approach based on stochastic search can address several puzzles of learning, including the 'chicken and egg' problem of concept learning. Finally, I argue the need for a joint theory-space for reasoning about intuitive physics and intuitive psychology, and provide such a simplified space in the form of a generative model for a novel domain called Lineland. Taken together, these results forge links between formal modeling, intuitive theories, and cognitive development. by Tomer David Ullman. Ph.D.

    Theory learning as stochastic search in the language of thought

    We present an algorithmic model for the development of children's intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. We contrast our approach with connectionist and other emergentist approaches to modeling cognitive development. While their subsymbolic representations provide a smooth error surface that supports efficient gradient-based learning, our symbolic representations are better suited to capturing children's intuitive theories but give rise to a harder learning problem, which can only be solved by exploratory search. Our algorithm attempts to discover the theory that best explains a set of observed data by performing stochastic search at two levels of abstraction: an outer loop in the space of theories and an inner loop in the space of explanations or models generated by each theory given a particular dataset. We show that this stochastic search is capable of learning appropriate theories in several everyday domains and discuss its dynamics in the context of empirical studies of children's learning. James S. McDonnell Foundation (Causal Learning Collaborative); United States. Office of Naval Research (N00014-09-0124); United States. Army Research Office (W911NF-08-1-0242); National Science Foundation (U.S.) (Graduate Research Fellowship)
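The outer loop of the two-level search can be illustrated with a small Metropolis-Hastings sketch. Everything here is invented for illustration (the law names, scores, and weights are not from the paper, and the inner loop over models is collapsed into a closed-form score): theories are sets of candidate laws, the score trades a simplicity prior against how much data the laws explain, and proposals toggle one law at a time.

```python
import math
import random

random.seed(0)  # fixed seed so the stochastic search is reproducible

CANDIDATE_LAWS = ["law_a", "law_b", "law_c", "law_d"]
DATA_EXPLAINED = {"law_a": 5, "law_b": 3}   # data points each law explains
N_DATA = 10

def log_score(theory):
    """Simplicity prior (fewer laws) plus a fit term (invented weights)."""
    explained = sum(DATA_EXPLAINED.get(law, 0) for law in theory)
    log_prior = -2.0 * len(theory)
    log_lik = 1.5 * explained - 0.5 * (N_DATA - explained)
    return log_prior + log_lik

def mh_search(steps=2000):
    """Metropolis-Hastings over theories, proposing to toggle one law."""
    theory = set()
    best, best_score = set(theory), log_score(theory)
    for _ in range(steps):
        proposal = theory ^ {random.choice(CANDIDATE_LAWS)}
        if math.log(random.random()) < log_score(proposal) - log_score(theory):
            theory = proposal
        if log_score(theory) > best_score:
            best, best_score = set(theory), log_score(theory)
    return best

best = mh_search()
print(sorted(best))
```

The search settles on the two laws that actually explain data ({law_a, law_b}), since adding the inert laws costs prior probability without improving fit; the paper's real algorithm searches a far larger grammar-generated space with the same accept/reject logic.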

    Wins above replacement: Responsibility attributions as counterfactual replacements

    In order to be held responsible, a person's action has to have made some sort of difference to the outcome. In this paper, we propose a counterfactual replacement model according to which people attribute responsibility by comparing their prior expectation about how an agent was going to act in a given situation with their posterior expectation after having observed the agent's action. The model predicts blame if the posterior expectation is worse than the prior expectation and credit if it is better. In a novel experiment, we manipulate people's prior expectations by changing the framing of a structurally isomorphic task. As predicted by our counterfactual replacement model, people's prior expectations significantly influenced their responsibility attributions. We also show how our model can capture Johnson and Rips's (2013) findings that an agent is attributed less responsibility for bringing about a positive outcome when their action was suboptimal rather than optimal.
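The prior-versus-posterior comparison can be written down directly. In this hedged toy (the action set, outcome values, and prior probabilities are all assumptions, not the paper's stimuli), attribution is the difference between the outcome the agent actually produced and the outcome expected from a replacement agent drawn from prior expectations: positive differences read as credit, negative as blame.

```python
def attribution(prior_over_actions, outcome_of, observed_action):
    """Credit/blame = observed outcome minus prior-expected outcome."""
    prior_expectation = sum(p * outcome_of[a]
                            for a, p in prior_over_actions.items())
    return outcome_of[observed_action] - prior_expectation

# Invented outcome scale for three possible actions.
outcome_of = {"optimal": 1.0, "suboptimal": 0.6, "harmful": -1.0}

# Framing A: most agents were expected to act optimally.
high_prior = {"optimal": 0.8, "suboptimal": 0.15, "harmful": 0.05}
# Framing B: optimal action was not expected.
low_prior = {"optimal": 0.2, "suboptimal": 0.4, "harmful": 0.4}

credit_a = attribution(high_prior, outcome_of, "optimal")
credit_b = attribution(low_prior, outcome_of, "optimal")
credit_sub = attribution(low_prior, outcome_of, "suboptimal")
print(credit_a, credit_b, credit_sub)
```

The same optimal action earns more credit under the low-expectation framing than the high-expectation one, and a suboptimal action earns less credit than an optimal one under the same prior, matching the framing effect and the Johnson and Rips (2013) pattern described above.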