8 research outputs found

    Changing minds: Children's inferences about third party belief revision

    Get PDF
    By the age of 5, children explicitly represent that agents can have both true and false beliefs based on epistemic access to information (e.g., Wellman, Cross, & Watson, 2001). Children also begin to understand that agents can view identical evidence and draw different inferences from it (e.g., Carpendale & Chandler, 1996). However, much less is known about when, and under what conditions, children expect other agents to change their minds. Here, inspired by formal ideal observer models of learning, we investigate children's expectations of the dynamics that underlie third parties' belief revision. We introduce an agent who has prior beliefs about the location of a population of toys and then observes evidence that, from an ideal observer perspective, either does, or does not justify revising those beliefs. We show that children's inferences on behalf of third parties are consistent with the ideal observer perspective, but not with a number of alternative possibilities, including that children expect other agents to be influenced only by their prior beliefs, only by the sampling process, or only by the observed data. Rather, children integrate all three factors in determining how and when agents will update their beliefs from evidence.
    National Science Foundation (U.S.). Division of Computing and Communication Foundations (1231216)
    National Science Foundation (U.S.). Division of Research on Learning in Formal and Informal Settings (0744213)
    National Science Foundation (U.S.) (STC Center for Brains, Minds and Machines Award CCF-1231216)
    National Science Foundation (U.S.) (0744213)
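The ideal-observer dynamics described above can be illustrated with a minimal Bayesian update, in which the agent's revised belief depends jointly on the prior, the assumed sampling process, and the observed data. The two-box setup, match probability, and numbers below are illustrative assumptions, not the study's actual materials:

```python
# Minimal sketch (hypothetical setup, not the study's materials): an agent
# believes a population of toys is mostly in box A (h_A) or mostly in box B.
# An ideal observer updates that belief from sampled draws via Bayes' rule.

def posterior(prior_A, draws_from_A, draws_from_B, p_match=0.8):
    """Posterior belief in h_A after observing the draws.

    p_match encodes the assumed sampling process: the probability that a
    randomly sampled toy comes from the box holding most of the population.
    """
    like_A = (p_match ** draws_from_A) * ((1 - p_match) ** draws_from_B)
    like_B = ((1 - p_match) ** draws_from_A) * (p_match ** draws_from_B)
    joint_A = prior_A * like_A
    joint_B = (1 - prior_A) * like_B
    return joint_A / (joint_A + joint_B)

# Strong prior for box A, but data favouring box B: an ideal observer
# revises the belief sharply downward.
print(round(posterior(0.9, draws_from_A=1, draws_from_B=5), 3))  # → 0.034
```

Varying the prior, the data, or `p_match` separates the three factors the experiments dissociate: under a weaker sampling assumption (e.g. `p_match=0.55`), the same data no longer justify revision and the belief in box A stays above 0.5.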

    Changing minds: Children’s inferences about third party belief revision

    Full text link
    Young children use others’ prior beliefs and data to predict when third parties will retain their beliefs and when they will change their minds.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/142970/1/desc12553_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/142970/2/desc12553.pd

    You took the words right out of my mouth: Dual-fMRI reveals intra- and inter-personal neural processes supporting verbal interaction.

    Get PDF
    Verbal communication relies heavily upon mutual understanding, or common ground. Inferring the intentional states of our interaction partners is crucial in achieving this, and social neuroscience has begun elucidating the intra- and inter-personal neural processes supporting such inferences. Typically, however, neuroscientific paradigms lack the reciprocal to-and-fro characteristic of social communication, offering little insight into the way these processes operate online during real-world interaction. In the present study, we overcame this by developing a “hyperscanning” paradigm in which pairs of interactants could communicate verbally with one another in a joint-action task whilst both underwent functional magnetic resonance imaging simultaneously. Successful performance on this task required both interlocutors to predict their partner's upcoming utterance in order to converge on the same word as each other over recursive exchanges, based only on one another's prior verbal expressions. By applying various levels of analysis to behavioural and neuroimaging data acquired from 20 dyads, three principal findings emerged: First, interlocutors converged frequently within the same semantic space, suggesting that mutual understanding had been established. Second, assessing the brain responses of each interlocutor as they planned their upcoming utterances on the basis of their co-player's previous word revealed engagement of the temporo-parietal junction (TPJ), precuneus and dorso-lateral pre-frontal cortex; moreover, responses in the precuneus were modulated positively by the degree of semantic convergence achieved on each round, and effective connectivity among these regions indicated the crucial role of the right TPJ in this process, consistent with the Nexus model. Third, neural signals within certain nodes of this network became aligned between interacting interlocutors. We suggest this reflects an interpersonal neural process through which interactants infer and align to one another's intentional states whilst they establish common ground.

    Attributed Intelligence

    Get PDF
    Human beings quickly and confidently attribute more or less intelligence to one another. What is meant by intelligence when they do so? And what are the surface features of human behaviour that determine their judgements? Because the judges of success or failure in the quest for 'artificial intelligence' will be human, the answers to such questions are an essential part of cognitive science. This thesis studies such questions in the context of a maze world, complex enough to require non-trivial answers, and simple enough to analyse the answers in terms of decision-making algorithms. According to Theory-theory, humans comprehend the actions of themselves and of others in terms of beliefs, desires and goals, following rational principles of utility. If so, attributing intelligence may result from an evaluation of the agent's efficiency -- how closely its behaviour approximates the expected rational course of action. Alternatively, attributed intelligence could result from observing outcomes: billionaires and presidents are, by definition, intelligent. I applied Bayesian models of planning under uncertainty to data from five behavioural experiments. The results show that while most humans attribute intelligence to efficiency, a minority attributes intelligence to outcome. Understanding these differences in attributed intelligence comes from studying how people plan. Most participants can optimally plan 1-5 decisions in advance. Individually they vary in sensitivity to decision value and in planning depth. Comparing planning performance and attributed intelligence shows that observers' ability to attribute intelligence depends on their ability to plan. People attribute intelligence to efficiency in proportion to their planning ability. The less skilled planners are more likely to attribute intelligence to outcome. Moreover, model-based metrics of planning performance correlate with independent measures of cognitive performance, such as the Cognitive Reflection Test and pupil size. Eyetracking analysis of spatial planning in real-time shows that participants who score highly on independent measures of cognitive ability also plan further ahead. Taken together, these results converge on a theory of attributed intelligence as an evaluation of how efficiently an agent plans, an evaluation that depends on the observer's cognitive abilities to carry it out.
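The planning-depth differences described above can be illustrated with a toy depth-limited planner. The one-dimensional grid, step cost, and reward below are hypothetical, not the thesis's maze task; the point is only that an agent whose lookahead horizon is too short never "sees" a distant reward:

```python
# Illustrative sketch (assumed setup, not the thesis's model): depth-limited
# planning on a 1-D grid with a reward at position `goal`. Each move costs
# step_cost; an agent evaluates actions only `depth` decisions ahead.

def plan_value(state, depth, goal, step_cost=-1.0, reward=10.0):
    """Best achievable return from `state` with `depth` decisions left."""
    if state == goal:
        return reward
    if depth == 0:
        return 0.0          # horizon reached: no further value visible
    best = float("-inf")
    for move in (-1, +1):   # step left or right (clipped at 0)
        nxt = max(0, state + move)
        best = max(best, step_cost + plan_value(nxt, depth - 1, goal))
    return best

# A planner with depth 5 can reach a goal 5 steps away; depth 2 cannot.
print(plan_value(0, 5, goal=5))  # 10 reward - 5 step costs = 5.0
print(plan_value(0, 2, goal=5))  # goal beyond horizon: pays 2 step costs, -2.0
```

The exhaustive recursion stands in for the value computation that a Bayesian planning model performs; varying `depth` mimics the individual differences in planning depth reported above.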

    Learning what is where from social observations

    No full text
    Observing the actions of other people allows us to learn not only about their mental states, but also about hidden aspects of a shared environmental situation – things we cannot see, but they can, and that influence their behavior in predictable ways. This paper presents a computational model of how people can learn about the world through these social inferences, supported by the same Theory of Mind (ToM) that enables representing and reasoning about an agent’s mental states such as beliefs, desires, and intentions. The model is an extension of the Bayesian Theory of Mind (BToM) model of Baker et al. (2011), which treats observed intentional actions as the output of an approximately rational planning process and then reasons backwards to infer the most likely inputs to the agent’s planner – in this case, the locations and states of utility sources (potential goal objects) in the environment. We conducted a large-scale experiment comparing the world-state inferences of the BToM model and those of human subjects, given observations of agents moving along various trajectories in simple spatial environments. The model quantitatively predicts subjects’ graded beliefs about possible world states with high accuracy – and substantially better than a non-mentalistic feature-based model with many more free parameters. These results show the power of social learning for acquiring surprisingly fine-grained knowledge about the world.
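The core inverse-planning move described above (treat observed actions as the output of an approximately rational planner, then reason backwards to the world states that would make them likely) can be sketched in miniature. The one-dimensional world, softmax action likelihood, and parameter values below are illustrative assumptions, not the BToM model of Baker et al. itself:

```python
import math

# Illustrative sketch (assumed, not the full BToM model): two candidate
# worlds place a hidden goal object at x=+5 (east) or x=-5 (west).
# An observer watches an agent's moves and infers which world explains them.

def action_likelihood(move, agent_x, goal_x, beta=2.0):
    """Softmax probability of `move` for an agent that values reducing
    its distance to the goal (beta = degree of rationality)."""
    moves = (-1, +1)
    utils = [-abs((agent_x + m) - goal_x) for m in moves]
    exps = [math.exp(beta * u) for u in utils]
    return exps[moves.index(move)] / sum(exps)

def world_posterior(trajectory, worlds, prior):
    """P(world | observed moves), assuming a softmax-rational planner."""
    post = list(prior)
    x = 0
    for move in trajectory:
        post = [p * action_likelihood(move, x, g) for p, g in zip(post, worlds)]
        x += move
    z = sum(post)
    return [p / z for p in post]

# Three eastward moves make the "goal in the east" world far more likely.
print(world_posterior([+1, +1, +1], worlds=[+5, -5], prior=[0.5, 0.5]))
```

Each eastward step multiplies the odds in favour of the eastern-goal world, so even a short trajectory yields a confident, graded posterior over hidden world states, which is the kind of inference the experiment compares against human judgements.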