    Different Physical Intuitions Exist Between Tasks, Not Domains

    Abstract: Does human behavior exploit deep and accurate knowledge about how the world works, or does it rely on shallow and often inaccurate heuristics? This fundamental question is rooted in a classic dichotomy in psychology: human intuitions about even simple scenarios can be poor, yet human behaviors can exceed the capabilities of even the most advanced machines. One domain where such a dichotomy has classically been demonstrated is intuitive physics. Here we demonstrate that this dichotomy is rooted in how physical knowledge is measured: extrapolation of ballistic motion is idiosyncratic and erroneous when people draw the trajectories, but consistent with accurate physical inference under uncertainty when people use the same trajectories to catch a ball or release it to hit a target. Our results suggest that the contrast between rich and calibrated versus poor and inaccurate patterns of physical reasoning arises from the use of different systems of knowledge across tasks, rather than being driven solely by a universal system of knowledge that is inconsistent across physical principles.

    A phone in a basket looks like a knife in a cup: Role-filler independence in visual processing

    Abstract: When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations of containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations, e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup. Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not, even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and also controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.