14 research outputs found

    LLMs don't know anything: reply to Yildirim and Paul

    In their recent Opinion in TiCS, Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of... entities and processes in the real world’. While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two grounds. First, it casts LLMs as agents rather than as models. Second, it suggests that causal understanding could be acquired from the capacity for mere prediction.

    Chimpanzees prepare for alternative possible outcomes

    When facing uncertainty, humans often build mental models of alternative outcomes. Considering diverging scenarios allows agents to respond adaptively to different actual worlds by developing contingency plans (covering one's bases). In a pre-registered experiment, we tested whether chimpanzees (Pan troglodytes) prepare for two mutually exclusive possibilities. Chimpanzees could access two pieces of food, but only if they successfully protected them from a human competitor. In one condition, chimpanzees could be certain about which piece of food the human experimenter would attempt to steal. In a second condition, either of the two food rewards was a potential target of the competitor. We found that chimpanzees were significantly more likely to protect both pieces of food in the second condition relative to the first, raising the possibility that chimpanzees represent and prepare effectively for different possible worlds.