
    Representational principles of function generalization

    Generalization is at the core of human intelligence. When the relationship between continuous-valued data is generalized, generalization amounts to function learning. Function learning is important for understanding human cognition, as many everyday tasks and problems involve learning how quantities relate and subsequently using this knowledge to predict novel relationships. While function learning has been studied in psychology since the early 1960s, this thesis argues that questions regarding representational characteristics have not been adequately addressed in previous research. Previous accounts of function learning have often proposed one-size-fits-all models that excel at capturing how participants learn and extrapolate. In these models, learning amounts to learning the details of the presented patterns. Instead, this thesis presents computational and empirical results arguing that participants often learn abstract features of the data, such as the type of function or the variability of its features, rather than the details of the function. While previous work has emphasized domain-general inductive biases and learning rates, I propose that these biases are more flexible and adaptive than previously suggested. Given contextual information that sequential tasks share the same structure, participants can transfer knowledge from previous training to inform their generalizations. Furthermore, this thesis argues that function representations can be composed to form more complex hypotheses, and that humans are perceptive to, and sometimes generalize according to, these compositional features. Previous accounts of function learning had to postulate a fixed set of candidate functions that forms a participant’s hypothesis space, which ultimately struggled to account for the variety of extrapolations people can produce. In contrast, this thesis’s results suggest that a small set of broadly applicable functions, in combination with compositional principles, can produce flexible and productive generalization.

    Probabilistic biases meet the Bayesian brain

    Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
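
The core mechanism in this abstract, judging by drawing a few samples rather than computing exact probabilities, can be sketched in a few lines. The code below is our own minimal illustration (the function name and parameters are hypothetical, not the paper's models): with only a handful of samples, probability judgments become coarse and highly variable even though the estimator is unbiased on average.

```python
import random

def sample_based_estimate(true_p, n_samples, rng):
    """Estimate an event's probability from a few mental 'samples'.

    With small n_samples the estimate is coarse and noisy, one way a
    sampling approximation departs from exact probabilistic calculation.
    """
    hits = sum(rng.random() < true_p for _ in range(n_samples))
    return hits / n_samples

rng = random.Random(0)
# An exact reasoner would always answer 0.1; a five-sample reasoner
# can only answer in multiples of 0.2, and varies from query to query.
estimates = [sample_based_estimate(0.1, 5, rng) for _ in range(1000)]
```

Averaged over many queries the estimates center on the true probability, but any single judgment can be wildly off, which is the kind of systematic-looking variability sampling accounts exploit.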

    Explaining the flaws in human random generation as local sampling with momentum

    In many tasks, human behavior is far noisier than is optimal. Yet when asked to behave randomly, people are typically too predictable. We argue that these apparently contrasting observations have the same origin: the operation of a general-purpose local sampling algorithm for probabilistic inference. This account makes distinctive predictions regarding random sequence generation that are not made by previous accounts, which hold that randomness is produced by inhibiting habitual behavior and striving for unpredictability. We verify these predictions in two experiments: people show the same deviations from randomness when randomly generating from non-uniform or recently learned distributions. In addition, our data show a novel signature behavior: people’s sequences have too few changes of trajectory, which argues against the specific local sampling algorithms that have been proposed in past work with other tasks. Using computational modeling, we show that local sampling in which direction is maintained across trials best explains our data, suggesting it may be used in other tasks too. While local sampling has previously explained why people are unpredictable in standard cognitive tasks, here it also explains why human random sequences are not unpredictable enough.
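
The "local sampling with momentum" idea can be made concrete with a short sketch. This is our own construction under simple assumptions (integer states, a fixed persistence probability `p_keep_dir`), not the paper's model code: a Metropolis walk whose proposal reuses the previous step's direction, so the resulting sequence shows few changes of trajectory.

```python
import math
import random

def momentum_metropolis(log_prob, start, n_steps, step=1, p_keep_dir=0.8, seed=0):
    """Metropolis sampler over integer states whose proposal tends to keep
    moving in the direction of the previous step.

    With p_keep_dir > 0.5, moves come in runs, so the chain has few
    changes of trajectory: the 'momentum' signature described above.
    """
    rng = random.Random(seed)
    x, direction = start, rng.choice([-1, 1])
    chain = [x]
    for _ in range(n_steps):
        if rng.random() > p_keep_dir:          # occasionally reverse direction
            direction = -direction
        proposal = x + direction * step
        dlp = log_prob(proposal) - log_prob(x)
        if dlp >= 0 or rng.random() < math.exp(dlp):  # Metropolis accept/reject
            x = proposal
        chain.append(x)
    return chain

# Uniform target over the digits 0..9, as in a random digit generation task.
chain = momentum_metropolis(lambda x: 0.0 if 0 <= x <= 9 else float("-inf"), 5, 200)
```

Because the target is uniform, every in-range proposal is accepted; the predictability of the output comes entirely from the directional persistence of the proposal, not from the target distribution.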

    Large Language Models are biased to overestimate profoundness

    Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess reasoning abilities similar to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statement and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLMs’ ratings closer to humans’. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), which appears to increase the bias to overestimate the profoundness of statements.

    Noise in cognition: bug or feature?

    Noise in behavior is often viewed as a nuisance: while the mind aims to take the best possible action, it is let down by unreliability in the sensory and response systems. How researchers study cognition reflects this viewpoint – averaging over trials and participants to discover the deterministic relationships between experimental manipulations and their behavioral consequences, with noise represented as additive, often Gaussian, and independent. Yet a careful look at behavioral noise reveals rich structure that defies easy explanation. First, both perceptual and preferential judgments show that sensory and response noise may play only minor roles, with most noise arising in the cognitive computations. Second, the functional form of the noise is both non-Gaussian and non-independent, with the distribution of noise better characterized as heavy-tailed and as having substantial long-range autocorrelations. It is possible that this structure results from brains that are, for some reason, bedeviled by a fundamental design flaw, albeit one with intriguingly distinctive characteristics. Alternatively, noise might not be a bug but a feature: indeed, we suggest that noise is fundamental to how cognition works. Specifically, we propose that the brain approximates probabilistic inference with a local sampling algorithm, one that uses randomness to drive its exploration of alternative hypotheses. Reframing cognition in this way explains the rich structure of noise and leads to a surprising conclusion: that noise is not a symptom of cognitive malfunction but plays a central role in underpinning human intelligence.
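
As a minimal sketch of the proposal (our own illustration, not the authors' model): a random-walk Metropolis sampler explores hypotheses by locally perturbing the current one, so the "noise" in its outputs is autocorrelated rather than independent and Gaussian.

```python
import math
import random

def local_sampler(log_prob, start, n, step=0.5, seed=1):
    """Random-walk Metropolis: each candidate hypothesis is a small random
    perturbation of the current one. Because moves are local, successive
    samples are correlated -- structured noise, not independent error."""
    rng = random.Random(seed)
    x, out = start, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        dlp = log_prob(prop) - log_prob(x)
        if dlp >= 0 or rng.random() < math.exp(dlp):
            x = prop
        out.append(x)
    return out

# Target: a standard normal posterior over a single hypothesis dimension.
xs = local_sampler(lambda x: -0.5 * x * x, 0.0, 5000)
```

Averaged over many samples the chain recovers the target distribution, yet any short stretch of outputs drifts locally: the same signature of lag-one autocorrelation that distinguishes sampling noise from additive Gaussian noise.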

    Understanding the structure of cognitive noise

    Human cognition is fundamentally noisy. While routinely regarded as a nuisance in experimental investigation, the few studies investigating properties of cognitive noise have found surprising structure. A first line of research has shown that inter-response-time distributions are heavy-tailed. That is, response times between subsequent trials usually change only a small amount, but with occasional large changes. A second, separate line of research has found that participants’ estimates and response times both exhibit long-range autocorrelations (i.e., 1/f noise). Thus, each judgment and response time not only depends on its immediate predecessor but also on many previous responses. These two lines of research use different tasks and have distinct theoretical explanations: models that account for heavy-tailed response times do not predict 1/f autocorrelations and vice versa. Here, we find that 1/f noise and heavy-tailed response distributions co-occur in both types of tasks. We also show that a statistical sampling algorithm, developed to deal with patchy environments, generates both heavy-tailed distributions and 1/f noise, suggesting that cognitive noise may be a functional adaptation to dealing with a complex world.
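
The abstract does not specify the algorithm, so as a purely illustrative stand-in we sketch a Lévy-flight-style random walk, a classic model of search in patchy environments, to show how a sampler can produce a heavy-tailed step-size distribution (many small moves, occasional enormous jumps). All names and parameters below are our own.

```python
import random

def levy_walk_steps(n, alpha=1.5, seed=2):
    """One-dimensional random walk with Pareto(alpha) step lengths.

    Heavy-tailed steps: mostly small moves punctuated by rare huge jumps,
    qualitatively matching the heavy-tailed changes described above.
    """
    rng = random.Random(seed)
    position, steps = 0.0, []
    for _ in range(n):
        # Inverse-CDF sample from a Pareto(alpha) with minimum step 1.
        length = (1.0 - rng.random()) ** (-1.0 / alpha)
        position += rng.choice([-1.0, 1.0]) * length
        steps.append(length)
    return steps

steps = levy_walk_steps(10000)
```

Note that this toy walk reproduces only the heavy tails; capturing the 1/f long-range autocorrelations as well requires the kind of structured sampling algorithm the paper itself investigates.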