    Effect of Geometric Complexity on Intuitive Model Selection

    Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are to be preferred to more complex ones. This idea can be made precise in the context of statistical inference, where the same quantitative notion of the complexity of a statistical model emerges naturally from different approaches based on Bayesian model selection and information theory. The broad applicability of this mathematical formulation suggests a normative model of decision-making under uncertainty: complex explanations should be penalized according to this common measure of complexity. However, little is known about whether and how humans intuitively quantify the relative complexity of competing interpretations of noisy data. Here we measure the sensitivity of naive human subjects to statistical model complexity. Our data show that human subjects bias their decisions in favor of simple explanations based not only on the dimensionality of the alternatives (number of model parameters) but also on finer-grained aspects of their geometry. In particular, as predicted by the theory, models intuitively judged as more complex are not only those with more parameters but also those with larger volume and prominent curvature or boundaries. Our results imply that principled notions of statistical model complexity have direct quantitative relevance to human decision-making.
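    For reference, one standard way this penalty is made precise in the model-selection literature (the Fisher-information approximation to the Bayesian model evidence; this is background, not a formula quoted from the paper) is

    \[
    -\log P(D \mid \mathcal{M}) \;\approx\; -\log P(D \mid \hat{\theta})
    \;+\; \frac{k}{2}\log\frac{N}{2\pi}
    \;+\; \log \int \! d\theta \,\sqrt{\det I(\theta)},
    \]

    where \(\hat{\theta}\) is the best-fitting parameter value, \(k\) the number of parameters (dimensionality), \(N\) the number of observations, and \(I(\theta)\) the Fisher information. The integral measures the model's volume in parameter space, and higher-order correction terms bring in the curvature and boundary effects mentioned above.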

    Normative Evidence Accumulation in Unpredictable Environments

    In our dynamic world, decisions about noisy stimuli can require temporal accumulation of evidence to identify steady signals, differentiation to detect unpredictable changes in those signals, or both. Normative models can account for learning in these environments but have not yet been applied to faster decision processes. We present a novel, normative formulation of adaptive learning models that forms decisions by acting as a leaky accumulator with non-absorbing bounds. These dynamics, derived for both discrete and continuous cases, depend on the expected rate of change of the statistics of the evidence and balance signal identification and change detection. We found that, for two different tasks, human subjects learned these expectations, albeit imperfectly, and then used them to make decisions in accordance with the normative model. The results represent a unified, empirically supported account of decision-making in unpredictable environments that provides new insights into the expectation-driven dynamics of the underlying neural signals.
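    In the discrete case, a leaky accumulator with non-absorbing bounds admits a compact implementation. The sketch below is my reconstruction of that idea, with names of my own choosing: the belief L (log posterior odds) is passed through a hazard-dependent nonlinearity that leaks toward zero and saturates at a bound set by the expected hazard rate H, then incremented by each sample's log-likelihood ratio.

```python
import numpy as np

def leak(L, H):
    """Hazard-dependent distortion of the prior belief L.
    As H -> 0 it approaches the identity (perfect accumulation);
    at H = 0.5 it pulls L to zero (memoryless); in between it
    saturates, acting as a non-absorbing bound."""
    k = (1.0 - H) / H
    return L + np.log(k + np.exp(-L)) - np.log(k + np.exp(L))

def accumulate(llrs, H):
    """Accumulate a sequence of log-likelihood ratios under an
    expected hazard rate H; returns the belief after each sample."""
    L, trace = 0.0, []
    for x in llrs:
        L = leak(L, H) + x
        trace.append(L)
    return np.array(trace)
```

    As H approaches 0 this tends toward perfect, unbounded accumulation; at H = 0.5 it reduces to deciding on the most recent sample alone. That trade-off is the balance between signal identification and change detection described in the abstract.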

    Bayesian Online Learning of the Hazard Rate in Change-Point Problems

    Change-point models are generative models of time-varying data in which the underlying generative parameters undergo discontinuous changes at different points in time, known as change points. Change points often represent important events in the underlying processes, like a change in brain state reflected in EEG data or a change in the value of a company reflected in its stock price. However, change points can be difficult to identify in noisy data streams. Previous attempts to identify change points online using Bayesian inference relied on specifying in advance the rate at which they occur, called the hazard rate (h). This approach leads to predictions that can depend strongly on the choice of h and is unable to deal optimally with systems in which h is not constant in time. In this letter, we overcome these limitations by developing a hierarchical extension to earlier models. This approach allows h itself to be inferred from the data, which in turn helps to identify when change points occur. We show that our approach can effectively identify change points in both toy and real data sets with complex hazard rates, and how it can be used as an ideal-observer model for human and animal behavior when faced with rapidly changing inputs.
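    For orientation, the sketch below implements the kind of fixed-hazard-rate baseline the letter improves on: a Bayesian run-length filter over Gaussian data with known noise, with h specified in advance rather than inferred. The hierarchical extension that is the letter's contribution is not reproduced here, and all function and variable names are mine.

```python
import numpy as np
from scipy import stats

def changepoint_filter(data, h, mu0=0.0, var0=10.0, noise_var=1.0):
    """Online Bayesian change-point detection with a fixed hazard
    rate h. Column t of R holds the posterior over run lengths
    (time since the last change point) after observing data[:t]."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    # conjugate Normal beliefs about the mean, one per run length
    mus, vars_ = np.array([mu0]), np.array([var0])
    for t, x in enumerate(data):
        # predictive probability of x under each run length
        pred = stats.norm.pdf(x, mus, np.sqrt(vars_ + noise_var))
        growth = R[: t + 1, t] * pred * (1 - h)   # run continues
        cp = (R[: t + 1, t] * pred * h).sum()     # run resets to 0
        R[1 : t + 2, t + 1] = growth
        R[0, t + 1] = cp
        R[:, t + 1] /= R[:, t + 1].sum()
        # Bayesian update of each run's belief about the mean
        post_var = 1.0 / (1.0 / vars_ + 1.0 / noise_var)
        post_mu = post_var * (mus / vars_ + x / noise_var)
        mus = np.concatenate(([mu0], post_mu))
        vars_ = np.concatenate(([var0], post_var))
    return R
```

    Because h enters every update, a poor choice propagates into every run-length posterior, which is the sensitivity to h that motivates inferring it hierarchically.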

    How Occam’s Razor Guides Human Inference

    Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are preferred over more complex ones. This idea is central to multiple formal theories of statistical model selection and is posited to play a role in human perception and decision-making, but a general, quantitative account of the specific nature and impact of complexity on human decision-making is still missing. Here we use preregistered experiments to show that, when faced with uncertain evidence, human subjects bias their decisions in favor of simpler explanations in a way that can be quantified precisely using the framework of Bayesian model selection. Specifically, these biases, which were also exhibited by artificial neural networks trained to optimize performance on comparable tasks, reflect an aversion to complex explanations (statistical models of data) that depends on specific geometrical features of those models, namely their dimensionality, boundaries, volume, and curvature. Moreover, the simplicity bias persists for human, but not artificial, subjects even on tasks for which the bias is maladaptive and can lower overall performance. Taken together, our results imply that principled notions of statistical model complexity have direct, quantitative relevance to human and machine decision-making, and they establish a new understanding of the computational foundations, and behavioral benefits, of our predilection for inferring simplicity in the latent properties of our complex world.
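    The Bayesian model-selection framework invoked here penalizes flexibility automatically through the marginal likelihood. The toy comparison below (a textbook illustration, not the paper's task) pits a zero-parameter model of coin flips against a one-parameter model and shows the simpler model winning on data it explains nearly as well.

```python
from math import comb, log

def log_evidence_fixed(k, n):
    """Model S: the coin is fair, no free parameters.
    P(sequence | S) = 0.5 ** n for any sequence of n flips."""
    return n * log(0.5)

def log_evidence_flexible(k, n):
    """Model C: unknown bias p with a uniform prior.
    Integrating p^k * (1-p)^(n-k) over p in [0, 1] gives
    1 / ((n + 1) * C(n, k)) for a sequence with k heads."""
    return -log((n + 1) * comb(n, k))

k, n = 12, 20  # 12 heads in 20 flips: slightly unbalanced data
print(log_evidence_fixed(k, n))     # -13.86
print(log_evidence_flexible(k, n))  # -14.79: flexibility is penalized
```

    The flexible model fits the data better at its best parameter value, but averaging over its prior dilutes that fit, so the evidence favors the simpler model unless the data are clearly biased.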

    Functionally Dissociable Influences on Learning Rate in a Dynamic Environment

    Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics, and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals.
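    A minimal sketch of how the first two factors can jointly set a single learning rate, in the spirit of the reduced Bayesian delta-rule models common in this literature (the exact form and names are my assumption, not the paper's model):

```python
def adaptive_update(belief, outcome, cpp, ru):
    """Delta-rule update whose learning rate combines surprise
    (change-point probability, cpp, in [0, 1]) and uncertainty
    (relative uncertainty about the current state, ru, in [0, 1]).
    lr -> 1 when a change is likely or the belief is unreliable;
    lr -> 0 when the belief is stable and trusted."""
    lr = cpp + (1.0 - cpp) * ru
    return belief + lr * (outcome - belief)
```

    The reward-driven factor described above has no place in this normative update, which is what marks it as context-inappropriate.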

    Age differences in learning emerge from an insufficient representation of uncertainty in older adults

    Healthy aging can lead to impairments in learning that affect many laboratory and real-life tasks. These tasks often involve the acquisition of dynamic contingencies, which requires adjusting the rate of learning to environmental statistics. For example, learning rate should increase when expectations are uncertain (uncertainty), when outcomes are surprising (surprise), or when contingencies are more likely to change (hazard rate). In this study, we combine computational modelling with an age-comparative behavioural study to test whether age-related learning deficits emerge from a failure to optimize learning according to these three factors. Our results suggest that the learning deficits observed in healthy older adults are driven by a diminished capacity to represent and use uncertainty to guide learning. These findings provide insight into age-related cognitive changes and demonstrate how learning deficits can emerge from a failure to accurately assess how much should be learned.
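    Continuing the sketch above, the proposed deficit can be caricatured by down-weighting the uncertainty term in the same learning rate (a purely illustrative parameterization of mine, not the study's model):

```python
def learning_rate(cpp, ru, w_uncertainty=1.0):
    """Learning rate combining surprise (cpp) and uncertainty (ru).
    w_uncertainty = 1.0 recovers the normative combination;
    w_uncertainty < 1.0 is a toy stand-in for an insufficient
    representation of uncertainty, yielding under-learning in
    exactly the periods when beliefs should be most malleable."""
    return cpp + (1.0 - cpp) * w_uncertainty * ru
```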