26 research outputs found

    On the Origins of Suboptimality in Human Probabilistic Inference

    Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically-optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead, we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors and by variability in the decision process, which we represent as a noisy or stochastic posterior.
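    The Gaussian-prior baseline against which suboptimality is measured can be sketched in a few lines. This is a generic illustration of precision-weighted Bayesian integration, not the authors' observer model; all numbers below are made up:

```python
def bayes_estimate(cue, prior_mean, prior_var, cue_var):
    """Posterior mean for a Gaussian prior and a Gaussian cue likelihood:
    a precision-weighted average of the cue and the prior mean."""
    w = prior_var / (prior_var + cue_var)  # weight on the cue
    return w * cue + (1.0 - w) * prior_mean

# A noisy cue is pulled toward the prior mean; a reliable cue dominates.
print(bayes_estimate(cue=2.0, prior_mean=0.0, prior_var=1.0, cue_var=1.0))   # → 1.0
print(bayes_estimate(cue=2.0, prior_mean=0.0, prior_var=1.0, cue_var=0.01))  # close to the cue
```

    A non-Gaussian (e.g. bimodal) prior admits no such closed form, which is where the computational burden investigated here comes in.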

    Decision making in dynamic and interactive environments based on cognitive hierarchy theory, Bayesian inference, and predictive control

    In this paper, we describe an integrated framework for autonomous decision making in a dynamic and interactive environment. We model the interactions between the ego agent and its operating environment as a two-player dynamic game, and integrate cognitive behavioral models, Bayesian inference, and receding-horizon optimal control to define a dynamically evolving decision strategy for the ego agent. Simulation examples representing autonomous vehicle control in three traffic scenarios, in which the autonomous ego vehicle interacts with a human-driven vehicle, are reported.
    Comment: 2019 IEEE Conference on Decision and Control
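    One ingredient of such a framework, the Bayesian inference over the other agent's behavioral model, can be sketched as a discrete belief update. The two driver models and their action likelihoods below are hypothetical, purely for illustration:

```python
import numpy as np

def update_belief(belief, likelihoods):
    """One Bayesian step: posterior ∝ prior × likelihood of the observed action."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])  # P(aggressive), P(cautious)
# An observed hard acceleration is more likely under the aggressive model:
belief = update_belief(belief, np.array([0.8, 0.2]))
print(belief)  # → [0.8 0.2]
```

    The ego agent's receding-horizon controller would then replan at each step against this updated belief.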

    Generalisation of prior information for rapid Bayesian time estimation

    To enable effective interaction with the environment, the brain combines noisy sensory information with expectations based on prior experience. There is ample evidence showing that humans can learn statistical regularities in sensory input and exploit this knowledge to improve perceptual decisions and actions. However, fundamental questions remain regarding how priors are learned and how they generalise to different sensory and behavioural contexts. In principle, maintaining a large set of highly specific priors may be inefficient and restrict the speed at which expectations can be formed and updated in response to changes in the environment. On the other hand, priors formed by generalising across varying contexts may not be accurate. Here we exploit rapidly induced contextual biases in duration reproduction to reveal how these competing demands are resolved during the early stages of prior acquisition. We show that observers initially form a single prior by generalising across duration distributions coupled with distinct sensory signals. In contrast, they form multiple priors if distributions are coupled with distinct motor outputs. Together, our findings suggest that rapid prior acquisition is facilitated by generalisation across experiences of different sensory inputs, but organised according to how that sensory information is acted upon.
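    The contextual bias at the heart of this design can be illustrated with a toy shrinkage model: reproductions regress toward the mean of whichever prior is in play. All durations and the weight `w` below are made-up numbers, not the paper's estimates:

```python
def reproduce(duration, prior_mean, w=0.7):
    """Reproduction pulled toward the prior mean (w = reliance on the measurement)."""
    return w * duration + (1 - w) * prior_mean

short_ctx_mean, long_ctx_mean = 500.0, 900.0        # ms, two duration contexts
pooled_mean = (short_ctx_mean + long_ctx_mean) / 2  # a single generalised prior

# The same 500 ms stimulus is overestimated under the pooled prior,
# but reproduced accurately under a context-specific prior:
print(reproduce(500.0, pooled_mean))     # pulled toward 700 ms
print(reproduce(500.0, short_ctx_mean))  # unbiased
```

    Whether observers show the first pattern or the second is the behavioral signature that distinguishes a single generalised prior from multiple context-specific ones.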

    The energy metabolic footprint of predictive processing in the human brain

    To maximize our chances of survival and procreation, we need to process our environment in a highly sophisticated and accurate manner. In their limit, these two demands are mutually exclusive: While better sound localization, quicker reflexes or more accurate vision could improve survivability, the necessary energy consumption might not be sustainable. Luckily, our sensory systems strike an impressive balance between performance and energetic cost. In a process that is both active and passive, we learn the rules that determine our experience and use them to form expectations. Efficient brain activity is then achieved by limiting the forward transmission of signals to deviations from what we predicted. In the visual domain, this means that our perception is dominated by our expectations when we are in a familiar environment. Research in cognitive neuroscience has shown that expected input elicits weaker brain activity than surprising input, without any behavioral disadvantages. However, knowledge about the associated energetic efficiency is limited by three gaps in the current literature. First, conventional imaging techniques do not provide direct measurements of energy metabolism. Second, previous research has focused on localizing areas of maximal effect, potentially missing weaker but more widespread patterns. Third, our knowledge about the world is imperfect, leading to uncertain expectations; this has rarely been accounted for. Neuronal activity is fueled by ATP, most of which is produced by chemical reactions that require oxygen. In the present work, I assessed energy metabolism with a novel imaging method that measures the rate of oxygen consumption across all parts of the brain. I used an experimental design in which participants saw visual object sequences that were either predictable, random, or surprising. Behavioral tests indicated that predictable sequences were learned without any feedback, which resulted in anticipation of upcoming objects. I further found that participants varied in the confidence of their expectations. This had a major impact on oxygen consumption when viewing predictable sequences: The lowest energy usage was found for high levels of confidence. This effect was not limited to sensory regions but extended across large parts of the brain. Interestingly, my results suggest that confidence led to energy savings even when the visual input was objectively random. In conclusion, this work provides the first evidence that our expectations are a major promoter of efficient processing, which is crucial for any organism with limited energy availability.
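    The efficiency principle described here, forwarding only deviations from expectation, can be caricatured in two lines. This is a schematic illustration, not a model of cortical signaling:

```python
def forward_signal(observed, predicted):
    """Only the prediction error is transmitted to the next stage."""
    return observed - predicted

# A confident, accurate expectation leaves little to transmit (cheap);
# a surprising input propagates a large error (costly).
print(abs(forward_signal(1.00, 0.95)))  # small residual
print(abs(forward_signal(1.00, 0.10)))  # large residual
```

    Under this scheme, the metabolic cost of processing scales with how surprising the input is, which is the relationship the oxygen-consumption measurements probe.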

    A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise

    Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
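    The signal-dependent (scalar) noise that makes this task hard can be sketched as follows; the Weber fraction and durations below are hypothetical. With noise proportional to the measured duration, the optimal fusion weights themselves depend on the measurements:

```python
WEBER = 0.15  # hypothetical Weber fraction: sigma(t) = WEBER * t

def fuse(m1, m2):
    """Inverse-variance weighted average of two noisy time measurements
    whose standard deviation grows with the measured duration (scalar noise)."""
    v1, v2 = (WEBER * m1) ** 2, (WEBER * m2) ** 2
    w1 = v2 / (v1 + v2)  # shorter measurement -> smaller variance -> larger weight
    return w1 * m1 + (1.0 - w1) * m2

# The fused estimate is pulled below the midpoint, toward the more
# reliable (shorter) measurement:
print(fuse(800.0, 1200.0))  # ~923 ms, not 1000 ms
```

    Because the weights change with each trial's measurements, no fixed linear update rule can reproduce this behavior, which is what puts the task beyond the reach of simple algorithms.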

    Discovering common hidden causes in sequences of events
