
    Active inference, evidence accumulation, and the urn task

    Deciding how much evidence to accumulate before making a decision is a problem we and other animals often face, but one that is not completely understood. This issue is particularly important because a tendency to sample less information (often known as reflection impulsivity) is a feature of several psychopathologies, such as psychosis. A formal understanding of information sampling may therefore clarify the computational anatomy of psychopathology. In this theoretical letter, we consider evidence accumulation in terms of active (Bayesian) inference using a generic model of Markov decision processes. Here, agents are equipped with beliefs about their own behavior--in this case, that they will make informed decisions. Normative decision making is then modeled using variational Bayes to minimize surprise about choice outcomes. Under this scheme, different facets of belief updating map naturally onto the functional anatomy of the brain (at least at a heuristic level). Of particular interest is the key role played by the expected precision of beliefs about control, which we have previously suggested may be encoded by dopaminergic neurons in the midbrain. We show that manipulating expected precision strongly affects how much information an agent characteristically samples, and thus provides a possible link between impulsivity and dopaminergic dysfunction. Our study therefore represents a step toward understanding evidence accumulation in terms of neurobiologically plausible Bayesian inference and may cast light on why this process is disordered in psychopathology.
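    The abstract's central claim, that the expected precision of beliefs about control modulates how much evidence is gathered, can be illustrated with a toy simulation. The sketch below is not the authors' Markov decision process model: it is a minimal urn task in which a hypothetical precision parameter gamma scales a softmax over crude action values (commit to a guess vs. draw another bead), so that changing gamma changes how many beads the agent characteristically samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(gamma, p_true=0.6, max_draws=30):
    """One urn trial: draw beads until the agent commits to a guess.

    The urn is red-majority: each bead is red with probability p_true.
    The agent holds a uniform prior over red- vs. blue-majority and
    updates a Bernoulli posterior after every draw.  Action selection
    is a softmax over crude values, scaled by the precision gamma.
    """
    n_red = 0
    for n_draws in range(1, max_draws + 1):
        n_red += rng.random() < p_true                 # draw one bead
        # posterior that the urn is red-majority (uniform prior)
        log_odds = (2 * n_red - n_draws) * np.log(p_true / (1 - p_true))
        p_red = 1.0 / (1.0 + np.exp(-log_odds))
        confidence = max(p_red, 1.0 - p_red)
        # value of deciding ~ expected accuracy; value of sampling ~
        # residual uncertainty plus a bonus (a crude epistemic stand-in)
        values = np.array([confidence, 1.5 - confidence])
        probs = np.exp(gamma * values)
        probs /= probs.sum()
        if rng.random() < probs[0]:                    # commit now
            return n_draws
    return max_draws

for gamma in (1.0, 4.0, 16.0):
    mean_draws = np.mean([run_trial(gamma) for _ in range(500)])
    print(f"gamma = {gamma:4.1f}  ->  mean draws before deciding: {mean_draws:.2f}")
```

    Even in this stripped-down form, sweeping gamma shifts the characteristic number of draws, which is the qualitative pattern the letter links to reflection impulsivity.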

    Collective states in social systems with interacting learning agents

    We consider a social system of interacting heterogeneous agents with learning abilities, a model close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. Given a fixed price, agents decide repeatedly whether or not to buy a unit of a good, so as to maximize their expected utilities. We show that the equilibrium reached by the system depends on the nature of the information agents use to estimate their expected utilities.
    Comments: 18 pages, 26 figures
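    To make the mechanism concrete, here is a minimal, hypothetical sketch (not the paper's model): agents follow a zero-temperature, RFIM-style best-response rule in which the random field is the idiosyncratic willingness to pay, and the initial state stands in for the information agents condition on. With a strong enough social coupling, the same price supports two distinct collective states.

```python
import numpy as np

rng = np.random.default_rng(1)
n, price, coupling = 1000, 0.8, 0.6
h = rng.normal(0.5, 0.1, n)            # idiosyncratic willingness to pay

def equilibrate(s, sweeps=200):
    """Synchronous best-response dynamics: agent i buys (s_i = 1) when
    her willingness to pay plus a social term, coupling * fraction of
    buyers, exceeds the posted price -- an RFIM-style threshold update."""
    for _ in range(sweeps):
        s_new = (h + coupling * s.mean() > price).astype(int)
        if np.array_equal(s_new, s):   # fixed point reached
            break
        s = s_new
    return s

# Identical parameters, different initial beliefs about what others do:
# the dynamics settle into two distinct collective states.
print("pessimistic start:", equilibrate(np.zeros(n, dtype=int)).mean())
print("optimistic start: ", equilibrate(np.ones(n, dtype=int)).mean())
```

    The bistability here is the toy analogue of the abstract's conclusion: which equilibrium the system reaches depends on what agents initially believe about the behavior of others.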

    Learning the Preferences of Ignorant, Inconsistent Agents

    An important use of machine learning is to learn what people value. What posts or photos should a user be shown? Which jobs or activities would a person find rewarding? In each case, observations of people's past choices can inform our inferences about their likes and preferences. If we assume that choices are approximately optimal according to some utility function, we can treat preference inference as Bayesian inverse planning. That is, given a prior on utility functions and some observed choices, we invert an optimal decision-making process to infer a posterior distribution on utility functions. However, people often deviate from approximate optimality. They have false beliefs, their planning is sub-optimal, and their choices may be temporally inconsistent due to hyperbolic discounting and other biases. We demonstrate how to incorporate these deviations into algorithms for preference inference by constructing generative models of planning for agents who are subject to false beliefs and time inconsistency. We explore the inferences these models make about preferences, beliefs, and biases. We present a behavioral experiment in which human subjects perform preference inference given the same observations of choices as our model. Results show that human subjects (like our model) explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.
    Comments: AAAI 2016
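    As a toy illustration of the point (an assumed setup, not the paper's experiments): an inverse planner that models the agent as a hyperbolic discounter can reach a different conclusion about underlying preferences than one that assumes approximate optimality, given the same observed choices. All names and parameters below (u_b, k, beta) are illustrative.

```python
import numpy as np

# Two options: A pays u_a immediately, B pays u_b after a delay d.
# The agent soft-maximizes discounted value; hyperbolic discounting
# shrinks B's value by 1 / (1 + k * d).
def choice_prob_B(u_b, u_a=1.0, d=5.0, k=0.0, beta=2.0):
    v_a, v_b = u_a, u_b / (1.0 + k * d)
    return 1.0 / (1.0 + np.exp(-beta * (v_b - v_a)))

# Observed data: the agent picked B on only 1 of 10 occasions.
n, n_B = 10, 1
u_grid = np.linspace(0.0, 3.0, 301)        # uniform prior over u_b

def posterior_mean(k):
    p = choice_prob_B(u_grid, k=k)
    like = p**n_B * (1 - p)**(n - n_B)     # Bernoulli likelihood
    post = like / like.sum()
    return (u_grid * post).sum()

print("inferred u_b assuming an optimal agent (k=0):", round(posterior_mean(0.0), 2))
print("inferred u_b assuming a discounter     (k=1):", round(posterior_mean(1.0), 2))
```

    Assuming optimality, the repeated choice of A forces the inference that B is worth little; modeling the bias, the same choices stay consistent with a much higher value for B, which is the qualitative effect the abstract describes.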

    Rationality of Belief Or: Why Savage's axioms are neither necessary nor sufficient for rationality, Second Version

    Economic theory reduces the concept of rationality to internal consistency. The practice of economics, however, distinguishes between rational and irrational beliefs. There is therefore an interest in a theory of rational beliefs, and of the process by which beliefs are generated and justified. We argue that the Bayesian approach is unsatisfactory for this purpose, for several reasons. First, the Bayesian approach begins with a prior, and models only a very limited form of learning, namely, Bayesian updating. Thus, it is inherently incapable of describing the formation of prior beliefs. Second, there are many situations in which there is not sufficient information for an individual to generate a Bayesian prior. It follows that the Bayesian approach is neither sufficient nor necessary for the rationality of beliefs.
    Keywords: Decision making, Bayesian, Behavioral Economics
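    The first objection is easy to see in a two-line example: Bayesian updating prescribes how a Beta prior over a coin's bias changes with data, but is silent on where the prior itself comes from, so two agents with different priors can both count as "rational" while holding very different posteriors after the same evidence.

```python
from fractions import Fraction

# Beta-Bernoulli updating for a coin: Beta(a, b) prior, observe flips.
# The machinery says how beliefs change with data; it says nothing
# about how (a, b) were formed -- the gap the abstract presses on.
def update(a, b, heads, tails):
    return a + heads, b + tails

for a, b in [(1, 1), (50, 10)]:            # two different priors
    a2, b2 = update(a, b, heads=3, tails=7)
    print(f"prior Beta({a},{b}) -> posterior mean {Fraction(a2, a2 + b2)}")
```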

    Rationality of Belief Or: Why Savage's axioms are neither necessary nor sufficient for rationality, Second Version

    Economic theory reduces the concept of rationality to internal consistency. As far as beliefs are concerned, rationality is equated with having a prior belief over a “Grand State Space”, describing all possible sources of uncertainty. We argue that this notion is too weak in some senses and too strong in others. It is too weak because it does not distinguish between rational and irrational beliefs. Relatedly, the Bayesian approach, when applied to the Grand State Space, is inherently incapable of describing the formation of prior beliefs. On the other hand, this notion of rationality is too strong because there are many situations in which there is not sufficient information for an individual to generate a Bayesian prior. It follows that the Bayesian approach is neither sufficient nor necessary for the rationality of beliefs.
    Keywords: Decision making, Bayesian, Behavioral Economics