Extending the original position : revisiting the Pattanaik critique of Vickrey/Harsanyi utilitarianism
Harsanyi's original position treats personal identity, upon which each individual's utility depends, as risky. Pattanaik's critique is related to the problem of scaling "state-dependent" von Neumann-Morgenstern utility when determining subjective probabilities. But a unique social welfare functional, incorporating both level and unit interpersonal comparisons, emerges from contemplating an "extended" original position allowing the probability of becoming each person to be chosen. Moreover, the paper suggests the relevance of a "Harsanyi ethical type space", with types as both causes and objects of preference.
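As a rough illustrative rendering (our notation, not necessarily the paper's): Harsanyi's original position evaluates a social state x by the expected utility of someone facing an equal chance of becoming each of the n individuals, whereas the "extended" original position described above also treats those identity probabilities as objects of choice:

\[
W(x) \;=\; \sum_{i=1}^{n} p_i\, u_i(x), \qquad p_i = \tfrac{1}{n} \text{ in Harsanyi's case}, \quad (p_1,\dots,p_n) \text{ chosen in the extended original position.}
\]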
Isolation, Assurance and Rules : Can Rational Folly Supplant Foolish Rationality?
Consider an “isolation paradox” game with many identical players. By definition, conforming to a rule which maximizes average utility is individually a strictly dominated strategy. Suppose, however, that some players think “quasi-magically” in accordance with evidential (but not causal) decision theory. That is, they act as if others’ disposition to conform, or not, is affected by their own behavior, even though they do not actually believe there is a causal link. Standard game theory excludes this. Yet such “rational folly” can sustain “rule utilitarian” cooperative behavior. Comparisons are made with Newcomb’s problem, and with related attempts to resolve the prisoner’s dilemma.
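A stylized numerical illustration of the mechanism (our numbers, not the paper's): suppose each player receives a private gain of 1 from defecting plus 2 times the fraction of the other players who conform, so defecting is strictly dominant even though universal conformity (payoff 2) beats universal defection (payoff 1). A quasi-magical thinker who assigns conditional probabilities Pr(others conform | I conform) = 0.9 and Pr(others conform | I defect) = 0.1 evaluates the choice evidentially:

\[
EU(\text{conform}) = 0 + 2(0.9) = 1.8 \;>\; EU(\text{defect}) = 1 + 2(0.1) = 1.2,
\]

so she conforms despite believing there is no causal link between her own choice and the others'.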
Laboratory Games and Quantum Behaviour: The Normal Form with a Separable State Space
The subjective expected utility (SEU) criterion is formulated for a particular four-person “laboratory game” that a Bayesian rational decision maker plays with Nature, Chance, and an Experimenter who influences what quantum behaviour is observable by choosing an orthonormal basis in a separable complex Hilbert space of latent variables. Nature chooses a state in this basis, along with an observed data series governing Chance's random choice of consequence. When Gleason's theorem holds, imposing quantum equivalence implies that the expected likelihood of any data series w.r.t. prior beliefs equals the trace of the product of appropriate subjective density and likelihood operators.
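In symbols (our notation, offered only as a gloss on the last sentence): writing ρ for the decision maker's subjective density operator on the latent Hilbert space and L_d for the likelihood operator associated with a data series d, the stated conclusion takes the Born-rule-like form

\[
\mathbb{E}_{\rho}\big[\ell(d)\big] \;=\; \operatorname{Tr}\!\big(\rho\, L_d\big).
\]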
History : Sunk Cost, or Widespread Externality?
In an intertemporal Arrow-Debreu economy with a continuum of agents, suppose that the auctioneer sets prices while the government institutes optimal lump-sum transfers period by period. An earlier paper showed how subgame imperfections arise because agents understand how their current decisions, such as those determining investment, will influence future lump-sum transfers. This observation undermines the second efficiency theorem of welfare economics and makes “history” a widespread externality. A two-period model is used to investigate the constrained efficiency properties of different kinds of equilibrium. Possibilities for remedial policy are also discussed.
Beyond Normal Form Invariance : First Mover Advantage in Two-Stage Games with or without Predictable Cheap Talk
Von Neumann (1928) not only introduced a fairly general version of the extensive form game concept; he also hypothesized that only the normal form was relevant to rational play. Yet even in Battle of the Sexes, this hypothesis seems contradicted by players' actual behaviour in experiments. Here a refined Nash equilibrium is proposed for games where one player moves first, and the only other player moves second without knowing the first move. The refinement relies on a tacit understanding of the only credible and straightforward perfect Bayesian equilibrium in a corresponding game allowing a predictable direct form of cheap talk.
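For concreteness, a standard Battle of the Sexes payoff matrix (illustrative numbers only; row player's payoff listed first) is:

              Ballet      Boxing
   Ballet     (2, 1)      (0, 0)
   Boxing     (0, 0)      (1, 2)

Both (Ballet, Ballet) and (Boxing, Boxing) are strict Nash equilibria of this normal form, so normal form invariance alone cannot single out the first mover's preferred equilibrium, the coordination pattern suggested by the first mover advantage in the title.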
Rationality and dynamic consistency under risk and uncertainty
For choice with deterministic consequences, the standard rationality hypothesis is ordinality, i.e., maximization of a weak preference ordering. For choice under risk (resp. uncertainty), preferences are assumed to be represented by the objectively (resp. subjectively) expected value of a von Neumann-Morgenstern utility function. For choice under risk, this implies a key independence axiom; under uncertainty, it implies some version of Savage's sure thing principle. This chapter investigates the extent to which ordinality, independence, and the sure thing principle can be derived from more fundamental axioms concerning behaviour in decision trees. Following Cubitt (1996), these principles include dynamic consistency, separability, and reduction of sequential choice, which can be derived in turn from one consequentialist hypothesis applied to continuation subtrees as well as entire decision trees. Examples of behaviour violating these principles are also reviewed, as are possible explanations of why such violations are often observed in experiments.
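Stated in standard notation (ours, added for reference): the independence axiom requires that for all lotteries p, q, r and all α ∈ (0, 1],

\[
p \succsim q \iff \alpha p + (1-\alpha) r \;\succsim\; \alpha q + (1-\alpha) r,
\]

while one common version of Savage's sure thing principle requires that for all acts f, g, h, h' and every event E,

\[
(f_E, h_{E^c}) \succsim (g_E, h_{E^c}) \iff (f_E, h'_{E^c}) \succsim (g_E, h'_{E^c}),
\]

where (f_E, h_{E^c}) denotes the act that agrees with f on E and with h outside E.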
A three-stage experimental test of revealed preference
A powerful test of Varian's (1982) generalised axiom of revealed preference (GARP) with two goods requires the consumer's budget line to pass through two demand vectors revealed as chosen given other budget sets. In an experiment using this idea, each of 41 student subjects faced a series of 16 successive grouped portfolio selection problems. Each group of selection problems had up to three stages, where later budget sets depended on that subject's choices at earlier stages in the same group. Only 49% of subjects' choices were observed to satisfy GARP exactly, even by our relatively generous nonparametric test.
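As a replication aid, the sketch below implements a generic nonparametric GARP check in the spirit of Varian (1982): build the directly-revealed-preference relation, take its transitive closure, and look for strict reversals. It is our illustration under those standard definitions, not the authors' experimental test code; the function name and example data are invented.

import numpy as np

def satisfies_garp(prices, quantities, tol=1e-9):
    # prices, quantities: arrays of shape (T, n_goods); row t is observation t.
    prices = np.asarray(prices, dtype=float)
    quantities = np.asarray(quantities, dtype=float)
    T = prices.shape[0]

    # expenditure[i, j] = cost of bundle j at observation i's prices.
    expenditure = prices @ quantities.T
    own_cost = np.diag(expenditure).copy()

    # Direct (weak) revealed preference: bundle i is revealed preferred to j
    # if j was affordable when i was chosen.
    preferred = expenditure <= own_cost[:, None] + tol

    # Transitive closure via Floyd-Warshall on the boolean relation.
    for k in range(T):
        preferred |= preferred[:, [k]] & preferred[[k], :]

    # Strict direct preference: j strictly prefers its own bundle to i's
    # if bundle i costs strictly less at observation j's prices.
    strict = expenditure < own_cost[:, None] - tol

    # GARP: "i revealed preferred to j" must never coexist with
    # "j strictly directly revealed preferred to i".
    violation = preferred & strict.T
    return not violation.any()

# Illustrative two-good, two-budget example (invented numbers):
p = [[1.0, 2.0], [2.0, 1.0]]
x = [[4.0, 1.0], [1.0, 4.0]]
print(satisfies_garp(p, x))  # -> True: these choices are GARP-consistent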
Monte Carlo Simulation of Macroeconomic Risk with a Continuum of Agents : The General Case
In large random economies with heterogeneous agents, a standard stochastic framework presumes a random macro state, combined with idiosyncratic micro shocks. This can be formally represented by a random process consisting of a continuum of random variables that are conditionally independent given the macro state. However, this process satisfies a standard joint measurability condition only if there is essentially no idiosyncratic risk at all. Based on iteratively complete product measure spaces, we characterize the validity of the standard stochastic framework via Monte Carlo simulation as well as event-wise measurable conditional probabilities. These general characterizations also allow us to strengthen some earlier results related to exchangeability and independence.
Keywords: large economy; event-wise measurable conditional probabilities; exchangeability; conditional independence; Monte Carlo convergence; Monte Carlo σ-algebra; stochastic macro structure
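A toy numerical illustration of the stochastic structure being characterized (ours, with invented names and distributions; not the paper's construction): a macro shock common to all agents plus idiosyncratic shocks that are independent across agents conditional on that macro state. Sampling a large cross-section Monte Carlo style, the idiosyncratic component averages out while the aggregate risk survives.

import numpy as np

rng = np.random.default_rng(0)

def simulate_cross_section(n_agents, macro_state):
    # Given the macro state, idiosyncratic shocks are i.i.d. across agents.
    return macro_state + rng.normal(0.0, 1.0, size=n_agents)

macro = rng.choice([-1.0, 1.0])  # common (aggregate) shock
for n in (10, 1_000, 100_000):
    sample = simulate_cross_section(n, macro)
    # The cross-sectional average approaches the conditional mean (= macro state):
    # idiosyncratic risk washes out, aggregate risk does not.
    print(n, round(sample.mean(), 4), "conditional mean:", macro)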
Characterization of Risk : A Sharp Law of Large Numbers
An extensive literature in economics uses a continuum of random variables to model individual random shocks imposed on a large population. Let H denote the Hilbert space of square-integrable random variables. A key concern is to characterize the family of all H-valued functions that satisfy the law of large numbers when a large sample of agents is drawn at random. We use the iterative extension of an infinite product measure introduced in [6] to formulate a “sharp” law of large numbers. We prove that an H-valued function satisfies this law if and only if it is both Pettis-integrable and norm integrably bounded.
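Roughly, in symbols (our paraphrase, with the convergence mode left informal): for an H-valued function f on the agent space (I, λ), and agents i_1, i_2, … sampled independently at random according to λ, the law in question asks that

\[
\frac{1}{n} \sum_{k=1}^{n} f(i_k) \;\longrightarrow\; \int_I f \, d\lambda \quad \text{as } n \to \infty,
\]

with the limit understood as the Pettis integral; the stated theorem is that this holds exactly when f is Pettis-integrable and norm integrably bounded.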
Catastrophic risk, rare events, and black swans : could there be a countably additive synthesis?
Catastrophic risk, rare events, and black swans are phenomena that require special attention in normative decision theory. Several papers by Chichilnisky integrate them into a single framework with finitely additive subjective probabilities.
Some precursors include: (i) following Jones-Lee (1974), undefined willingness to pay to avoid catastrophic risk; (ii) following Rényi (1955, 1956) and many successors, rare events whose probability is infinitesimal. Also, when rationality is bounded, enlivened decision trees can represent a dynamic process involving successively unforeseen “true black swan” events. One conjectures that a different integrated framework could be developed to include these three phenomena while preserving countably additive probabilities.