51 research outputs found

    Decision under Uncertainty: the Classical Models

    This chapter of a collective book is dedicated to classical decision models under uncertainty, i.e. situations where events do not have "objective" probabilities with which the Decision Maker agrees. We successively present the two main theories, their axiomatics, the interpretation and justification of their axioms, and their main properties: first, the general model of Subjective Expected Utility due to Savage (Savage, 1954); second, the Anscombe-Aumann (1963) theory, set in a different framework. Both theories enforce the universal use of a probabilistic representation. We then discuss this issue in connection with the experimental result known as the Ellsberg paradox. Keywords: uncertainty, subjective probability, Subjective Expected Utility, Savage, Anscombe and Aumann, Ellsberg paradox.
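The contradiction behind the Ellsberg paradox can be checked mechanically. In the standard three-color urn (30 red balls, 60 black or yellow in unknown proportion), the typical pattern of choices cannot be rationalized by any single subjective probability for "black"; the short scan below is an illustration of that point, not material from the chapter:

```python
# Ellsberg three-color urn: 30 red balls and 60 black-or-yellow balls in
# unknown proportion. The typical pattern is to bet on red rather than
# black, yet on black-or-yellow rather than red-or-yellow (bets pay 1 if
# won, 0 otherwise).
def rationalizable(p_black):
    p_red = 1.0 / 3.0
    p_yellow = 1.0 - p_red - p_black
    prefers_red_to_black = p_red > p_black            # choice I over II
    prefers_by_to_ry = p_black + p_yellow > p_red + p_yellow  # IV over III
    return prefers_red_to_black and prefers_by_to_ry

# no subjective probability of black in [0, 2/3] rationalizes both choices:
# the first requires p_black < 1/3, the second p_black > 1/3
assert not any(rationalizable(k * (2 / 3) / 1000) for k in range(1001))
```

Any single probability measure thus fails on one of the two choices, which is exactly the tension with Subjective Expected Utility discussed in the chapter.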

    Regular updating

    We study the Full Bayesian Updating rule for convex capacities. Following a route suggested by Jaffray (1992), we define some properties one may want to impose on the updating process, and identify the classes of (convex and strictly positive) capacities that satisfy these properties for the Full Bayesian updating rule.
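As a concrete illustration (not taken from the paper), the closed-form Full Bayesian update of a capacity ν on an event E, ν_E(A) = ν(A∩E) / (ν(A∩E) + 1 − ν(A∪E^c)), can be sketched in a few lines; the ε-contamination capacity used below is a standard example of a convex, strictly positive capacity, and all numbers are hypothetical:

```python
from itertools import combinations

def powerset(states):
    s = list(states)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def full_bayesian_update(nu, omega, event):
    # Full Bayesian (Fagin-Halpern) update of a capacity nu given event E:
    #   nu_E(A) = nu(A ∩ E) / (nu(A ∩ E) + 1 - nu(A ∪ E^c)).
    # For a convex capacity this is the lower envelope of the Bayesian
    # updates of all probabilities in the core of nu.
    e_comp = omega - event
    updated = {}
    for a in powerset(event):
        num = nu[a]
        den = num + 1.0 - nu[a | e_comp]
        updated[a] = num / den if den > 0 else 0.0
    return updated

# hypothetical example: epsilon-contamination of the uniform prior on
# {1, 2, 3}, i.e. nu(A) = (1 - eps) * P(A) for A != omega, nu(omega) = 1
omega = frozenset({1, 2, 3})
eps = 0.2
nu = {a: (1.0 if a == omega else (1 - eps) * len(a) / 3) for a in powerset(omega)}

cond = full_bayesian_update(nu, omega, frozenset({1, 2}))
# cond[frozenset({1})] is approx. 4/11, the worst-case conditional
# probability of state 1 over the core of nu
```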

    Decision making with belief functions: Compatibility and incompatibility with the sure-thing principle

    This article studies situations in which information is ambiguous and only part of it can be probabilized. It is shown that the information can be modeled through belief functions if and only if the nonprobabilizable information is subject to the principles of complete ignorance. Next the representability of decisions by belief functions on outcomes is justified by means of a neutrality axiom. The natural weakening of Savage's sure-thing principle to unambiguous events is examined and its implications for decision making are identified.
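For readers unfamiliar with belief functions, here is a small sketch (with hypothetical masses) of how partly probabilized information is encoded: the probabilized part puts mass on specific outcomes, while the nonprobabilizable part leaves mass on larger sets, as under complete ignorance:

```python
def belief(mass, event):
    # bel(A) = sum of the masses of all focal sets contained in A
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, omega, event):
    # pl(A) = 1 - bel(complement of A)
    return 1.0 - belief(mass, omega - event)

# hypothetical partly probabilized information: outcome "low" is supported
# with mass 0.6, while mass 0.4 remains on the whole frame (the complete
# ignorance component)
omega = frozenset({"low", "mid", "high"})
mass = {frozenset({"low"}): 0.6, omega: 0.4}

bel_low = belief(mass, frozenset({"low"}))              # 0.6
pl_low = plausibility(mass, omega, frozenset({"low"}))  # 1.0
```

The gap between belief (0.6) and plausibility (1.0) is exactly the ambiguity left by the unprobabilized mass.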

    Duhemian Themes in Expected Utility Theory

    This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least in the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation. From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but not without major inefficiencies. Embodying explicit rationality considerations in experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources on minor findings, and sped up the Duhemian process of groping towards a choice among competing theories.

    Dynamic Choice under Ambiguity

    This paper analyzes sophisticated dynamic choice for ambiguity-sensitive decision makers. It characterizes Consistent Planning via axioms on preferences over decision trees. Furthermore, it shows how to elicit conditional preferences from prior preferences. The key axiom is a weakening of Dynamic Consistency, deemed Sophistication. The analysis accommodates arbitrary decision models and updating rules. Hence, the results indicate that (i) ambiguity attitudes, (ii) updating rules, and (iii) sophisticated dynamic choice are mutually orthogonal aspects of preferences. As an example, a characterization of prior-by-prior Bayesian updating and Consistent Planning for arbitrary maxmin-expected utility preferences is presented. The resulting sophisticated MEU preferences are then used to analyze the value of information under ambiguity; a basic trade-off between information acquisition and commitment is highlighted.
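The two ingredients combined in the paper's example, prior-by-prior Bayesian updating and the maxmin-expected-utility criterion, can be sketched as follows; the three-state act and the prior set are hypothetical, chosen only to mirror an Ellsberg-style setting:

```python
def maxmin_eu(act, priors):
    # maxmin expected utility: worst-case expected payoff over the prior set
    return min(sum(p[s] * act[s] for s in act) for p in priors)

def bayes(p, event):
    # ordinary Bayesian update of a single prior on an event (set of states)
    pe = sum(p[s] for s in event)
    return {s: (p[s] / pe if s in event else 0.0) for s in p}

def prior_by_prior(priors, event):
    # prior-by-prior (full Bayesian) updating: update every prior separately
    return [bayes(p, event) for p in priors]

# hypothetical Ellsberg-style prior set: P(red) = 1/3 is known, the split
# between black and yellow is not
priors = [{"r": 1 / 3, "b": q, "y": 2 / 3 - q} for q in (0.0, 1 / 3, 2 / 3)]
bet_on_red = {"r": 1.0, "b": 0.0, "y": 0.0}

ex_ante = maxmin_eu(bet_on_red, priors)  # worst-case value 1/3
conditional = maxmin_eu(bet_on_red, prior_by_prior(priors, {"r", "b"}))
```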

    Implementing resolute choice under uncertainty

    The adaptation to situations of sequential choice under uncertainty of decision criteria which deviate from (subjective) expected utility raises the problem of ensuring the selection of a nondominated strategy. In particular, when following the suggestion of Machina and McClennen of giving up separability (also known as consequentialism), which requires the choice of a substrategy in a subtree to depend only on data relevant to that subtree, one must renounce the use of dynamic programming, since Bellman's principle is no longer valid. An interpretation of McClennen's resolute choice, based on cooperation between the successive Selves of the decision maker, is proposed. Implementations of resolute choice which prevent money pumps, negative prices of information or, more generally, choices of dominated strategies, while remaining computationally tractable, are proposed.
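A minimal sketch of the resolute idea, under assumptions of my own: since folding back subtree by subtree is no longer licensed for a non-separable criterion, one can enumerate complete strategies and rank them ex ante, here with a maxmin criterion over a hypothetical two-prior set (all names and numbers are illustrative, not from the paper):

```python
from itertools import product

def maxmin_value(payoffs, priors):
    # worst-case expected payoff of a state-contingent payoff vector
    return min(sum(p[s] * payoffs[s] for s in payoffs) for p in priors)

# one decision after observing E = {s1, s2}, one after observing F = {s3};
# a complete strategy fixes one action at each information set
payoffs_E = {"a": {"s1": 1.0, "s2": 0.0}, "b": {"s1": 0.0, "s2": 1.0}}
payoffs_F = {"a": {"s3": 1.0}, "b": {"s3": 0.0}}

def strategy_payoff(x, y):
    # glue the two contingent choices into one state-contingent payoff
    return {**payoffs_E[x], **payoffs_F[y]}

priors = [{"s1": 0.5, "s2": 0.25, "s3": 0.25},
          {"s1": 0.25, "s2": 0.5, "s3": 0.25}]

# resolute choice: rank COMPLETE strategies ex ante, with no folding back
strategies = list(product(payoffs_E, payoffs_F))
best = max(strategies, key=lambda st: maxmin_value(strategy_payoff(*st), priors))
```

Exhaustive enumeration is exponential in the number of information sets, which is why the tractable implementations announced in the abstract matter.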

    Réseaux bayésiens (Bayesian Networks)

