    Anchoring in Deliberations

    Deliberation is a standard procedure for reaching decisions in small to moderately sized groups. It has the advantage that group members can learn from each other and that, at the end, a consensus often emerges that everybody endorses. But deliberation also has a number of disadvantages. For example, which consensus is reached usually depends on the order in which the group members speak. More specifically, the group member who speaks first often has a disproportionately high impact on the final decision: she anchors the deliberation process. While the anchoring effect undoubtedly appears in real deliberating groups, we ask whether it also appears in groups whose members are truth-seeking and rational, in the sense that they properly take into account the information provided by their fellow group members by updating their beliefs according to plausible rules. To answer this question, and to make some progress towards explaining the anchoring effect, a formal model is constructed and analyzed. Using this model, we study the anchoring effect in homogeneous groups (i.e. groups whose members consider each other equally reliable), for which we provide analytical results, and in inhomogeneous groups, for which we provide simulation results.
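
    A minimal sketch of the order-sensitive dynamics at issue, assuming a sequential weighted-averaging update in the style of Lehrer-Wagner models (the update rule, the weights and all parameters below are illustrative assumptions, not the paper's actual model). Changing who speaks first changes the consensus:

```python
import random

def deliberate(beliefs, weights, rounds=10):
    """Each round, every agent announces their credence in turn and the
    others move toward it by weighted averaging (an assumed update rule)."""
    n = len(beliefs)
    beliefs = beliefs[:]
    for _ in range(rounds):
        for speaker in range(n):
            announced = beliefs[speaker]
            for hearer in range(n):
                if hearer != speaker:
                    w = weights[hearer][speaker]  # hearer's trust in speaker
                    beliefs[hearer] = (1 - w) * beliefs[hearer] + w * announced
    return beliefs

random.seed(0)
n = 5
initial = [random.random() for _ in range(n)]
weights = [[0.2] * n for _ in range(n)]  # homogeneous: equal mutual trust

print("consensus:     ", round(deliberate(initial, weights)[0], 3))
# Same opinions, reversed speaking order: a different consensus emerges,
# with the first speaker exerting a disproportionate pull (anchoring).
print("reversed order:", round(deliberate(initial[::-1], weights)[0], 3))
```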

    Probabilities with Gaps and Gluts

    Belnap-Dunn logic (BD), sometimes also known as First Degree Entailment, is a four-valued propositional logic that complements the classical truth values True and False with two non-classical truth values, Neither and Both. The latter two account for the possibility that the available information is incomplete or provides contradictory evidence. In this paper, we present a probabilistic extension of BD that permits agents to have probabilistic beliefs about the truth and falsity of a proposition. We provide a sound and complete axiomatization for the framework and identify policies for conditionalization and aggregation. Concretely, we introduce four-valued counterparts of Bayesian and Jeffrey updating, and we suggest mechanisms for aggregating information from different sources.
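
    To make the representation concrete, here is a toy sketch of a BD-style probabilistic belief for a single proposition, with derived degrees of support for its truth and falsity, plus aggregation by weighted linear pooling (the pooling rule is one simple policy assumed for illustration; the paper's own mechanisms may differ):

```python
# A BD-probability for one proposition: a mass function over the four
# Belnap-Dunn truth values (True, False, Neither, Both) summing to one.
TRUTH_VALUES = ("T", "F", "N", "B")

def degree_of_truth(m):
    return m["T"] + m["B"]     # evidence for: mass on True plus Both

def degree_of_falsity(m):
    return m["F"] + m["B"]     # evidence against: mass on False plus Both

def pool(sources, weights):
    """Aggregate sources by weighted linear pooling (an assumed policy)."""
    return {v: sum(w * m[v] for m, w in zip(sources, weights))
            for v in TRUTH_VALUES}

m1 = {"T": 0.4, "F": 0.1, "N": 0.1, "B": 0.4}   # contradictory evidence
m2 = {"T": 0.2, "F": 0.2, "N": 0.6, "B": 0.0}   # incomplete evidence

merged = pool([m1, m2], [0.5, 0.5])
print(merged)
print("support for p:   ", round(degree_of_truth(merged), 2))
print("support against p:", round(degree_of_falsity(merged), 2))
```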

    Learning from Conditionals

    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: 'How should a rational Bayesian agent update their beliefs upon learning an indicative conditional?' A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here, we resist this trend and argue that a core set of widely accepted Bayesian norms is sufficient to identify a normatively privileged updating procedure for this kind of learning. Along the way, we justify a privileged formalisation of the notion of 'epistemic conservativity', offer a new analysis of the Judy Benjamin problem and emphasise the distinction between interpreting the content of new evidence and updating one's beliefs on the basis of that content.
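
    As a rough illustration of what an epistemically conservative update can look like (the abstract does not settle whether this is the procedure the authors defend), the sketch below minimises Kullback-Leibler divergence from the prior subject to a constraint extracted from a learned conditional, in a four-world Judy Benjamin-style example. The uniform prior, the reading of the conditional as P(B|A) = 0.75 and all numbers are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Worlds: (A&B, A&~B, ~A&B, ~A&~B), with an illustrative uniform prior.
prior = np.array([0.25, 0.25, 0.25, 0.25])
target = 0.75   # the conditional is read as the constraint P(B|A) = 0.75

def kl(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / prior)))

constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},
    # P(B|A) = target  <=>  q[A&B] = target * (q[A&B] + q[A&~B])
    {"type": "eq", "fun": lambda q: q[0] - target * (q[0] + q[1])},
]
posterior = minimize(kl, prior, bounds=[(0.0, 1.0)] * 4,
                     constraints=constraints).x

print("posterior:", posterior.round(4))
print("P(B|A) =", round(posterior[0] / (posterior[0] + posterior[1]), 4))
```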

    Learning Probabilities: Towards a Logic of Statistical Learning

    We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating with each measure a plausibility number, as a way to go beyond what is known with certainty and to represent the agent's beliefs about probability. There are a number of standard examples: Shannon entropy, centre of mass, etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' rule) but leaves the given set of measures unchanged; the second shrinks the set of measures without changing their plausibility. Beliefs are defined, as in belief revision theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning. (In Proceedings TARK 2019, arXiv:1907.0833)
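
    A toy version of learning type (1), with the set of measures discretised to eleven candidate biases and a flat initial plausibility map (both illustrative assumptions). Each draw rescales plausibilities by likelihood, and the most plausible hypothesis converges to the true bias:

```python
import random

random.seed(1)

# Discretised set of candidate measures: eleven possible biases for "red",
# each carrying a plausibility score (a flat initial map, an assumption).
hypotheses = [i / 10 for i in range(11)]
plausibility = {h: 1.0 for h in hypotheses}

true_bias = 0.7

# 'Plausibilistic' Bayes: each draw rescales a hypothesis's plausibility by
# the likelihood it gives that draw; the set of measures itself never shrinks.
for _ in range(500):
    red = random.random() < true_bias
    for h in hypotheses:
        plausibility[h] *= h if red else (1 - h)
    top = max(plausibility.values())
    for h in hypotheses:
        plausibility[h] /= top          # rescale; plausibility is ordinal

print("most plausible bias:", max(hypotheses, key=plausibility.get))  # 0.7
```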

    Voting, Deliberation and Truth

    There are various ways to reach a group decision on a factual yes-no question. One way is to vote and let the majority decide. This procedure receives some epistemological support from the Condorcet Jury Theorem. Alternatively, the group members may prefer to deliberate until they reach a decision that everybody endorses - a consensus. While the latter procedure has the advantage that it makes everybody happy (as everybody endorses the consensus), it has the disadvantage that it is difficult to implement, especially for larger groups. Besides, the resulting consensus may be far from the truth. And so we ask: is deliberation truth-conducive in the sense that majority voting is? To address this question, we construct a highly idealized model of a particular deliberation process, inspired by the movie Twelve Angry Men, and show that the answer is "yes": deliberation procedures can be truth-conducive, just as the voting procedure is. We then explore, again on the basis of our model and using agent-based simulations, under which conditions it is epistemically better to deliberate than to vote. Our analysis shows that there are contexts in which deliberation is epistemically preferable, and we provide reasons why this is so.
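
    The voting side of the comparison is easy to illustrate. A minimal Monte Carlo sketch of the Condorcet Jury Theorem setting (competence p = 0.6 and all parameters are illustrative): with individual competence above one half, majority reliability grows with group size:

```python
import random

random.seed(2)

def majority_correct(n, p, trials=20000):
    """Monte Carlo estimate of the chance that a majority of n independent
    voters, each correct with probability p, answers a yes-no question
    correctly (the Condorcet Jury Theorem setting)."""
    hits = 0
    for _ in range(trials):
        if sum(random.random() < p for _ in range(n)) > n / 2:
            hits += 1
    return hits / trials

# With individual competence p > 1/2, group reliability tends to 1.
for n in (1, 11, 51, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```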

    Determining Maximal Entropy Functions for Objective Bayesian Inductive Logic

    According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of a limit in entropy and show that, if the set of probability functions satisfying the premisses contains a limit in entropy, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are categorical sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found by simply conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
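
    The conditionalisation result for categorical premisses can be illustrated on a small propositional fragment (the three-atom language and the premiss 'a or b' are illustrative choices): conditioning the uniform function on the premiss yields the maximal entropy function satisfying it.

```python
from itertools import product
from math import log

# Finite fragment: atomic sentences a, b, c; the 'states' are the truth
# assignments, and the uniform probability function weights each one 1/8.
atoms = ("a", "b", "c")
states = list(product([True, False], repeat=len(atoms)))

def premiss(state):
    a, b, c = state          # an illustrative categorical premiss: a or b
    return a or b

# Per the paper's result: when the uniform function gives a categorical
# premiss positive probability, conditioning on it yields the maximal
# entropy function satisfying that premiss.
satisfying = [s for s in states if premiss(s)]
posterior = {s: 1 / len(satisfying) for s in satisfying}

def prob(event):
    return sum(p for s, p in posterior.items() if event(s))

print("P(a | a or b) =", round(prob(lambda s: s[0]), 4))   # 2/3
print("entropy =", round(-sum(p * log(p) for p in posterior.values()), 4))
```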

    Towards the entropy-limit conjecture

    The maximum entropy principle is widely used to determine non-committal probabilities on a finite domain, subject to a set of constraints, but its application to continuous domains is notoriously problematic. This paper concerns an intermediate case, where the domain is a first-order predicate language. Two strategies have been put forward for applying the maximum entropy principle on such a domain: (i) applying it to finite sublanguages and taking the pointwise limit of the resulting probabilities as the size n of the sublanguage increases; (ii) selecting a probability function on the language as a whole whose entropy on finite sublanguages of size n is not dominated by that of any other probability function for sufficiently large n. The entropy-limit conjecture says that, where these two approaches yield determinate probabilities, they yield the same probabilities. If the conjecture is true, it would provide a boost to the project of seeking a single canonical inductive logic - a project which faltered when Carnap's attempts in this direction succeeded only in determining a continuum of inductive methods. The truth of the conjecture would also boost the project of providing a canonical characterisation of normal or default models of first-order theories. Hitherto, the entropy-limit conjecture has been verified for languages which contain only unary predicate symbols and for the case in which the constraints can be captured by a categorical statement of Σ₁ quantifier complexity. This paper shows that the entropy-limit conjecture also holds for categorical statements of Π₁ complexity, for various non-categorical constraints, and in certain other general situations.
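
    A toy illustration of strategy (i), under heavy simplifying assumptions (one unary predicate, the existential premiss rendered as a finite disjunction on each sublanguage): maximum entropy on the sublanguage of size n reduces, per the previous paper's result, to conditioning the uniform function on the premiss, and the resulting probabilities converge pointwise:

```python
from itertools import product

def maxent_prob(n):
    """Maximum entropy probability of U(t_1) on the sublanguage with
    constants t_1..t_n, given the categorical premiss 'exists x U(x)'
    (rendered on the sublanguage as U(t_1) or ... or U(t_n)); obtained
    by conditioning the uniform function on the premiss."""
    states = list(product([True, False], repeat=n))
    satisfying = [s for s in states if any(s)]   # states where the premiss holds
    return sum(1 for s in satisfying if s[0]) / len(satisfying)

# Strategy (i): compute on growing sublanguages and take the pointwise
# limit: P(U(t_1)) = 2^(n-1) / (2^n - 1) -> 1/2.
for n in (1, 2, 5, 10, 15):
    print(n, round(maxent_prob(n), 5))
```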