
    Group disagreement: a belief aggregation perspective

    The debate on the epistemology of disagreement has so far focused almost exclusively on cases of disagreement between individual persons. Yet, many social epistemologists agree that at least certain kinds of groups are equally capable of having beliefs that are open to epistemic evaluation. If so, we should expect a comprehensive epistemology of disagreement to accommodate cases of disagreement between group agents, such as juries, governments, companies, and the like. However, this raises a number of fundamental questions concerning what it means for groups to be epistemic peers and to disagree with each other. In this paper, we explore what group peer disagreement amounts to given that we think of group belief in terms of List and Pettit's 'belief aggregation model'. We then discuss how the so-called 'equal weight view' of peer disagreement is best accommodated within this framework. The account that seems most promising to us says, roughly, that the parties to a group peer disagreement should adopt the belief that results from applying the most suitable belief aggregation function for the combined group on all members of the combined group. To motivate this view, we test it against various intuitive cases, derive some of its notable implications, and discuss how it relates to the equal weight view of individual peer disagreement.
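
    As a toy illustration of the aggregation-function idea (not the paper's own proposal), the sketch below applies proposition-wise majority voting to two groups and then to their pooled membership, which is one simple instance of applying a single aggregation function for the combined group to all of its members. The propositions, group sizes, and the choice of majority rule are hypothetical.

```python
# Minimal sketch of proposition-wise belief aggregation in the spirit of
# List and Pettit's model. The majority rule and the example groups are
# illustrative assumptions, not the paper's own account.

from typing import Dict, List

Profile = List[Dict[str, bool]]  # one belief assignment per group member

def majority_aggregate(profile: Profile) -> Dict[str, bool]:
    """Proposition-wise majority: the group believes p iff more than half
    of its members believe p."""
    propositions = profile[0].keys()
    n = len(profile)
    return {p: sum(member[p] for member in profile) > n / 2 for p in propositions}

# Two groups that disagree on q; the 'combined group' verdict pools everyone.
group_a = [{"p": True, "q": True}, {"p": True, "q": True}, {"p": False, "q": True}]
group_b = [{"p": True, "q": False}, {"p": False, "q": False}, {"p": True, "q": False}]

print(majority_aggregate(group_a))            # {'p': True, 'q': True}
print(majority_aggregate(group_b))            # {'p': True, 'q': False}
print(majority_aggregate(group_a + group_b))  # verdict of the combined group
```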

    Confirmation, Decision, and Evidential Probability

    Henry Kyburg's theory of Evidential Probability offers a neglected tool for approaching problems in confirmation theory and decision theory. I use Evidential Probability to examine some persistent problems within these areas of the philosophy of science. Formal tools in general and probability theory in particular have great promise for conceptual analysis in confirmation theory and decision theory, but they face many challenges. In each chapter, I apply Evidential Probability to a specific issue in confirmation theory or decision theory. In Chapter 1, I challenge the notion that Bayesian probability offers the best basis for a probabilistic theory of evidence. In Chapter 2, I criticise the conventional measures of quantities of evidence that use the degree of imprecision of imprecise probabilities. In Chapter 3, I develop an alternative to orthodox utility-maximizing decision theory using Kyburg's system. In Chapter 4, I confront the orthodox notion that Nelson Goodman's New Riddle of Induction makes purely formal theories of induction untenable. Finally, in Chapter 5, I defend probabilistic theories of inductive reasoning against John D. Norton's recent collection of criticisms. My aim is the development of fresh perspectives on classic problems and contemporary debates. I both defend and exemplify a formal approach to the philosophy of science. I argue that Evidential Probability has great potential for clarifying our concepts of evidence and rationality.
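
    For orientation only: Kyburg-style evidential probabilities are interval-valued and are typically derived from frequency data in a reference class. The sketch below is a loose illustration under assumed simplifications (a normal-approximation confidence interval at an arbitrary 0.95 level), not Kyburg's actual rules; it also shows the kind of imprecision-based "quantity of evidence" measure that Chapter 2 criticises.

```python
# Illustrative sketch only: an interval estimate for a frequency, standing in
# for a Kyburgian evidential probability. The interval construction and the
# confidence level are assumptions for illustration, not Kyburg's own rules.

import math

def evidential_interval(successes: int, trials: int, z: float = 1.96) -> tuple:
    """Return an interval [lower, upper] for an observed frequency,
    clipped to [0, 1]."""
    p_hat = successes / trials
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials)
    return (max(0.0, p_hat - margin), min(1.0, p_hat + margin))

small_sample = evidential_interval(6, 10)      # wide interval: little data
large_sample = evidential_interval(600, 1000)  # narrow interval: much more data

# One contested way to quantify 'amount of evidence' is the interval's width;
# measures of roughly this kind are what Chapter 2 argues against.
print(small_sample, large_sample)
```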

    Can free evidence be bad? Value of information for the imprecise probabilist

    This paper considers a puzzling conflict between two positions that are each compelling: it is irrational for an agent to pay to avoid 'free' evidence before making a decision, and rational agents may have imprecise beliefs and/or desires. Indeed, we show that Good's theorem concerning the invariable choice-worthiness of free evidence does not generalise to the imprecise realm, given the plausible existing decision theories for handling imprecision. A key ingredient in the analysis, and a potential source of controversy, is the general approach taken for resolving sequential decision problems: we make explicit what the key alternatives are and defend our own approach. Furthermore, we endorse a resolution of the aforementioned puzzle: we privilege decision theories that merely permit avoiding free evidence over decision theories for which avoiding free evidence is uniquely admissible. Finally, we situate this particular result about free evidence within the broader 'dynamic-coherence' literature.
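
    A small numerical sketch can make the failure of Good's theorem vivid. The example below is not taken from the paper: it assumes a Gamma-maximin decision rule, a credal set given by the unknown bias of an auxiliary coin, and free evidence that dilates the probability of the proposition being bet on, so that the "look, then decide" strategy comes out strictly worse by the agent's own lights.

```python
# Minimal numerical sketch (not the paper's own example) of how Good's theorem
# can fail under Gamma-maximin with imprecise priors, via dilation.
#
# Assumed setup: X is a fair-coin proposition; Z is a coin of unknown bias p;
# the free evidence is Y = X xor Z. Observing Y dilates P(X=1) from a precise
# 1/2 to values spanning [0, 1] across the credal set.

from itertools import product

CREDAL_SET = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate biases p for Z
ACTS = {"bet_on_X": lambda x: 1.0 if x == 1 else 0.0,
        "safe": lambda x: 0.45}

def posterior_x1(p: float, y: int) -> float:
    """P(X=1 | Y=y) when X ~ Bernoulli(1/2), Z ~ Bernoulli(p), Y = X xor Z."""
    return (1 - p) if y == 1 else p

def gamma_maximin(eu_by_prior):
    """Pick the act whose worst-case expected utility is highest."""
    return max(eu_by_prior, key=lambda act: min(eu_by_prior[act]))

# 1. Decide now, without looking at Y (P(X=1) = 1/2 under every prior).
eu_now = {act: [0.5 * f(1) + 0.5 * f(0) for _ in CREDAL_SET]
          for act, f in ACTS.items()}
best_now = gamma_maximin(eu_now)
value_now = min(eu_now[best_now])

# 2. Look at the free evidence Y, then decide (sophisticated choice).
policy = {}
for y in (0, 1):
    eu_post = {act: [posterior_x1(p, y) * f(1) + (1 - posterior_x1(p, y)) * f(0)
                     for p in CREDAL_SET]
               for act, f in ACTS.items()}
    policy[y] = gamma_maximin(eu_post)

def strategy_eu(p: float) -> float:
    """Expected utility, under bias p, of observing Y and following the policy."""
    total = 0.0
    for x, z in product((0, 1), (0, 1)):
        prob = 0.5 * (p if z == 1 else 1 - p)
        y = x ^ z
        total += prob * ACTS[policy[y]](x)
    return total

value_look = min(strategy_eu(p) for p in CREDAL_SET)

print(best_now, value_now)   # bet_on_X, 0.5: ignoring Y guarantees 0.5
print(policy, value_look)    # safe chosen at both nodes, worst case 0.45 < 0.5
```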

    Approximations from Anywhere and General Rough Sets

    Not all approximations arise from information systems. The problem of fitting approximations, subject to some rules (and related data), to information systems in a rough scheme of things is known as the 'inverse problem'. The inverse problem is more general than the duality (or abstract representation) problems and was introduced by the present author in her earlier papers. From the practical perspective, a few (as opposed to one) theoretical frameworks may be suitable for formulating the problem itself. 'Granular operator spaces' have recently been introduced and investigated by the present author in the context of antichain-based and dialectical semantics for general rough sets. The nature of the inverse problem is examined from number-theoretic and combinatorial perspectives in a higher-order variant of granular operator spaces, and some necessary conditions are proved. The results and the novel approach would be useful in a number of unsupervised and semi-supervised learning contexts and algorithms. Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS Proceedings, Springer.
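
    For readers new to the area, the classical partition-based approximations that information systems generate look as follows; the paper's granular operator spaces and the inverse problem are considerably more general, and the objects and attributes below are made up for illustration.

```python
# Minimal sketch of classical rough approximations from a toy information
# system. The paper's setting (granular operator spaces, higher-order variants)
# generalises this; the data here is purely illustrative.

def partition_from_attributes(objects, attributes):
    """Group objects that agree on all attribute values (indiscernibility)."""
    blocks = {}
    for obj, vals in objects.items():
        key = tuple(vals[a] for a in attributes)
        blocks.setdefault(key, set()).add(obj)
    return list(blocks.values())

def lower_upper(target, blocks):
    """Lower approximation: union of blocks wholly inside the target.
    Upper approximation: union of blocks that meet the target."""
    lower = {x for b in blocks if b <= target for x in b}
    upper = {x for b in blocks if b & target for x in b}
    return lower, upper

# Toy information system: objects described by two attributes.
objects = {"o1": {"colour": "red",  "size": "s"},
           "o2": {"colour": "red",  "size": "s"},
           "o3": {"colour": "red",  "size": "l"},
           "o4": {"colour": "blue", "size": "l"}}
blocks = partition_from_attributes(objects, ["colour", "size"])

target = {"o1", "o3"}
print(lower_upper(target, blocks))
# lower = {'o3'}, upper = {'o1', 'o2', 'o3'}. The inverse problem asks which
# such approximation pairs can be produced by *some* information system at all.
```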

    Primer for the Nonmathematically Inclined on Mathematical Evidence in Criminal Cases: People v. Collins and Beyond


    Belief and credence


    Scientific uncertainty and decision making

    It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defenses must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty. In particular I critique Dutch book arguments, representation theorems, and accuracy-based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi's E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas in science. I claim that this requires that we look for a better framework for modelling uncertainty.
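
    As a concrete illustration of one decision rule mentioned in the abstract, the sketch below implements Levi's E-admissibility for a toy flood-defence choice over a small credal set; the acts, utilities, and candidate probabilities are invented for illustration and are not drawn from the thesis.

```python
# Minimal sketch of Levi's E-admissibility over an imprecise credal set:
# an act is E-admissible if it maximizes expected utility under at least one
# probability in the set. All numbers below are illustrative assumptions.

ACTS = {"build_high_defence": {"severe_floods": 10, "mild_floods": 2},
        "build_low_defence":  {"severe_floods": 0,  "mild_floods": 8}}

# Credal set: several candidate probabilities for the 'severe_floods' state.
CREDAL_SET = [0.2, 0.4, 0.6]

def expected_utility(act: str, p_severe: float) -> float:
    u = ACTS[act]
    return p_severe * u["severe_floods"] + (1 - p_severe) * u["mild_floods"]

def e_admissible(acts, credal_set):
    """Collect every act that is optimal under some member of the credal set."""
    admissible = set()
    for p in credal_set:
        admissible.add(max(acts, key=lambda a: expected_utility(a, p)))
    return admissible

print(e_admissible(ACTS, CREDAL_SET))
# Both acts are E-admissible: the low defence is best when p(severe) = 0.2 and
# the high defence is best when p(severe) = 0.6, so the rule leaves the choice
# open, one sense in which decisions under severe uncertainty remain difficult.
```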

    On the Methodological and Normative Foundations of Probabilism

    This dissertation is an elaboration and defense of probabilism, the view that belief comes in various degrees of strength, and that the probability calculus provides coherence norms for these degrees of belief. Probabilism faces several well-known objections. For example, critics object that probabilism's numerical representation of degrees of belief is too precise, and that its coherence norms are too demanding for real human agents to follow. While probabilists have developed several plausible responses to these objections, the compatibility among these responses is unclear. On this basis, I argue that probabilists must articulate unified methodological and normative foundations for their view, and I sketch the foundations of a probabilist modeling framework, the Comparative Confidence Framework (CCF). CCF characterizes probabilism primarily as an account of ideal degree of belief coherence. CCF provides a set of fundamentally qualitative and comparative, rather than quantitative, evaluative ideals for degree of belief coherence. By providing qualitative, comparative, evaluative coherence norms for degrees of belief, CCF avoids the aforementioned objections: that probabilism's formal representation of degrees of belief is too precise, and that its norms are too demanding. CCF is a first step in the development of unified foundations for a wider subjectivist Bayesian theory of doxastic and pragmatic rationality.
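
    For contrast with CCF's qualitative, comparative norms, the sketch below spells out the quantitative coherence norm that orthodox probabilism imposes: degrees of belief over a small algebra of propositions are checked against normalization and finite additivity. The propositions and numbers are hypothetical, and the check is only the textbook notion of probabilistic coherence, not CCF itself.

```python
# Minimal sketch of the quantitative coherence norm associated with probabilism:
# degrees of belief over an algebra of propositions should behave like a
# probability function. The worlds and credences below are hypothetical.

from itertools import chain, combinations

WORLDS = {"w1", "w2", "w3"}

def propositions(worlds):
    """All subsets of the set of possible worlds (the full algebra)."""
    ws = list(worlds)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ws, r) for r in range(len(ws) + 1))]

def coherent(credence, worlds, tol=1e-9):
    """Check normalization and finite additivity for disjoint propositions."""
    props = propositions(worlds)
    if abs(credence[frozenset()] - 0.0) > tol:
        return False
    if abs(credence[frozenset(worlds)] - 1.0) > tol:
        return False
    for a in props:
        for b in props:
            if not (a & b):  # disjoint propositions must add up
                if abs(credence[a | b] - (credence[a] + credence[b])) > tol:
                    return False
    return True

# Credences generated from a weighting over worlds are coherent by construction.
weights = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
credence = {p: sum(weights[w] for w in p) for p in propositions(WORLDS)}
print(coherent(credence, WORLDS))          # True

credence[frozenset({"w1", "w2"})] = 0.9    # break additivity
print(coherent(credence, WORLDS))          # False
```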