
    Obligation, Permission, and Bayesian Orgulity

    This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument, which charges that Bayesianism is defective as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong, and on our reading Belot does not endorse them. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however---a point that we make by drawing an analogy with de Finetti's argument against mandating countable additivity. Having presented the best version of the Orgulity Argument, our second aim is to develop a reply to it. We extend Elga's idea of appealing to finitely additive probability to show that the challenge posed by the Orgulity Argument can be met.

    Another Approach to Consensus and Maximally Informed Opinions with Increasing Evidence

    Merging-of-opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of merely transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
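Jeffrey conditioning, the update rule studied here, generalizes Bayesian conditionalization to uncertain evidence: given a partition {E_i} and new credences q_i for its cells, the posterior of an outcome w in E_i is q_i · P(w)/P(E_i). A minimal sketch with illustrative numbers (the outcome space, partition, and credences below are hypothetical, not drawn from the paper):

```python
# Jeffrey conditioning on a finite space: given a partition {E_i} with new
# credences q_i, the updated probability of outcome w in E_i is
# q_i * P(w) / P(E_i).  Assumes every cell has positive prior probability.

def jeffrey_update(p, partition, q):
    """p: dict outcome -> prior prob; partition: list of disjoint sets of
    outcomes covering the space; q: new credences for the cells (sum to 1)."""
    posterior = {}
    for cell, q_i in zip(partition, q):
        p_cell = sum(p[w] for w in cell)     # prior probability of the cell
        for w in cell:
            posterior[w] = q_i * p[w] / p_cell
    return posterior

prior = {"a": 0.2, "b": 0.3, "c": 0.5}
partition = [{"a", "b"}, {"c"}]              # evidence partition
q = [0.8, 0.2]                               # new credences for the cells

post = jeffrey_update(prior, partition, q)
# Each cell now carries its new credence: post["a"] + post["b"] == 0.8.
```

Bayesian conditionalization on a cell E_i is the special case where q_i = 1 and every other cell gets credence 0.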

    Persistent Disagreement and Polarization in a Bayesian Setting

    For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
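A one-shot illustration of the basic phenomenon, using absolute difference in credence as the measure of disagreement (the numbers are illustrative, and the paper's persistent-polarization results are stronger than this single-update toy):

```python
# Two Bayesian agents share the SAME likelihoods and observe the SAME
# evidence E, differing only in their prior for hypothesis H.  After
# conditionalizing, their credences in H are FURTHER apart than before,
# measured by absolute difference.  All numbers are illustrative.

def posterior(prior_h, lik_e_given_h, lik_e_given_not_h):
    num = prior_h * lik_e_given_h
    return num / (num + (1 - prior_h) * lik_e_given_not_h)

p_a, p_b = 0.1, 0.3             # the agents' priors in H
lik_h, lik_not_h = 0.9, 0.1     # shared likelihoods for the evidence E

post_a = posterior(p_a, lik_h, lik_not_h)    # = 0.5
post_b = posterior(p_b, lik_h, lik_not_h)    # ~ 0.794

before = abs(p_a - p_b)                      # 0.2
after = abs(post_a - post_b)                 # ~ 0.294 > 0.2
```

Both agents move toward H, yet the gap between them grows, because conditionalization multiplies odds rather than shifting probabilities additively.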

    Distention for Sets of Probabilities

    A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with the asymptotic merging of posterior probabilities reported by Savage (1954) and by Blackwell and Dubins (1962)” (Herron et al., 1994). A partition dilates an event if, relative to every cell of the partition, uncertainty concerning that event increases. The measure of uncertainty relevant for dilation, however, is not the same measure that is relevant in the context of results concerning whether priors wash out or “opinions merge.” Here, we explicitly investigate the short run behavior of the metric relevant to merging of opinions. As with dilation, it is possible for uncertainty (as gauged by this metric) to increase relative to every cell of a partition. We call this phenomenon distention. It turns out that dilation and distention are orthogonal phenomena.
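Dilation itself can be made concrete with the familiar two-coin style example (in the spirit of Seidenfeld and Wasserman's discussion; this sketches the standard case, not the paper's distention results):

```python
# Standard dilation sketch: a fair coin H1, and an event H2 whose dependence
# on H1 is unknown.  The credal set consists of joint distributions with
# P(H2 | H1) = t and P(H2 | not-H1) = 1 - t, for t ranging over [0, 1].
# Unconditionally, every member assigns P(H2) = 1/2; but conditional on
# EITHER cell of {H1, not-H1}, the probability of H2 spans all of [0, 1].
# That is dilation: uncertainty about H2 grows in every cell.

ts = [i / 100 for i in range(101)]        # grid over the credal set

unconditional = [0.5 * t + 0.5 * (1 - t) for t in ts]   # P(H2), always 1/2
given_h1 = [t for t in ts]                               # P(H2 | H1)
given_not_h1 = [1 - t for t in ts]                       # P(H2 | not-H1)

interval_prior = (min(unconditional), max(unconditional))  # pinned at 1/2
interval_h1 = (min(given_h1), max(given_h1))               # the full unit interval
```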

    On the role of explanatory and systematic power in scientific reasoning

    The paper investigates measures of explanatory power and how to define the inference schema “Inference to the Best Explanation” (IBE). It argues that these measures can also be used to quantify the systematic power of a hypothesis, and it defines the corresponding inference schema “Inference to the Best Systematization” (IBS). It demonstrates that systematic power is a fruitful criterion for theory choice and that IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian reasoning. Finally, the paper puts the achieved results in perspective with van Fraassen’s famous criticism of IBE.

    Speed-optimal Induction and Dynamic Coherence

    A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run.
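The basic object in Reichenbach's framework is the straight rule, which predicts the limiting relative frequency by the relative frequency observed so far. A minimal sketch (the data stream is illustrative; the paper's speed-optimality and coherence results go well beyond this estimator):

```python
# Reichenbach's straight rule: after observing bits x_1..x_n, predict the
# limiting relative frequency of 1s by the observed relative frequency.
# If the stream has a limiting relative frequency at all, these predictions
# converge to it.  The stream below is an illustrative periodic sequence.

def straight_rule(stream):
    ones = 0
    predictions = []
    for n, x in enumerate(stream, start=1):
        ones += x
        predictions.append(ones / n)     # observed relative frequency so far
    return predictions

stream = [1, 0, 0, 1] * 250              # limiting relative frequency 0.5
preds = straight_rule(stream)
final = preds[-1]                        # 0.5 after 1000 observations
```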

    Putnam's Diagonal Argument and the Impossibility of a Universal Learning Machine

    The diagonalization argument of Putnam (1963) denies the possibility of a universal learning machine. Yet the proposal of Solomonoff (1964) and Levin (1970) promises precisely such a thing. In this paper I discuss how their proposed measure function manages to evade Putnam's diagonalization in one respect, only to fatally fall prey to it in another.
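The diagonal trick at the heart of Putnam's argument can be illustrated generically: against any total prediction method, there is a binary sequence on which it errs at every single step. A toy sketch (the majority-vote predictor is a hypothetical stand-in; Putnam's actual argument targets computable extrapolation methods):

```python
# Diagonalization against a total predictor: feed it its own history and
# always output the OPPOSITE of its prediction.  By construction, the
# predictor is wrong at every step on the resulting sequence.

def diagonal_sequence(predict, n):
    """predict: total function from a finite 0/1 history (tuple) to a bit."""
    history = []
    for _ in range(n):
        history.append(1 - predict(tuple(history)))   # do the opposite
    return history

def majority(hist):
    """A hypothetical stand-in predictor: majority vote over the history."""
    return 1 if sum(hist) * 2 >= len(hist) else 0

seq = diagonal_sequence(majority, 20)
errors = sum(1 for i in range(20) if majority(tuple(seq[:i])) != seq[i])
# errors == 20: the predictor errs at every step of its diagonal sequence.
```

The construction needs the predictor to be total, i.e. to return a prediction on every history; a method that is only semi-computable escapes this form of the argument, which is one respect in which the Solomonoff–Levin proposal evades it.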

    Learning and Pooling, Pooling and Learning

    We explore which types of probabilistic updating commute with convex IP pooling (Stewart and Ojea Quintana 2017). Positive results are stated for Bayesian conditionalization (and a mild generalization of it), imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of (precise) externally Bayesian pooling operators due to Wagner (Log J IGPL 18(2):336--345, 2009). These results strengthen the case that pooling should go by imprecise probabilities since no precise pooling method is as versatile.
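The commutation of conditionalization with convex pooling can be seen already at the level of a single mixture: conditionalizing λP + (1−λ)Q on E yields another convex combination of P(·|E) and Q(·|E), with re-weighted coefficient λ′ = λP(E) / (λP(E) + (1−λ)Q(E)). A numeric sketch (the two priors and the evidence are illustrative):

```python
# Conditionalizing a convex combination of two priors on evidence E gives a
# convex combination of the conditionalized priors (with a new weight), so
# updating and convex pooling can be interchanged.  Numbers are illustrative.

def condition(p, e):
    """Conditionalize the probability dict p on event e (a set of outcomes)."""
    p_e = sum(p[w] for w in e)
    return {w: p[w] / p_e for w in e}

def mix(p, q, lam):
    """Convex combination lam*p + (1-lam)*q (p and q share the same outcomes)."""
    return {w: lam * p[w] + (1 - lam) * q[w] for w in p}

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.1, "b": 0.6, "c": 0.3}
E = {"a", "b"}
lam = 0.4

# Pool first, then conditionalize:
pooled_then_updated = condition(mix(P, Q, lam), E)

# Conditionalize first, then pool with the re-weighted coefficient:
pe = sum(P[w] for w in E)                          # P(E) = 0.8
qe = sum(Q[w] for w in E)                          # Q(E) = 0.7
lam_new = lam * pe / (lam * pe + (1 - lam) * qe)
updated_then_pooled = mix(condition(P, E), condition(Q, E), lam_new)
# The two routes agree outcome by outcome.
```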

    A Dutch Book Theorem and Converse Dutch Book Theorem for Kolmogorov Conditionalization

    This paper discusses how to update one’s credences based on evidence that has initial probability 0. I advance a diachronic norm, Kolmogorov Conditionalization, that governs credal reallocation in many such learning scenarios. The norm is based upon Kolmogorov’s theory of conditional probability. I prove a Dutch book theorem and converse Dutch book theorem for Kolmogorov Conditionalization. The two theorems establish Kolmogorov Conditionalization as the unique credal reallocation rule that avoids a sure loss in the relevant learning scenarios.
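In the continuous special case, Kolmogorov's theory handles conditioning on a probability-zero event {X = x} via conditional densities: f(y | x) = f(x, y) / ∫ f(x, y′) dy′. A numerical sketch (the joint density is an illustrative choice, not the paper's construction, which works with conditional expectations given a σ-field):

```python
# Conditioning on the probability-zero event {X = x} via densities:
# f(y | x) = f(x, y) / ∫ f(x, y') dy'.  The joint density x + y on the unit
# square (which integrates to 1) is an illustrative choice.  We compute
# E[Y | X = 0.3] by midpoint-rule integration and compare with the exact
# value (x/2 + 1/3) / (x + 1/2).

def f(x, y):
    return x + y                  # joint density on [0, 1]^2

def conditional_expectation(x, n=100_000):
    """E[Y | X = x] from the conditional density, by midpoint-rule sums."""
    ys = [(i + 0.5) / n for i in range(n)]
    norm = sum(f(x, y) for y in ys) / n            # ∫ f(x, y) dy
    return sum(y * f(x, y) for y in ys) / n / norm

e = conditional_expectation(0.3)
exact = (0.15 + 1 / 3) / 0.8      # analytic value, ~ 0.6042
```

Note that P(X = 0.3) = 0 here, so ratio conditionalization is undefined, yet the Kolmogorov-style conditional density is perfectly well behaved.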
