Why Credences Cannot be Imprecise
Beliefs formed under uncertainty come in different grades, which are called credences or degrees of belief. The most common way of measuring the strength of credences is by ascribing probabilities to them. What kind of probabilities may be used remains an open question and divides researchers into two camps: sharpers claim that credences can be measured by standard single-valued, precise probabilities; non-sharpers, on the other hand, claim that credences are imprecise and can only be measured by imprecise probabilities. The latter view has recently gained in popularity. According to non-sharpers, credences must be imprecise when the evidence is essentially imprecise (ambiguous, vague, conflicting, or scarce).
This view is, however, misleading. Imprecise credences can lead to irrational behaviour and do not withstand closer examination. I provide a coherence-based principle which enables me to demonstrate that there is no need for imprecise credences. This principle is then applied to three special cases that are prima facie best explained by imprecise credences: the jellyfish guy case, the Ellsberg paradox, and the Sleeping Beauty problem.
The jellyfish guy case deals with a strange situation in which the evidence is highly ambiguous. The Ellsberg paradox demonstrates a problem that arises when comparing precise and imprecise credences. The Sleeping Beauty problem demonstrates that imprecise credences are not useless but misguided: they should be understood as sets of possible precise credences, of which only one can be selected at a given time.
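For concreteness, the Ellsberg paradox in its standard single-urn form (a textbook presentation, not reproduced from the paper) runs as follows: an urn contains 30 red balls and 60 balls that are black or yellow in an unknown proportion. Most people prefer a bet on red to a bet on black, yet also prefer a bet on black-or-yellow to a bet on red-or-yellow, and no single precise probability assignment can rationalize both preferences:

```latex
% Preferring "red" to "black" suggests
\[
  P(R) > P(B),
\]
% while preferring "black or yellow" to "red or yellow" suggests
\[
  P(B) + P(Y) > P(R) + P(Y) \;\Longrightarrow\; P(B) > P(R),
\]
% a contradiction. Non-sharpers take this as evidence that credences
% about the black/yellow composition are imprecise, e.g.
% P(B) \in [0, 2/3]; the paper argues that the precise view can respond.
```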
Bayesians Still Don't Learn from Conditionals
One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (Mind 129(514):461-508, 2020) propose a strategy to resolve this question. They claim that their strategy provides a uniquely rational response to any given learning scenario. We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.
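To give the flavour of the problem: the abstract does not spell out Eva et al.'s strategy, so the following is only the general schema of distance-minimization proposals in this literature, stated as an assumption. Learning the conditional "If A then B" is modelled as imposing a constraint on the posterior and then choosing, among all distributions satisfying it, the one closest to the prior:

```latex
% Schematically (an assumption about the family of proposals, not a
% statement of Eva et al.'s exact rule): learning "If A then B"
% imposes the constraint P_new(B | A) = q (often q = 1 or close to
% it), and the agent adopts the constrained distribution minimizing
% Kullback-Leibler divergence from the prior:
\[
  P_{\mathrm{new}} \;=\; \operatorname*{arg\,min}_{Q:\; Q(B \mid A) = q}
  \;\sum_{\omega} Q(\omega) \log \frac{Q(\omega)}{P_{\mathrm{old}}(\omega)}
\]
```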
Jeffrey conditionalization: proceed with caution
It has been argued that if the rigidity condition is satisfied, a rational agent operating with uncertain evidence should update her subjective probabilities by Jeffrey conditionalization (JC), or else a series of bets resulting in a sure loss could be made against her (the Dynamic Dutch Book Argument). We show, however, that even if the rigidity condition is satisfied, it is not always safe to update probability distributions by JC, because there exist sequences of non-misleading uncertain observations for which it can be foreseen that an agent who updates her subjective probabilities by JC will end up nearly certain that a false hypothesis is true. We analyze the features of JC that lead to this problem, specify the conditions in which it arises, and respond to potential objections.
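For reference, Jeffrey conditionalization and the rigidity condition it presupposes can be stated as follows (this is the standard formulation, not anything specific to the paper):

```latex
% Uncertain evidence redistributes probability over a partition
% {E_1, ..., E_n}; JC then sets, for any hypothesis H,
\[
  P_{\mathrm{new}}(H) \;=\; \sum_{i=1}^{n} P_{\mathrm{old}}(H \mid E_i)\, P_{\mathrm{new}}(E_i),
\]
% which presupposes rigidity: the learning experience leaves the
% conditional probabilities untouched,
\[
  P_{\mathrm{new}}(H \mid E_i) \;=\; P_{\mathrm{old}}(H \mid E_i)
  \quad \text{for all } i.
\]
```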
Inference to the Best Explanation in Uncertain Evidential Situations
It has recently been argued that a non-Bayesian probabilistic version of inference to the best explanation (IBE*) has a number of advantages over Bayesian conditionalization (Douven [2013]; Douven and Wenmackers [2017]). We investigate how IBE* could be generalized to uncertain evidential situations and formulate a novel updating rule, IBE**. We then inspect how it performs in comparison to its Bayesian counterpart, Jeffrey conditionalization (JC), in a number of simulations in which two agents, one updating by IBE** and the other by JC, try to detect the bias of a coin while they are only partially certain what side the coin landed on. We show that IBE** assigns high probability to the actual bias more often than JC does. We also show that this happens considerably faster, that IBE** passes higher thresholds for high probability, and that it in general leads to more accurate probability distributions than JC.
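A minimal sketch of the JC side of such a simulation, assuming eleven bias hypotheses, a uniform prior, and soft evidence of a fixed strength; the parameter values and names are illustrative, not taken from the paper:

```python
import random

# Hypotheses: the coin's bias p(heads) is one of 0.0, 0.1, ..., 1.0,
# with a uniform prior over the hypotheses.
biases = [i / 10 for i in range(11)]
probs = [1 / len(biases)] * len(biases)

def jeffrey_update(probs, p_new_heads):
    """Jeffrey conditionalization on the partition {heads, tails},
    where p_new_heads is the agent's post-observation probability
    that the coin landed heads (uncertain evidence)."""
    p_heads = sum(p * b for p, b in zip(probs, biases))  # P_old(heads)
    p_tails = 1 - p_heads
    posterior = []
    for p, b in zip(probs, biases):
        given_heads = p * b / p_heads if p_heads > 0 else 0.0
        given_tails = p * (1 - b) / p_tails if p_tails > 0 else 0.0
        # P_new(H_b) = P_old(H_b|heads) P_new(heads)
        #            + P_old(H_b|tails) P_new(tails)
        posterior.append(given_heads * p_new_heads
                         + given_tails * (1 - p_new_heads))
    return posterior

# Toy run: the true bias is 0.7, but each toss is only seen dimly,
# so the evidence that it landed heads is soft (0.9 or 0.1).
random.seed(0)
for _ in range(200):
    heads = random.random() < 0.7
    probs = jeffrey_update(probs, 0.9 if heads else 0.1)

# The posterior typically peaks near the true bias.
print(max(zip(probs, biases)))
```

An IBE**-style agent would modify this update, roughly, by granting an extra bonus to the hypothesis that best explains the evidence before renormalizing; the exact rule is defined in the paper.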
Against methodological gambling
Should a scientist rely on methodological triangulation? Heesen et al. (2019) recently provided a convincing affirmative answer. However, their approach requires belief gambles if the evidence is discordant. We instead propose epistemically modest triangulation (EMT), according to which one should withhold judgement in such cases. We show that for a scientist in a methodologically diffident situation the expected utility of EMT is greater than that of Heesen et al.'s (2019) triangulation or that of using a single method. We also show that EMT is more appropriate for increasing epistemic trust in science. In short: triangulate, but do not gamble with evidence.
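A toy epistemic-utility comparison conveys the structure of such an argument (the payoffs and the model below are illustrative assumptions, not the paper's):

```latex
% Suppose believing a truth pays 1, believing a falsehood pays -1,
% and withholding judgement pays 0. If discordant evidence leaves the
% scientist with probability p that the gambled-on verdict is correct,
\[
  \mathrm{EU}(\text{gamble}) = p \cdot 1 + (1 - p) \cdot (-1) = 2p - 1,
  \qquad
  \mathrm{EU}(\text{withhold}) = 0,
\]
% so withholding is better whenever p < 1/2; under genuine discordance
% p hovers near 1/2, which is the intuition behind EMT.
```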
Lying, more or less: A computer simulation study of graded lies and trust dynamics
Partial lying denotes cases in which we partially believe something to be false but nevertheless assert it with the intent to deceive the addressee. We investigate how the severity of partial lying may be determined and how partial lies can be classified. We also study how much epistemic damage an agent suffers depending on the level of trust that she invests in the liar and the severity of the lies she is told. Our analysis is based on the results of exploratory computer simulations of an arguably rational Bayesian agent who is trying to determine how biased a coin is while observing the coin tosses and listening to a (partial) liar's misleading predictions about the outcomes. Our results provide an interesting testable hypothesis at the intersection of epistemology and ethics, namely that in the longer term partial lies lead to more epistemic damage than outright lies.
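A minimal sketch of such a setup, assuming the agent treats the liar's reports as coming from a source that is right with probability equal to its current trust; for brevity the agent here updates only on the reports and uses the observed tosses to adjust trust, whereas the paper's agent is richer. The parameters and the trust dynamics are illustrative, not the paper's:

```python
import random

# Bias hypotheses for the coin, uniform prior (illustrative model,
# not necessarily the paper's exact setup).
biases = [i / 10 for i in range(11)]
probs = [1 / len(biases)] * len(biases)

def update_on_report(probs, report_heads, trust):
    """Conditionalize on the liar's report, treating it as a source
    that is accurate with probability `trust`."""
    likes = [trust * b + (1 - trust) * (1 - b) if report_heads
             else trust * (1 - b) + (1 - trust) * b
             for b in biases]
    z = sum(p * l for p, l in zip(probs, likes))
    return [p * l / z for p, l in zip(probs, likes)]

random.seed(1)
true_bias = 0.8
lie_severity = 0.5   # probability the liar misreports an outcome
trust = 0.9          # agent's initial trust in the liar

for _ in range(200):
    heads = random.random() < true_bias
    report = heads if random.random() > lie_severity else not heads
    probs = update_on_report(probs, report, trust)
    # Trust dynamics: after seeing the actual toss, trust rises when
    # the report was accurate and falls when it was not.
    trust = (min(1.0, trust + 0.01) if report == heads
             else max(0.05, trust - 0.03))

print(max(zip(probs, biases)), "final trust:", round(trust, 2))
```

Even in this toy model the abstract's hypothesis begins to emerge: once trust collapses, an outright liar's reports become informative in reverse, while a half-lying source stays close to pure noise.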
Coherence of Information: What It Is and Why It Matters
Coherence considerations play an important role in science and in everyday reasoning. However, it is unclear what exactly is meant by the coherence of information and why we prefer more coherent information over less coherent information. To answer these questions, we first explore how to explicate the dazzling notion of "coherence" and how to measure the coherence of an information set. To do so, we critique prima facie plausible proposals that incorporate normative principles such as "Agreement" or "Dependence" and then argue that the coherence of an information set is best understood as an indicator of the truth of the set under certain conditions. Using computer simulations, we then show that a new probabilistic measure of coherence that combines aspects of the two principles above, but without strictly satisfying either principle, performs particularly well in this regard.
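Two classic proposals give a flavour of what such measures look like (these are standard measures from the coherence literature, not the paper's new measure):

```latex
% Shogenji's measure, in the spirit of "Dependence": how much more
% likely the propositions are together than they would be if
% probabilistically independent.
\[
  C_S(A_1, \dots, A_n) \;=\;
  \frac{P(A_1 \land \dots \land A_n)}{\prod_{i=1}^{n} P(A_i)}
\]
% The Olsson/Glass overlap measure, in the spirit of "Agreement":
% the relative overlap of the propositions.
\[
  C_O(A_1, \dots, A_n) \;=\;
  \frac{P(A_1 \land \dots \land A_n)}{P(A_1 \lor \dots \lor A_n)}
\]
```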
Confirmation, Coherence and the Strength of Arguments
Alongside science and law, argumentation is also of central importance in everyday life. But what characterizes a good argument? This question has occupied philosophers and psychologists for centuries. The theory of Bayesian argumentation is particularly well suited to clarifying it, because it allows us to take into account in a natural way the role of uncertainty, which is central to much argumentation. Moreover, it offers the possibility of measuring the strength of an argument in probabilistic terms. One way to do this, implicit in much work, is to identify the strength of an argument with the degree to which the premises of the argument confirm the conclusion. We criticize this prima facie plausible proposal and suggest instead that the strength of an argument has to do with how much the premises and the conclusion of the argument cohere with each other. This leads to a new probabilistic measure whose properties we examine in more detail.
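For orientation, the confirmation-based proposal criticized here identifies argument strength with a Bayesian confirmation measure; a standard example is the difference measure. The paper's own coherence-based measure is not given in the abstract, so the second formula below merely illustrates the coherence idea via the overlap measure:

```latex
% Confirmation-based proposal: the strength of an argument from
% premises Prem to conclusion Concl, e.g. via the difference measure:
\[
  d(\mathrm{Concl}, \mathrm{Prem}) \;=\;
  P(\mathrm{Concl} \mid \mathrm{Prem}) - P(\mathrm{Concl}).
\]
% Coherence-based alternative (illustrative only, not the paper's new
% measure): how well premises and conclusion fit together, e.g. as
% relative overlap:
\[
  \mathrm{str}(\mathrm{Prem}, \mathrm{Concl}) \;=\;
  \frac{P(\mathrm{Prem} \land \mathrm{Concl})}{P(\mathrm{Prem} \lor \mathrm{Concl})}.
\]
```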