When Rational Reasoners Reason Differently
Different people reason differently, which means that sometimes they reach different conclusions from the same evidence. We maintain that this is not only natural, but rational. In this essay we explore the epistemology of that state of affairs. First we will canvass arguments for and against the claim that rational methods of reasoning must always reach the same conclusions from the same evidence. Then we will consider whether the acknowledgment that people have divergent rational reasoning methods should undermine one's confidence in one's own reasoning. Finally we will explore how agents who employ distinct yet equally rational methods of reasoning should respond to interactions with the products of each other's reasoning. We find that the epistemology of multiple reasoning methods has been misunderstood by a number of authors writing on epistemic permissiveness and peer disagreement.
In Defense of Right Reason
Starting from the premise that akrasia is irrational, I argue that it is always a rational mistake to have false beliefs about the requirements of rationality. Using that conclusion, I defend logical omniscience requirements, the claim that one can never have all-things-considered misleading evidence about what's rational, and the Right Reasons position concerning peer disagreement.
Deference Done Right
There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential (or logical) probabilities. But exactly what constraint does a deference requirement place on an agent's credences? In this paper we consider three answers, inspired by three principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Of the three deference principles we consider, we argue that two of the options face insuperable difficulties. The third, on the other hand, fares well, at least when it is applied in a particular way.
Plausible Permissivism
Richard Feldman's Uniqueness Thesis holds that "a body of evidence justifies at most one proposition out of a competing set of propositions". The opposing position, permissivism, allows distinct rational agents to adopt differing attitudes towards a proposition given the same body of evidence. We assess various motivations that have been offered for Uniqueness, including: concerns about achieving consensus, a strong form of evidentialism, worries about epistemically arbitrary influences on belief, a focus on truth-conduciveness, and consequences for peer disagreement. We argue that each of these motivations either misunderstands the commitments of permissivism or is question-begging. Better understanding permissivism makes it a much more plausible position.
Normative Modeling
By now we are familiar with scientific models of descriptive domains. But might we also model clusters of normative truths? In this piece I first identify elements central to all modeling efforts: modeling frameworks, interpretations, and domains of applicability. Then I consider some advantages and disadvantages of normative modeling.
Being More Realistic About Reasons: On Rationality and Reasons Perspectivism
This paper looks at whether it is possible to unify the requirements of rationality with the demands of normative reasons. It might seem impossible to do because one depends upon the agent's perspective and the other upon features of the situation. Enter Reasons Perspectivism. Reasons perspectivists think they can show that rationality does consist in responding correctly to reasons by placing epistemic constraints on these reasons. They think that if normative reasons are subject to the right epistemic constraints, rational requirements will correspond to the demands generated by normative reasons. While this proposal is prima facie plausible, it cannot ultimately unify reasons and rationality. There is no epistemic constraint that can do what reasons perspectivists would need it to do. Some constraints are too strict. The rest are too slack. This points to a general problem with the reasons-first program. Once we recognize that the agent's epistemic position helps determine what she should do, we have to reject the idea that the features of her situation can help determine what she should do. Either rationality crowds out reasons and their demands or the reasons will make unreasonable demands.
Evidence: A Guide for the Uncertain
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you're rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. Surprisingly, it is very difficult to vindicate both (1) and (2). But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
Unlearning What You Have Learned
Bayesian modeling techniques have proven remarkably successful at representing rational constraints on agents' degrees of belief. Yet Frank Arntzenius's "Shangri-La" example shows that these techniques fail for stories involving forgetting. This paper presents a formalized, expanded Bayesian modeling framework that generates intuitive verdicts about agents' degrees of belief after losing information. The framework's key result, called Generalized Conditionalization, yields applications like a version of Bas van Fraassen's Reflection Principle for forgetting. These applications lead to questions about why agents should coordinate their doxastic states over time, and about the commitments an agent can make by assigning degrees of belief.