
    Disagreement and easy bootstrapping

    ABSTRACT: Should conciliating with disagreeing peers be considered sufficient for reaching rational beliefs? Thomas Kelly argues that, when taken this way, Conciliationism lets those who enter a disagreement with an irrational belief reach a rational belief all too easily. Three kinds of responses defending Conciliationism are found in the literature. One response has it that conciliation is required only of agents who hold a rational belief as they enter the disagreement. This response yields a requirement that no one should follow: if the need to conciliate applies only to already rational agents, then an agent must conciliate only when her peer is the irrational one. A second response views conciliation as merely necessary for having a rational belief. This alone does little to address the central question of what it is rational to believe when facing a disagreeing peer, and attempts to develop the response either reduce it to the first response, deem necessary an unnecessary doxastic revision, or imply that rational dilemmas obtain in cases where intuitively there are none. A third response tells us to weigh what our pre-disagreement evidence supports against the evidence from the disagreement itself. This invites epistemic akrasia.

    Is higher-order evidence evidence?

    ABSTRACT: Suppose we learn that we have a poor track record of forming beliefs rationally, or that a brilliant colleague thinks we believe P irrationally. Does such input require us to revise the beliefs whose rationality is in question? When we gain information suggesting that our beliefs are irrational, we are in one of two general cases. In the first case we made no error and our beliefs are rational; the input to the contrary is then misleading. In the second case we do believe irrationally, and our original evidence already requires us to fix the mistake; the input to that effect is then normatively superfluous. Thus we know that information suggesting our beliefs are irrational is either misleading or superfluous. This, I submit, renders such input incapable of justifying belief revision, despite our not knowing which of the two kinds it is.