
    Ideal rationality and logical omniscience

    Does rationality require logical omniscience? Our best formal theories of rationality imply that it does, but our ordinary evaluations of rationality seem to suggest otherwise. This paper aims to resolve the tension by arguing that our ordinary evaluations of rationality are not only consistent with the thesis that rationality requires logical omniscience, but also provide a compelling rationale for accepting this thesis in the first place. This paper also defends an account of apriori justification for logical beliefs that is designed to explain the rational requirement of logical omniscience. On this account, apriori justification for beliefs about logic has its source in logical facts, rather than psychological facts about experience, reasoning, or understanding. This account has important consequences for the epistemic role of experience in the logical domain. In a slogan, the epistemic role of experience in the apriori domain is not a justifying role, but rather an enabling and disabling role.

    Bayesianism for Non-ideal Agents

    Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
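    The coherence requirement this abstract describes can be made concrete in a small sketch (the two-atom world set, the uniform weights, and the helper name are illustrative assumptions, not taken from the paper): any probabilistically coherent credence function is induced by a distribution over possible worlds, which forces certainty in every tautology.

```python
from itertools import product

# Possible worlds over two atomic propositions, p and q:
# the four truth-value assignments (p, q).
worlds = list(product([True, False], repeat=2))

# Illustrative prior: any non-negative weights summing to 1 would do.
prior = {w: 0.25 for w in worlds}

def credence(proposition):
    """Credence in a proposition = total weight of the worlds where it holds."""
    return sum(prior[w] for w in worlds if proposition(*w))

# A tautology such as (p or not p) is true at every world, so any
# coherent agent assigns it credence 1 -- logical omniscience.
assert credence(lambda p, q: p or not p) == 1.0
# A contingent proposition can get any intermediate credence.
assert credence(lambda p, q: p) == 0.5
```

    This is what makes the idealization hard to relax: certainty in logical truths falls out of the probability axioms themselves, not out of any extra assumption that could simply be dropped.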

    Higher-Order Evidence and the Normativity of Logic

    Many theories of rational belief give a special place to logic. They say that an ideally rational agent would never be uncertain about logical facts. In short: they say that ideal rationality requires "logical omniscience." Here I argue against the view that ideal rationality requires logical omniscience on the grounds that the requirement of logical omniscience can come into conflict with the requirement to proportion one’s beliefs to the evidence. I proceed in two steps. First, I rehearse an influential line of argument from the "higher-order evidence" debate, which purports to show that it would be dogmatic, even for a cognitively infallible agent, to refuse to revise her beliefs about logical matters in response to evidence indicating that those beliefs are irrational. Second, I defend this "anti-dogmatism" argument against two responses put forth by Declan Smithies and David Christensen. Against Smithies’ response, I argue that it leads to irrational self-ascriptions of epistemic luck, and that it obscures the distinction between propositional and doxastic justification. Against Christensen’s response, I argue that it clashes with one of two attractive deontic principles, and that it is extensionally inadequate. Taken together, these criticisms will suggest that the connection between logic and rationality cannot be what it is standardly taken to be—ideal rationality does not require logical omniscience.

    Logical ignorance and logical learning

    According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967, pp. 311–325), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.

    Logical Omnipotence and Two Notions of Implicit Belief

    The most widespread models of rational reasoners (the model based on modal epistemic logic and the model based on probability theory) exhibit the problem of logical omniscience. The most common strategy for avoiding this problem is to interpret the models as describing the explicit beliefs of an ideal reasoner, but only the implicit beliefs of a real reasoner. I argue that this strategy faces serious normative issues. In this paper, I present the more fundamental problem of logical omnipotence, which highlights the normative content of the problem of logical omniscience. I introduce two developments of the notion of implicit belief (accessible and stable belief) and use them in two versions of the most common strategy applied to the problem of logical omnipotence.

    Homo Sapiens Sapiens Meets Homo Strategicus at the Laboratory

    Homo Strategicus populates the vast plains of Game Theory. He knows all logical implications of his knowledge (logical omniscience) and chooses optimal strategies given his knowledge and beliefs (rationality). This paper investigates the extent to which the logical capabilities of Homo Sapiens Sapiens resemble those possessed by Homo Strategicus. Controlling for other-regarding preferences and beliefs about the rationality of others, we show, in the laboratory, that the ability of Homo Sapiens Sapiens to perform complex chains of iterative reasoning is much better than previously thought. Subjects were able to perform about two to three iterations of reasoning on average.
    Keywords: iterative reasoning; depth of reasoning; logical omniscience; rationality; experiments; other-regarding preferences

    A Dynamic Solution to the Problem of Logical Omniscience

    The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
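    The closure property this abstract attributes to the traditional model can be sketched in a few lines (the particular worlds, accessibility set, and proposition names are illustrative assumptions, not drawn from the paper): propositions are sets of worlds, and an agent believes whatever holds at every world compatible with her beliefs.

```python
# Sketch of the traditional possible-worlds model of belief.
# A proposition is modelled as the set of worlds where it is true;
# the agent's belief state is a set of doxastically accessible worlds.
worlds = {"w1", "w2", "w3", "w4"}
belief_worlds = {"w1", "w2"}   # worlds compatible with what the agent believes

def believes(proposition):
    """The agent believes P iff P holds at every accessible world."""
    return belief_worlds <= proposition

p = {"w1", "w2", "w3"}         # believed: true at both accessible worlds
p_or_q = p | {"w4"}            # weaker than p, hence a consequence of it
tautology = worlds             # true at every world

assert believes(p)
assert believes(p_or_q)        # beliefs are closed under logical consequence
assert believes(tautology)     # every logical truth is automatically believed
```

    Since any proposition entailed by a believed one holds at at least the same worlds, closure under consequence is built into the model, which is exactly why impossible worlds (where such set-theoretic entailments can fail) are invoked to escape it.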

    Evidential Probabilities and Credences

    Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.

    Belief and Self‐Knowledge: Lessons From Moore's Paradox

    The aim of this paper is to argue that what I call the simple theory of introspection can be extended to account for our introspective knowledge of what we believe as well as what we consciously experience. In section one, I present the simple theory of introspection and motivate the extension from experience to belief. In section two, I argue that extending the simple theory provides a solution to Moore’s paradox by explaining why believing Moorean conjunctions always involves some degree of irrationality. In section three, I argue that it also solves the puzzle of transparency by explaining why it’s rational to answer the question whether one believes that p by answering the question whether p. Finally, in section four, I defend the simple theory against objections by arguing that self-knowledge constitutes an ideal of rationality.