
    Dynamic Epistemic Logic and Logical Omniscience

    Epistemic logics based on the possible worlds semantics suffer from the problem of logical omniscience, whereby agents are described as knowing all logical consequences of what they know, including all tautologies. This problem is doubly challenging: on the one hand, agents should be treated as logically non-omniscient, and on the other hand, as moderately logically competent. Many responses to logical omniscience fail to meet this double challenge because the concepts of knowledge and reasoning are not properly separated. In this paper, I present a dynamic logic of knowledge that models an agent’s epistemic state as it evolves over the course of reasoning. I show that the logic does not sacrifice logical competence on the altar of logical non-omniscience.

    Reasoning about Rational, but not Logically Omniscient Agents

    In this paper, we propose a new solution to the so-called Logical Omniscience Problem of epistemic logic. Almost all attempts in the literature to solve this problem consist in weakening the standard epistemic systems: weaker systems are considered where the agents do not possess the full reasoning capacities of ideal reasoners. We shall argue that this solution is not satisfactory: in this way omniscience can be avoided, but many intuitions about the concepts of knowledge and belief get lost. We shall show that axioms for epistemic logics must have the following form: if the agent knows all premises of a valid inference rule, and if she thinks hard enough, then she will know the conclusion. To formalize such an idea, we propose to 'dynamize' epistemic logic, that is, to introduce a dynamic component into the language. We develop a logic based on this idea and show that it is suitable for formalizing the notion of actual, or explicit, knowledge.
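    A minimal LaTeX sketch of the axiom shape just described, in assumed notation (K for explicit knowledge, the dynamic operator ⟨F⟩ for "after some further step of reasoning"; the paper's own symbols may differ):

        \[
          \bigl( K\varphi_1 \wedge \dots \wedge K\varphi_n \bigr) \rightarrow \langle F \rangle K\psi,
          \qquad \text{whenever } \varphi_1, \dots, \varphi_n \,/\, \psi \text{ is a valid inference rule.}
        \]

    Read dynamically, knowing the premises does not by itself yield knowledge of the conclusion; it only guarantees that a reasoning step is available after which the conclusion is explicitly known.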

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory (the field that studies the resources, such as time, space, and randomness, needed to solve computational problems) leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.

    The Dynamic Epistemic Logic for Actual Knowledge

    The dynamic epistemic logic for actual knowledge models the phenomenon of actual knowledge change when new information is received. In contrast to the systems of dynamic epistemic logic which have been discussed in the past literature, our system is not burdened with the problem of logical omniscience, that is, an idealized assumption that the agent explicitly knows all classical tautologies and all logical consequences of his or her knowledge. We provide a sound and complete axiomatization for this logic.

    Verifying existence of resource-bounded coalition uniform strategies

    We consider the problem of whether a coalition of agents has a knowledge-based strategy to ensure some outcome under a resource bound. We extend previous work on verification of multi-agent systems where actions of agents produce and consume resources, by adding epistemic pre- and postconditions to actions. This allows us to model scenarios where agents perform both actions which change the world, and actions which change their knowledge about the world, such as observation and communication. To avoid logical omniscience and obtain a compact model of the system, our model of agents’ knowledge is syntactic. We define a class of coalition-uniform strategies with respect to any (decidable) notion of coalition knowledge. We show that the model-checking problem for the resulting logic is decidable for any notion of coalition-uniform strategies in these classes.
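    The syntactic model of knowledge mentioned above can be pictured very directly: an agent knows exactly the formulas stored in a finite set, and actions with resource costs and epistemic pre- and postconditions add only the formulas they explicitly list. The following Python sketch is an illustration of that idea under assumed, hypothetical names, not the paper's formal model:

        # Minimal sketch of syntactic knowledge with resource-bounded actions.
        # Knowledge is a finite set of formula strings, so the agent never
        # "knows" a consequence that has not been explicitly added.
        from dataclasses import dataclass, field

        @dataclass
        class Agent:
            knowledge: set[str] = field(default_factory=set)          # explicit, syntactic knowledge
            resources: dict[str, int] = field(default_factory=dict)   # e.g. {"energy": 5}

        @dataclass
        class Action:
            name: str
            cost: dict[str, int]   # resources consumed by performing the action
            pre: set[str]          # epistemic precondition: formulas the agent must already know
            post: set[str]         # epistemic postcondition: formulas the agent learns

        def execute(agent: Agent, act: Action) -> bool:
            """Perform act if the agent knows its preconditions and can pay its cost."""
            affordable = all(agent.resources.get(r, 0) >= c for r, c in act.cost.items())
            if not affordable or not act.pre <= agent.knowledge:
                return False
            for r, c in act.cost.items():
                agent.resources[r] = agent.resources.get(r, 0) - c
            agent.knowledge |= act.post   # only the listed formulas are added; nothing is inferred for free
            return True

        # Usage: observing p costs one unit of energy and adds exactly the formula "p";
        # the consequence "q" of "p" and "p -> q" is NOT automatically known.
        a = Agent(knowledge={"p -> q"}, resources={"energy": 2})
        execute(a, Action("observe_p", cost={"energy": 1}, pre=set(), post={"p"}))
        assert "p" in a.knowledge and "q" not in a.knowledge

    Because knowledge is represented as a plain set of formulas rather than as a set of possible worlds, logical omniscience never arises, and the finite representation is what keeps the model compact for model checking.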

    Logical Omnipotence and Two Notions of Implicit Belief

    The most widespread models of rational reasoners (the model based on modal epistemic logic and the model based on probability theory) exhibit the problem of logical omniscience. The most common strategy for avoiding this problem is to interpret the models as describing the explicit beliefs of an ideal reasoner, but only the implicit beliefs of a real reasoner. I argue that this strategy faces serious normative issues. In this paper, I present the more fundamental problem of logical omnipotence, which highlights the normative content of the problem of logical omniscience. I introduce two developments of the notion of implicit belief (accessible and stable belief) and use them in two versions of the most common strategy applied to the problem of logical omnipotence.

    A Dynamic Solution to the Problem of Logical Omniscience

    The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.

    Bayesianism for Non-ideal Agents

    Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
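    One schematic way to picture this strategy, in purely illustrative notation (the paper's own formal implementation may differ): coherence requirements on an agent's credence function Cr are indexed to what the agent can actually infer with its limited resources, for example

        \[
          \text{if the agent can infer, within its resource bound, that } \varphi \text{ entails } \psi,
          \text{ then rationality requires } Cr(\varphi) \le Cr(\psi).
        \]

    Entailments the agent cannot recognize impose no requirement, so logical omniscience is avoided, while every entailment the agent can recognize still constrains its credences by the corresponding law of probability.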