
    Bisimulation and expressivity for conditional belief, degrees of belief, and safe belief

    Plausibility models are Kripke models that agents use to reason about knowledge and belief, both their own and each other's. Such models are used to interpret the notions of conditional belief, degrees of belief, and safe belief. The logic of conditional belief contains that modality together with the knowledge modality, and similarly for the logic of degrees of belief and the logic of safe belief. With respect to these logics, plausibility models may contain too much information, so a proper notion of bisimulation is required that characterises them. We define that notion of bisimulation and prove the required characterisations: on the class of models that are image-finite and preimage-finite with respect to the plausibility relation, two pointed Kripke models are modally equivalent in any of the three logics if and only if they are bisimilar. As a result, the information content of such a model can equally well be expressed in the logic of conditional belief, the logic of degrees of belief, or the logic of safe belief; we found this a surprising result. Still, this does not mean that the logics are equally expressive: the logic of conditional belief and the logic of degrees of belief are incomparable, the logics of degrees of belief and safe belief are incomparable, while the logic of safe belief is more expressive than the logic of conditional belief. In view of the result on bisimulation characterisation, this is an equally surprising result. We hope our insights will contribute to the growing community of formal epistemology and to the study of the relation between qualitative and quantitative modelling.
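
    For orientation, the familiar back-and-forth template that such a bisimulation refines is shown below. This is only a generic sketch over a single relation R (standing in for the plausibility relation); the paper's actual notion for conditional belief, degrees of belief, and safe belief involves further clauses, so this should not be read as the authors' definition.

```latex
% Generic bisimulation template: Z relates worlds of two models
% M = (W, R, V) and M' = (W', R', V').  For every (w, w') in Z:
\begin{align*}
\textbf{(atoms)} \quad & w \in V(p) \iff w' \in V'(p) \ \text{for every atom } p,\\
\textbf{(forth)} \quad & \text{if } w R v, \text{ then there is } v' \text{ with } w' R' v' \text{ and } (v, v') \in Z,\\
\textbf{(back)}  \quad & \text{if } w' R' v', \text{ then there is } v \text{ with } w R v \text{ and } (v, v') \in Z.
\end{align*}
```

    The image-finiteness and preimage-finiteness conditions play the same role as in the classical Hennessy-Milner theorem: they are what allow modal equivalence to be upgraded to bisimilarity.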

    Extending Dynamic Doxastic Logic: Accommodating Iterated Beliefs And Ramsey Conditionals Within DDL

    In this paper we distinguish between various kinds of doxastic theories. One distinction is between informal and formal doxastic theories: AGM-type theories of belief change are of the former kind, while Hintikka’s logic of knowledge and belief is of the latter. We then distinguish between static theories, which study the unchanging beliefs of a given agent, and dynamic theories, which investigate not only the constraints that can reasonably be imposed on the doxastic states of a rational agent but also rationality constraints on the changes of doxastic state that may occur in such agents. A further distinction is between non-introspective and introspective theories. Non-introspective theories investigate agents that have opinions about the external world but no higher-order opinions about their own doxastic states. Standard AGM-type theories, as well as the currently existing versions of Segerberg’s dynamic doxastic logic (DDL), are non-introspective. Hintikka-style doxastic logic is of course introspective, but it is a static theory. Thus the challenge remains to devise doxastic theories that are both dynamic and introspective. We outline the semantics for a truly introspective dynamic doxastic logic, i.e., a dynamic doxastic logic that allows us to describe agents who have both the ability to form higher-order beliefs and the ability to reflect upon and change their minds about their own (higher-order) beliefs. This extension of DDL demands that we give up the Preservation condition on revision, and we make some suggestions as to how such a non-preservative revision operation can be constructed. We also consider extending DDL with conditionals satisfying the Ramsey test and show that Gärdenfors’ well-known impossibility result applies to such a framework; in this case, too, Preservation has to be given up.
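
    For readers unfamiliar with the conflict mentioned at the end, the two conditions can be stated in standard AGM notation, where K is a belief set and * is a revision operation; this is the usual textbook formulation, not a rendering of the paper's DDL semantics.

```latex
% Ramsey test (RT): the conditional A > B is accepted in K exactly
% when B is believed after revising K by A.
% Preservation (P): revising by something consistent with K keeps all of K.
\begin{align*}
(\mathrm{RT})\quad & A > B \in K \iff B \in K \ast A\\
(\mathrm{P})\quad  & \text{if } \neg A \notin K, \text{ then } K \subseteq K \ast A
\end{align*}
```

    Gärdenfors’ impossibility result shows that, under mild non-triviality assumptions, (RT) and (P) cannot hold together, which is why Preservation is the condition given up in the framework described above.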

    Reasoning about Knowledge in Linear Logic: Modalities and Complexity

    In a recent paper, Jean-Yves Girard commented that “it has been a long time since philosophy has stopped interacting with logic” [17]. Actually, it has no

    Toward a Lockean Unification of Formal and Traditional Epistemology

    Can there be knowledge and rational belief in the absence of a rational degree of confidence? Yes, and cases of "mistuned knowledge" demonstrate this. In this paper we leverage this normative possibility in support of advancing our understanding of the metaphysical relation between belief and credence. It is generally assumed that a Lockean metaphysics of belief that reduces outright belief to degrees of confidence would immediately effect a unification of coarse-grained epistemology of belief with fine-grained epistemology of confidence. Scott Sturgeon has suggested that the unification is effected by understanding the relation between outright belief and confidence as an instance of the determinable-determinate relation. But determination of belief by confidence would not by itself yield the result that norms for confidence carry over to norms for outright belief unless belief and high confidence are token identical. We argue that this token-identity thesis is incompatible with the neglected phenomenon of “mistuned knowledge”: knowledge and rational belief in the absence of rational confidence. We contend that there are genuine cases of mistuned knowledge and that, therefore, epistemological unification must forego token identity of belief and high confidence. We show how partial epistemological unification can be secured given determination of outright belief by degrees of confidence even without token-identity. Finally, we suggest a direction for the pursuit of thoroughgoing epistemological unification.
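
    The Lockean reduction under discussion is, in its simplest schematic form, a threshold view; the statement below is only that schematic version, not the paper's own preferred formulation.

```latex
% Lockean thesis (schematic): outright belief in p is credence in p
% at or above a threshold t.
\[
  \mathrm{Bel}(p) \iff \mathrm{Cr}(p) \ge t, \qquad \tfrac{1}{2} < t \le 1
\]
```

    On the token-identity reading the paper rejects, the state of believing p just is the state of having credence in p at or above the threshold; cases of mistuned knowledge are meant to show that belief can be knowledgeable and rational while the corresponding credence is not rational, so the two states cannot be identical.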

    Learning and Reasoning for Robot Sequential Decision Making under Uncertainty

    Robots frequently face complex tasks that require more than one action, which makes sequential decision-making (SDM) capabilities necessary. The key contribution of this work is a robot SDM framework, called LCORPP, that simultaneously supports supervised learning for passive state estimation, automated reasoning with declarative human knowledge, and planning under uncertainty toward achieving long-term goals. In particular, we use a hybrid reasoning paradigm to refine the state estimator and to provide informative priors for the probabilistic planner. In experiments, a mobile robot is tasked with estimating human intentions using their motion trajectories, declarative contextual knowledge, and human-robot interaction (dialog-based and motion-based). Results suggest that, in both efficiency and accuracy, our framework performs better than its no-learning and no-reasoning counterparts in an office environment.
    Comment: In proceedings of the 34th AAAI Conference on Artificial Intelligence, 202
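
    The abstract describes the framework only at a high level. The sketch below illustrates the general shape of such a learning-reasoning-planning pipeline; every class and function name in it is hypothetical, invented for illustration, and none of it is taken from the LCORPP implementation.

```python
# Hypothetical sketch of a learning + reasoning + planning pipeline in the
# spirit described by the abstract: a learned estimator produces a noisy
# guess of the human's intention, declarative contextual rules reweight it
# into an informative prior, and a planner picks an action under the
# resulting uncertainty.  All names here are placeholders, not LCORPP code.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class BeliefState:
    """Probability distribution over possible human intentions."""
    probs: Dict[str, float]

    def normalized(self) -> "BeliefState":
        total = sum(self.probs.values())
        return BeliefState({k: v / total for k, v in self.probs.items()})


def learned_estimate(trajectory: List[tuple]) -> BeliefState:
    """Stand-in for the supervised state estimator over motion
    trajectories; a trivial heuristic suffices for illustration."""
    moving_right = bool(trajectory) and trajectory[-1][0] > trajectory[0][0]
    return BeliefState({"leaving": 0.7, "staying": 0.3} if moving_right
                       else {"leaving": 0.3, "staying": 0.7})


def apply_declarative_rules(belief: BeliefState,
                            context: Dict[str, bool]) -> BeliefState:
    """Stand-in for reasoning with declarative contextual knowledge:
    a rule reweights the learned estimate into an informative prior."""
    probs = dict(belief.probs)
    if context.get("lunch_time"):
        probs["leaving"] = probs.get("leaving", 0.0) * 2.0
    return BeliefState(probs).normalized()


def plan(prior: BeliefState, actions: List[str],
         reward: Callable[[str, str], float]) -> str:
    """Stand-in for the probabilistic planner: choose the action with
    the highest expected reward under the prior."""
    def expected(action: str) -> float:
        return sum(p * reward(action, intention)
                   for intention, p in prior.probs.items())
    return max(actions, key=expected)


if __name__ == "__main__":
    trajectory = [(0.0, 0.0), (1.0, 0.2), (2.1, 0.3)]   # toy motion trace
    prior = apply_declarative_rules(learned_estimate(trajectory),
                                    {"lunch_time": True})
    chosen = plan(prior, ["follow", "wait"],
                  reward=lambda a, i: 1.0 if (a, i) == ("follow", "leaving")
                  else 0.2)
    print(prior.probs, chosen)
```

    The point the sketch tries to convey is the division of labour: the learned estimator handles noisy perception, the declarative rules inject contextual human knowledge as a prior, and the planner only ever sees the resulting distribution.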