Uncertainty About Evidence
We develop a logical framework for reasoning about knowledge and evidence in
which the agent may be uncertain about how to interpret their evidence. Rather
than representing an evidential state as a fixed subset of the state space, our
models allow the set of possible worlds to which a piece of evidence
corresponds to vary from one possible world to another, and therefore itself
be a subject of uncertainty. Such structures can be viewed as (epistemically motivated)
generalizations of topological spaces. In this context, there arises a natural
distinction between what is actually entailed by the evidence and what the
agent knows is entailed by the evidence -- with the latter, in general, being
much weaker. We provide a sound and complete axiomatization of the
corresponding bi-modal logic of knowledge and evidence entailment, and
investigate some natural extensions of this core system, including the addition
of a belief modality and its interaction with evidence interpretation and
entailment, and the addition of a "knowability" modality interpreted via a
(generalized) interior operator.
Comment: In Proceedings TARK 2019, arXiv:1907.0833
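The gap between what the evidence actually entails and what the agent knows it entails can be made concrete in a small toy model. The sketch below is ours, not the paper's formalism: evidence is encoded as a map from worlds to sets of worlds (its interpretation may vary with the actual world), and for simplicity the agent is assumed to consider every world possible.

```python
# Toy model: evidence whose interpretation varies across worlds.
# Names (`ev`, `actually_entails`, `knows_entails`) and the assumption
# that the agent considers all worlds possible are illustrative only.

WORLDS = {"w1", "w2"}

# ev[w] = the set of worlds the evidence picks out *if* w is actual.
ev = {
    "w1": {"w1"},        # at w1, the evidence pins down w1 exactly
    "w2": {"w1", "w2"},  # at w2, the same evidence is uninformative
}

def actually_entails(w, prop):
    """The evidence, as it actually behaves at world w, entails prop."""
    return ev[w] <= prop

def knows_entails(epistemic_alts, prop):
    """The agent knows the evidence entails prop: it does so at every
    world the agent considers possible."""
    return all(ev[v] <= prop for v in epistemic_alts)

P = {"w1"}  # the proposition "we are in w1"

print(actually_entails("w1", P))  # True: at w1 the evidence settles P
print(knows_entails(WORLDS, P))   # False: at w2 it would not
```

At w1 the evidence does entail P, yet the agent cannot know this, since for all they can tell the actual world is w2, where the same evidence leaves P open; this is the sense in which known entailment is, in general, much weaker than actual entailment.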
Opaque Updates
If updating with E has the same result across all epistemically possible worlds, then the agent has no uncertainty about the behavior of the update, and we may call the update transparent. If the agent is uncertain about the behavior of an update, we may call it opaque. In order to model the uncertainty an agent has about the result of an update, the same update must behave differently across different possible worlds. In this paper, I study opaque updates using a simple system of dynamic epistemic logic suitably modified for that purpose. The paper highlights the connection between opaque updates and the dynamic-epistemic principles Perfect Recall and No Miracles. I argue that opaque updates are central to contemporary discussions in epistemology, in particular to externalist theories of knowledge and to the related problem of epistemic bootstrapping, or easy knowledge. Opaque updates allow us to explicitly investigate a dynamic (or diachronic) form of uncertainty, using simple and precise logical tools.
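The transparent/opaque distinction can be sketched computationally. In the toy encoding below (ours, not the paper's system), an update takes the actual world and the agent's current epistemic state and returns a new state; the update is transparent at a state exactly when it yields the same result at every world in that state, and opaque otherwise.

```python
# Toy check for transparent vs. opaque updates.  The encoding and the
# particular update used here are illustrative assumptions, not the
# paper's definitions.

def update(world, state):
    """Result of updating epistemic state `state` with E, as E behaves
    at `world`: here E eliminates w3 only if the actual world is w1 or w2."""
    if world in {"w1", "w2"}:
        return state - {"w3"}
    return state

def is_transparent(state):
    """Transparent at `state` iff the update yields the same result at
    every world the agent considers possible (i.e., every world in state)."""
    results = {frozenset(update(w, state)) for w in state}
    return len(results) == 1

S = {"w1", "w2", "w3"}
print(is_transparent(S))             # False: at w3 the update behaves
                                     # differently, so it is opaque there
print(is_transparent({"w1", "w2"}))  # True: same result at w1 and at w2
```

Note how opacity requires exactly what the abstract describes: the same update behaving differently across the worlds the agent cannot distinguish.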