21 research outputs found
Iterated Belief Change and the Levi Identity
Most works on iterated belief change have focused on iterated belief revision, namely, on how to compute (K * x) * y. However, historically, belief revision has been defined in terms of belief expansion and belief contraction, which have been viewed as the primary operations. Accordingly, what we should be looking at are constructions like (K + x) + y, (K + x) - y, (K - x) + y and (K - x) - y. The first two constructions are relatively innocuous. The last two are, however, more problematic. We look at these sequential operations. In the process, we use the Levi Identity as the guiding principle behind state changes (as opposed to belief set changes).
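As a rough illustration of the operations this abstract names (not the paper's own construction), the following sketch models a belief set K as a set of possible worlds, defines expansion and a simple rank-based contraction, and derives revision via the Levi Identity K * x = (K - ¬x) + x. The atoms, the rank function, and the helper names are all illustrative assumptions.

```python
from itertools import product

# Hypothetical mini-model: two atoms, worlds as truth assignments, a belief
# set K as a set of world indices, and a plausibility rank guiding contraction.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def models(phi):
    """Indices of the worlds satisfying phi (phi is a predicate on worlds)."""
    return frozenset(i for i, w in enumerate(WORLDS) if phi(w))

def expand(K, phi):
    """Expansion K + phi: keep only the K-worlds where phi holds."""
    return K & models(phi)

def contract(K, phi, rank):
    """Contraction K - phi: if K entails phi, admit the most plausible non-phi worlds."""
    counter = frozenset(range(len(WORLDS))) - models(phi)
    if not counter or not K <= models(phi):
        return K  # phi is a tautology, or K never entailed phi
    best = min(rank(i) for i in counter)
    return K | frozenset(i for i in counter if rank(i) == best)

def revise(K, phi, rank):
    """Levi Identity: K * phi = (K - not-phi) + phi."""
    return expand(contract(K, lambda w: not phi(w), rank), phi)

# Illustrative rank: worlds with fewer false atoms are more plausible.
rank = lambda i: sum(not v for v in WORLDS[i].values())
K = models(lambda w: w["p"] and w["q"])        # believe p and q
K_new = revise(K, lambda w: not w["p"], rank)  # revise by not-p
# K_new keeps exactly the world where p is false and q is true
```

With this rank, revising "p and q" by "not p" gives up p but retains q, as the Levi route through contraction-then-expansion predicts.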
Trust-sensitive belief revision
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed.
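The partition idea above can be sketched concretely. In this hypothetical example (the atoms, the partition, and the ranking are illustrative assumptions, not the paper's formalism), a doctor is trusted on the medical atom but not the weather atom, so a report is first weakened to the union of partition cells it intersects, and only that trusted content is used for revision.

```python
from itertools import product

ATOMS = ("sick", "rain")  # one medical atom, one weather atom (illustrative)
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def models(phi):
    """Indices of the worlds satisfying phi (phi is a predicate on worlds)."""
    return frozenset(i for i, w in enumerate(WORLDS) if phi(w))

# The reporting agent (a doctor) is trusted on "sick" but not on "rain":
# worlds differing only on "rain" sit in the same partition cell.
PARTITION = [models(lambda w: w["sick"]), models(lambda w: not w["sick"])]

def relativize(report, partition):
    """Weaken a report to the union of the partition cells it intersects."""
    return frozenset().union(*(cell for cell in partition if cell & report))

def trust_sensitive_revise(rank, report, partition):
    """Believe the most plausible worlds consistent with the trusted content."""
    trusted = relativize(report, partition)
    best = min(rank(i) for i in trusted)
    return frozenset(i for i in trusted if rank(i) == best)

# Illustrative prior: worlds with fewer true atoms are more plausible.
rank = lambda i: sum(WORLDS[i].values())
report = models(lambda w: w["sick"] and w["rain"])  # doctor reports "sick and rain"
beliefs = trust_sensitive_revise(rank, report, PARTITION)
# beliefs accept "sick" (trusted domain) but not "rain" (outside it)
```

Relativization is what makes the operator selective: the untrusted part of the report ("rain") is silently discarded before revision rather than believed.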
An Investigation of Darwiche and Pearl's Postulates for Iterated Belief Update
Belief revision and update, two significant types of belief change, both focus on how an agent modifies her beliefs in the presence of new information. The most striking difference between them is that the former studies the change of beliefs in a static world while the latter concentrates on a dynamically-changing world. The famous AGM and KM postulates were proposed to capture rational belief revision and update, respectively. However, both of them are too permissive to exclude some unreasonable changes in the iteration. In response to this weakness, the DP postulates and their extensions for iterated belief revision were presented. Furthermore, Rodrigues integrated these postulates into belief update. Unfortunately, his approach does not meet the basic requirement of iterated belief update. This paper is intended to solve this problem with Rodrigues's approach. Firstly, we present a modification of the original KM postulates based on belief states. Subsequently, we migrate several well-known postulates for iterated belief revision to iterated belief update. Moreover, we provide exact semantic characterizations based on partial preorders for each of the proposed postulates. Finally, we analyze the compatibility between the above iterated postulates and the KM postulates for belief update.
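The static-versus-dynamic contrast between AGM revision and KM update has a standard semantic reading that a small sketch can make vivid: revision picks the new-information worlds closest to the belief set as a whole, while update moves each believed world to its own closest new-information worlds and unions the results. The Hamming distance and the two operators below are a common textbook rendering (Dalal-style revision, PMA-style update), not the constructions of this particular paper.

```python
from itertools import product

ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def models(phi):
    """Indices of the worlds satisfying phi (phi is a predicate on worlds)."""
    return frozenset(i for i, w in enumerate(WORLDS) if phi(w))

def hamming(i, j):
    """Number of atoms on which two worlds disagree."""
    return sum(WORLDS[i][a] != WORLDS[j][a] for a in ATOMS)

def revise(K, phi):
    """Revision (Dalal-style): the phi-worlds closest to K as a whole."""
    phi_w = models(phi)
    dist = {j: min(hamming(i, j) for i in K) for j in phi_w}
    best = min(dist.values())
    return frozenset(j for j in phi_w if dist[j] == best)

def update(K, phi):
    """KM update (PMA-style): update each K-world separately, then union."""
    phi_w = models(phi)
    result = set()
    for i in K:
        best = min(hamming(i, j) for j in phi_w)
        result |= {j for j in phi_w if hamming(i, j) == best}
    return frozenset(result)

# Classic separating case: believe "exactly one of p, q", then learn p.
K = models(lambda w: w["p"] != w["q"])
after_revise = revise(K, lambda w: w["p"])  # concludes p and not-q
after_update = update(K, lambda w: w["p"])  # concludes only p
```

On this input revision collapses to the single K-world already satisfying p, whereas update also carries the other K-world forward to its nearest p-world, so only p itself is concluded; this is exactly the static/dynamic divergence the KM postulates are meant to capture.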
Trust as a precursor to belief revision
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information.