
    Common learning

    Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents will commonly learn the value of the parameter, that is, that the true value of the parameter will become approximate common knowledge. The essential step in this argument is to express the expectation of one agent's signals, conditional on those of the other agent, in terms of a Markov chain. This allows us to invoke a contraction mapping principle ensuring that if one agent's signals are close to those expected under a particular value of the parameter, then that agent expects the other agent's signals to be even closer to those expected under the parameter value. In contrast, if the agents' observations come from a countably infinite signal space, then this contraction mapping property fails. We show by example that common learning can fail in this case.
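
    For orientation, the "approximate common knowledge" invoked here is standardly formalised as iterated q-belief in the sense of Monderer and Samet. A sketch of that definition, with the operator names B_i^q and F^q chosen here for illustration rather than taken from the paper:

        B_i^q(E) = \{\, \omega : \Pr\nolimits_i(E \mid h_i^t(\omega)) \ge q \,\}   % agent i assigns probability at least q to E, given her private history
        F^q(E)   = B_1^q(E) \cap B_2^q(E)                                          % both agents q-believe E
        E \text{ is common } q\text{-belief at } \omega \iff \omega \in \bigcap_{n \ge 1} (F^q)^n(E)

    On this reading, the parameter is commonly learnt if, for every q < 1, its true value eventually becomes common q-belief with probability at least q.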

    Extending the Harper Identity to Iterated Belief Change

    The field of iterated belief change has focused mainly on revision, with the other main operator of AGM belief change theory, i.e. contraction, receiving relatively little attention. In this paper we extend the Harper Identity from single-step change to define iterated contraction in terms of iterated revision. Specifically, just as the Harper Identity provides a recipe for defining the belief set resulting from contracting A in terms of (i) the initial belief set and (ii) the belief set resulting from revision by ¬A, we look at ways to define the plausibility ordering over worlds resulting from contracting A in terms of (iii) the initial plausibility ordering, and (iv) the plausibility ordering resulting from revision by ¬A. After noting that the most straightforward such extension leads to a trivialisation of the space of permissible orderings, we provide a family of operators for combining plausibility orderings that avoid such a result. These operators are characterised in our domain of interest by a pair of intuitively compelling properties, which turn out to enable the derivation of a number of iterated contraction postulates from postulates for iterated revision. We finish by observing that a salient member of this family allows for the derivation of counterparts for contraction of some well-known iterated revision operators, as well as for defining new iterated contraction operators.
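
    For reference, the single-step Harper Identity mentioned above defines contraction (÷) from revision (*) by intersecting the initial belief set with the result of revising by the negated sentence:

        K \div A = K \cap (K * \neg A)

    The extension pursued in the paper lifts this recipe from belief sets to plausibility orderings, combining the initial ordering over worlds with the ordering that results from revision by ¬A.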

    Common Learning

    Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents will commonly learn its value, i.e., that the true value of the parameter will become approximate common knowledge. In contrast, if the agents' observations come from a countably infinite signal space, then the contraction mapping property underlying this argument fails. We show by example that common learning can fail in this case.
    Keywords: common learning, common belief, private signals, private beliefs

    Decrement Operators in Belief Change

    While research on iterated revision is predominant in the field of iterated belief change, the class of iterated contraction operators has received more attention in recent years. In this article, we examine a non-prioritized generalisation of iterated contraction. In particular, we introduce the class of weak decrement operators: operators that achieve over multiple steps the same result as a single contraction. Inspired by Darwiche and Pearl's work on iterated revision, we define the subclass of decrement operators. For both decrement and weak decrement operators we present postulates, and for each of them we give a representation theorem in the framework of total preorders. Furthermore, we present two sub-types of decrement operators.
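
    In the total-preorder framework mentioned here, an epistemic state Ψ is equipped with a total preorder ≤_Ψ over worlds, and single-step AGM contraction keeps the most plausible worlds overall together with the most plausible ¬A-worlds. A sketch of that standard semantics (the names Mod, Bel and min are notational choices, not the article's):

        \mathrm{Mod}(\mathrm{Bel}(\Psi \div A)) = \min(\le_\Psi, W) \cup \min(\le_\Psi, \mathrm{Mod}(\neg A))

    where min(≤, S) denotes the ≤-minimal elements of S. A weak decrement operator, as described above, reaches the same belief set only after multiple steps rather than in one.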