Belief revision by examples
A common assumption in belief revision is that the reliability of the
information sources is either given, derived from temporal information, or
assumed equal for all sources. This article does not describe a new semantics
for integration; rather, it addresses the problem of obtaining the reliability
of the sources from the result of a previous merging. As an example, the
relative reliability of two sensors can be assessed given some certain
observation, and this ordering then supports subsequent mergings of data
coming from them.
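As an informal illustration of this idea (not the article's formal semantics), the following Python sketch scores two hypothetical sensors against a certain observation and then uses the resulting reliability ordering in a subsequent merge; all names and data here are invented for the example:

```python
def reliability_order(observation, reports):
    """Rank sources by agreement with a certain (known-correct) observation."""
    scores = {src: sum(val == observation.get(var)
                       for var, val in rep.items() if var in observation)
              for src, rep in reports.items()}
    return sorted(scores, key=scores.get, reverse=True)

def prioritized_merge(reports, order):
    """Merge reports so that, on conflicts, the more reliable source wins."""
    merged = {}
    for src in reversed(order):   # least reliable first, overwritten later
        merged.update(reports[src])
    return merged

observation = {'temperature': 20}                        # certain observation
reports = {'s1': {'temperature': 20, 'humidity': 0.50},
           's2': {'temperature': 25, 'humidity': 0.60}}
order = reliability_order(observation, reports)          # s1 agrees, s2 does not
merged = prioritized_merge(reports, order)               # s1 wins on the conflict
```

Here the single certain observation fixes the ordering s1 > s2, which then resolves the conflicting temperature readings in favour of s1.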
Belief Revision, Minimal Change and Relaxation: A General Framework based on Satisfaction Systems, and Applications to Description Logics
Belief revision of knowledge bases represented by a set of sentences in a
given logic has been extensively studied, but only for specific logics: mainly
propositional, and more recently Horn and description logics. Here, we propose
to generalize this operation from a model-theoretic point of view, by defining
revision in an abstract model theory known under the name of satisfaction
systems. In this framework, we generalize to arbitrary satisfaction systems
the characterization of the well-known AGM postulates given by Katsuno and
Mendelzon for propositional logic in terms of minimal change among
interpretations. Moreover, we study how to define revision, satisfying the AGM
postulates, from relaxation notions that were first introduced in description
logics to define dissimilarity measures between concepts, and whose effect is
to relax the set of models of the old belief until it becomes consistent with
the new pieces of knowledge. We show how the proposed general framework can be
instantiated in different logics such as propositional, first-order,
description and Horn logics. In particular, for description logics, we
introduce several concrete relaxation operators tailored for the description
logic \ALC{} and its fragments \EL{} and \ELext{}, discuss their properties
and provide some illustrative examples.
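To make the relaxation idea concrete, here is a minimal propositional sketch, assuming Hamming-distance balls as the relaxation operator (one possible choice; this is not the paper's description-logic construction): the model set of the old belief is grown step by step until it meets the models of the new information, and the revision result is their intersection at the first consistent stage.

```python
from itertools import product

ATOMS = ('p', 'q')
VALS = list(product((False, True), repeat=len(ATOMS)))  # all valuations

def models(pred):
    """Models of a formula given as a Python predicate over a valuation dict."""
    return [v for v in VALS if pred(dict(zip(ATOMS, v)))]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def revise(old_pred, new_pred):
    """Relax Mod(old) by growing Hamming balls until it meets Mod(new)."""
    old_m, new_m = models(old_pred), models(new_pred)
    for k in range(len(ATOMS) + 1):
        relaxed = [v for v in VALS if any(hamming(v, m) <= k for m in old_m)]
        result = [v for v in relaxed if v in new_m]
        if result:
            return result     # first consistent stage = minimal change
    return new_m

# Revising (p AND q) by (NOT p): the old model {p,q} must be relaxed one step,
# and the result keeps q, the minimal-change outcome.
result = revise(lambda v: v['p'] and v['q'], lambda v: not v['p'])
```

With a Hamming-ball relaxation this coincides with distance-based minimal-change revision in the propositional case.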
Extended PCR Rules for Dynamic Frames
In most classical fusion problems modeled with belief functions, the frame of discernment is considered static: the set of elements in the frame and the underlying integrity constraints of the frame are fixed once and for all and do not change with time. In some applications, such as target tracking, such an invariant frame is not appropriate because the frame can actually change with time. It is therefore necessary to adapt the Proportional Conflict Redistribution fusion rules (PCR5 and PCR6) to work with dynamical frames. In this paper, we propose an extension of the PCR5 and PCR6 rules for working in a frame subject to non-existential integrity constraints. Such constraints on the frame can arise in tracking applications, for example through the destruction of targets. We show through very simple examples how these new rules can be used for the belief revision process.
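For orientation, the following sketch implements the static PCR5 rule that the paper extends (the dynamic, constraint-aware extension is not reproduced here): the product of each pair of conflicting focal elements is redistributed back to those two elements proportionally to their masses.

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two mass functions on a static frame.
    m1, m2: dicts mapping frozenset focal elements to mass values."""
    m = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                          # conjunctive consensus
            m[inter] = m.get(inter, 0.0) + wa * wb
        elif wa + wb > 0:                  # conflict: proportional redistribution
            m[a] = m.get(a, 0.0) + wa ** 2 * wb / (wa + wb)
            m[b] = m.get(b, 0.0) + wb ** 2 * wa / (wa + wb)
    return m

# Toy frame {A, B}; the conflicting product m1(A)*m2(B) = 0.18 is split
# between A and B in proportion 0.6 : 0.3.
A, B = frozenset({'A'}), frozenset({'B'})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.3, A | B: 0.7}
combined = pcr5(m1, m2)    # total mass is conserved
```

Because the conflict is redistributed rather than discarded or normalized away, the combined masses still sum to one without Dempster-style renormalization.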
Learning Probabilities: Towards a Logic of Statistical Learning
We propose a new model for forming beliefs and learning about unknown
probabilities (such as the probability of picking a red marble from a bag with
an unknown distribution of coloured marbles). The most widespread model for
such situations of 'radical uncertainty' is in terms of imprecise
probabilities, i.e. representing the agent's knowledge as a set of probability
measures. We add to this model a plausibility map, associating to each measure
a plausibility number, as a way to go beyond what is known with certainty and
represent the agent's beliefs about probability. There are a number of standard
examples: Shannon Entropy, Centre of Mass etc. We then consider learning of two
types of information: (1) learning by repeated sampling from the unknown
distribution (e.g. picking marbles from the bag); and (2) learning higher-order
information about the distribution (in the shape of linear inequalities, e.g.
we are told there are more red marbles than green marbles). The first changes
only the plausibility map (via a 'plausibilistic' version of Bayes' Rule), but
leaves the given set of measures unchanged; the second shrinks the set of
measures, without changing their plausibility. Beliefs are defined as in Belief
Revision Theory, in terms of truth in the most plausible worlds. But our belief
change does not comply with standard AGM axioms, since the revision induced by
(1) is of a non-AGM type. This is essential, as it allows our agents to learn
the true probability: we prove that the beliefs obtained by repeated sampling
converge almost surely to the correct belief (in the true probability). We end
by sketching the contours of a dynamic doxastic logic for statistical learning.
Comment: In Proceedings TARK 2019, arXiv:1907.0833
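The two update types can be illustrated with a toy version, a sketch under simplifying assumptions of our own (a finite grid of candidate biases and a uniform initial plausibility map), not the paper's full semantics:

```python
# Candidate probability measures: possible biases for drawing a red marble.
candidates = [i / 10 for i in range(1, 10)]
plaus = {p: 1.0 for p in candidates}     # initial plausibility map (uniform)

def sample_update(color, plaus):
    """Type (1): plausibilistic Bayes. Each measure's plausibility is scaled
    by the likelihood it assigns to the sample; the set itself is unchanged."""
    return {p: w * (p if color == 'red' else 1 - p) for p, w in plaus.items()}

def constraint_update(pred, plaus):
    """Type (2): higher-order information. The set of measures shrinks;
    surviving plausibilities are unchanged."""
    return {p: w for p, w in plaus.items() if pred(p)}

def belief(plaus):
    """Belief as truth in the most plausible world: return the maximizer."""
    return max(plaus, key=plaus.get)

# Repeated sampling from a bag whose true red-proportion is 0.7:
for color in ['red'] * 7 + ['green'] * 3:
    plaus = sample_update(color, plaus)
# 'More red than green marbles' as a (linear) constraint on the measures:
plaus = constraint_update(lambda p: p > 0.5, plaus)
```

After these ten samples the plausibility of each candidate p is proportional to p^7 (1-p)^3, so the belief already singles out 0.7; in the limit of repeated sampling this concentration on the true bias is exactly the almost-sure convergence the abstract describes.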
Non-prioritized reasoning in intelligent agents
The design of intelligent agents is greatly influenced by the many different models that exist to represent knowledge. It is essential that such agents have computationally adequate mechanisms to manage their knowledge, which more often than not is incomplete and/or inconsistent. It is also important for an agent to be able to obtain new conclusions that allow it to reason about the state of the world in which it is embedded.
It has been proven that this problem cannot be solved within the realm of classical logic.
This situation has triggered the development of a series of logical formalisms that extend the classical ones. These proposals often carry the names of Nonmonotonic Reasoning or Defeasible Reasoning. Some examples of such models are McDermott and Doyle's Nonmonotonic Logics, Reiter's Default Logic, Moore's Autoepistemic Logic, McCarthy's Circumscription, and Belief Revision (also called Belief Change). This last formalism was introduced by Gärdenfors and later extended by Alchourrón, Gärdenfors, and Makinson [1, 4].
Track: Artificial Intelligence. Red de Universidades con Carreras en Informática (RedUNCI)