Coherent Integration of Databases by Abductive Logic Programming
We introduce an abductive method for the coherent integration of independent
data sources. The idea is to compute a list of data facts that should be
inserted into the amalgamated database, or retracted from it, in order to
restore its consistency. This method is implemented by an abductive solver,
called Asystem, that applies SLDNFA-resolution on a meta-theory that relates
different, possibly contradicting, input databases. We also give a purely
model-theoretic analysis of the possible ways to `recover' consistent data from
an inconsistent database, in terms of those models of the database that exhibit
as little inconsistent information as reasonably possible. This allows us to
characterize the `recovered databases' in terms of the `preferred' (i.e., most
consistent) models of the theory. The outcome is an abduction-based application
that is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
integration of databases.
Reasoning in inconsistent prioritized knowledge bases: an argumentative approach
Query answering in prioritized ontological knowledge bases (KBs) has received attention in recent years. While several query answering semantics have been proposed and their complexity is rather well understood, the problem of explaining inconsistency-tolerant query answers has received less attention. Explaining query answers permits users to understand not only what is or is not entailed by an inconsistent DL-LiteR KB in the presence of priorities, but also why. We are therefore concerned with the use of argumentation frameworks to help users better understand explanation techniques for query answers over inconsistent DL-LiteR KBs in the presence of priorities. More specifically, we propose a new variant of Dung's argumentation frameworks that corresponds to a given inconsistent DL-LiteR KB. We clarify a close relation between the preferred subtheories adopted in such a prioritized DL-LiteR setting and the acceptable semantics of the corresponding argumentation framework. This result paves the way for applying algorithms and proof theories to establish preferred-subtheory inferences in prioritized DL-LiteR KBs.
Inconsistency Handling in Ontology-Mediated Query Answering: A Progress Report
This paper accompanies an invited talk on inconsistency handling in OMQA and presents a concise summary of the research that has been conducted in the area.
Complexity of Inconsistency-Tolerant Query Answering in Datalog+/- under Cardinality-Based Repairs
Querying inconsistent ontological knowledge bases is an important problem in practice, for which
several inconsistency-tolerant semantics have been proposed. In these semantics, one assumes that the
input database is erroneous, and a repair is a maximally consistent database subset. Different notions
of maximality (such as subset and cardinality maximality) have been considered. In this paper, we give
a precise picture of the computational complexity of inconsistency-tolerant query answering in a wide
range of Datalog+/- languages under the cardinality-based versions of three prominent repair semantics.
Introducing Preference-Based Argumentation to Inconsistent Ontological Knowledge Bases
Handling inconsistency is an inherent part of decision making in traditional agri-food chains, due to the various concerns involved. Argumentation theory can be used to explain the source of inconsistency and to represent the existing conflicts in an ontological knowledge base. However, the current state-of-the-art methodology does not make it possible to take into account the level of significance of the knowledge expressed by the various ontological knowledge sources. We propose to use preferences to model those differences between formulas, and we evaluate our proposal practically by implementing it within the INRA platform and presenting a use case of this formalism in a bread-making decision support system.
Complexity of Inconsistency-Tolerant Query Answering in Datalog+/- under Cardinality-Based Repairs
Querying inconsistent ontological knowledge bases is an important
problem in practice, for which several inconsistency-tolerant
query answering semantics have been proposed, including
query answering relative to all repairs, relative to
the intersection of repairs, and relative to the intersection of
closed repairs. In these semantics, one assumes that the input
database is erroneous, and the notion of repair describes a
maximally consistent subset of the input database, where different
notions of maximality (such as subset and cardinality
maximality) are considered. In this paper, we give a precise
picture of the computational complexity of inconsistency-tolerant
(Boolean conjunctive) query answering in a wide
range of Datalog± languages under the cardinality-based versions
of the above three repair semantics. This work was supported by the Alan
Turing Institute under the UK EPSRC grant EP/N510129/1,
and by the EPSRC grants EP/R013667/1, EP/L012138/1,
and EP/M025268/1.
Complexity of Approximate Query Answering under Inconsistency in Datalog+/-
Several semantics have been proposed to query inconsistent ontological
knowledge bases, including the intersection of repairs and the intersection of closed
repairs as two approximate inconsistency-tolerant semantics. In this paper, we
analyze the complexity of conjunctive query answering under these two semantics
for a wide range of Datalog± languages. We consider both the standard setting,
where errors may only be in the database, and the generalized setting, where
the rules of a Datalog± knowledge base may also be erroneous. This work was supported by The Alan Turing Institute under the
UK EPSRC grant EP/N510129/1, and by the EPSRC grants EP/R013667/1, EP/L012138/1,
and EP/M025268/1.