2 research outputs found

    Localising iceberg inconsistencies

    In artificial intelligence, it is important to handle and analyse inconsistency in knowledge bases. Inconsistent pieces of information suggest questions like “where is the inconsistency?” and “how severe is it?”. Inconsistency measures have been proposed to tackle the latter issue, but the former seems underdeveloped and is the focus of this paper. Minimal inconsistent sets have been the main tool to localise inconsistency, but we argue that they are like the exposed part of an iceberg, failing to capture contradictions hidden under the water. Using classical propositional logic, we develop methods to characterise when a formula contributes to the inconsistency of a knowledge base and when a set of formulas can be regarded as a primitive conflict. To achieve this, we employ an abstract consequence operation to “look beneath the water level”, generalising the minimal inconsistent set concept and the related free formula notion. We apply the framework to the problem of measuring inconsistency in knowledge bases, putting forward relaxed forms of two debatable postulates for inconsistency measures. Finally, we discuss the computational complexity issues related to the introduced concepts.
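
    The abstract leans on minimal inconsistent sets as the standard localisation tool, so a small worked example may help. The Python sketch below is an illustration under assumed conventions, not the paper's own framework: formulas are hand-written Boolean functions over two atoms and satisfiability is checked by brute force. It enumerates the subset-minimal inconsistent subsets of a toy knowledge base, i.e. exactly the "exposed" conflicts the paper argues one must look beneath.

        from itertools import combinations, product

        ATOMS = ["p", "q"]  # assumed toy signature

        def satisfiable(formulas):
            """Brute-force satisfiability check over all assignments to ATOMS."""
            for values in product([False, True], repeat=len(ATOMS)):
                v = dict(zip(ATOMS, values))
                if all(fn(v) for _, fn in formulas):
                    return True
            return False

        def minimal_inconsistent_sets(kb):
            """Subset-minimal inconsistent subsets of kb (a list of (name, fn) pairs)."""
            mis = []
            for size in range(1, len(kb) + 1):
                for subset in combinations(kb, size):
                    names = {name for name, _ in subset}
                    if any(m <= names for m in mis):
                        continue  # a smaller inconsistent set already sits inside
                    if not satisfiable(list(subset)):
                        mis.append(names)
            return mis

        # Toy knowledge base: {p, ¬p, p → q, ¬q}
        kb = [
            ("p",     lambda v: v["p"]),
            ("¬p",    lambda v: not v["p"]),
            ("p → q", lambda v: (not v["p"]) or v["q"]),
            ("¬q",    lambda v: not v["q"]),
        ]

        for m in minimal_inconsistent_sets(kb):
            print(m)   # {'p', '¬p'} and {'p', 'p → q', '¬q'}

    A formula appearing in no such set is what the literature calls free; the abstract's point is that a free formula may still hide a contribution to the inconsistency, which is what the consequence-operation machinery of the paper is meant to expose and what this sketch does not capture.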

    On Measuring Inconsistency Using Maximal Consistent Sets

    An important problem in knowledge-based systems is inconsistency handling. This problem has recently attracted a lot of attention in the AI community. In this paper, we tackle the problem of evaluating the amount of conflict in knowledge bases, and provide a new fine-grained inconsistency measure, denoted MCSC, based on maximal consistent sets. In particular, it is suitable in systems where inconsistency results from multiple consistent sources. We show that our measure satisfies several rational postulates proposed in the literature. Moreover, we provide an integer linear programming encoding for computing MCSC.
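
    The abstract does not spell out the MCSC definition, so the Python sketch below (same brute-force satisfiability check as above, toy knowledge base, all names illustrative) only shows the raw ingredient the measure is built from, the maximal consistent subsets, together with one simple derived quantity that stands in for, but is not, the paper's measure. The integer linear programming encoding mentioned in the abstract is not reproduced here.

        from itertools import combinations, product

        ATOMS = ["p", "q"]

        def satisfiable(formulas):
            """Brute-force satisfiability check over all assignments to ATOMS."""
            for values in product([False, True], repeat=len(ATOMS)):
                v = dict(zip(ATOMS, values))
                if all(fn(v) for _, fn in formulas):
                    return True
            return False

        def maximal_consistent_sets(kb):
            """Subset-maximal consistent subsets of kb, found by shrinking from the full base."""
            mcs = []
            for size in range(len(kb), 0, -1):
                for subset in combinations(kb, size):
                    names = {name for name, _ in subset}
                    if any(names <= m for m in mcs):
                        continue  # already contained in a larger consistent subset
                    if satisfiable(list(subset)):
                        mcs.append(names)
            return mcs

        # Toy base with one conflict: {p, ¬p, q}
        kb = [
            ("p",  lambda v: v["p"]),
            ("¬p", lambda v: not v["p"]),
            ("q",  lambda v: v["q"]),
        ]

        mcs = maximal_consistent_sets(kb)
        print(mcs)                                  # [{'p', 'q'}, {'¬p', 'q'}]
        print(len(kb) - max(len(m) for m in mcs))   # 1: a crude, illustrative conflict count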