
    A set-based reasoner for the description logic \shdlssx (Extended Version)

    We present a \ke-based implementation of a reasoner for a decidable fragment of (stratified) set theory expressing the description logic \dlssx (\shdlssx, for short). Our application solves the main TBox and ABox reasoning problems for \shdlssx. In particular, it solves the consistency problem for \shdlssx-knowledge bases represented in set-theoretic terms, and a generalization of the \emph{Conjunctive Query Answering} problem in which conjunctive queries with variables of three sorts are admitted. The reasoner, which extends and optimizes a previous prototype for the consistency checking of \shdlssx-knowledge bases (see \cite{cilc17}), is implemented in \textsf{C++}. It supports \shdlssx-knowledge bases serialized in the OWL/XML format, and it also admits rules expressed in SWRL (Semantic Web Rule Language).
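
    Purely as an illustration of the three-sorted queries mentioned above (the concrete syntax is an assumption, not taken from the abstract): besides ordinary individual variables $x, y$, such a query may also contain a concept variable $X$ and a role variable $P$, as in
    \[
    q(x, X, P) \;=\; x : \mathsf{Person} \;\wedge\; (x, y) : P \;\wedge\; y : X,
    \]
    whose answers bind individuals, roles and concepts simultaneously; with only individual variables this reduces to standard conjunctive query answering.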

    Web ontology representation and reasoning via fragments of set theory

    In this paper we use results from Computable Set Theory as a means to represent and reason about description logics and rule languages for the semantic web. Specifically, we introduce the description logic \mathcal{DL}\langle 4LQS^R\rangle(\D)--admitting features such as min/max cardinality constructs on the left-hand/right-hand side of inclusion axioms, role chain axioms, and datatypes--which turns out to be quite expressive when compared with \mathcal{SROIQ}(\D), the description logic underpinning the Web Ontology Language OWL. Then we show that the consistency problem for \mathcal{DL}\langle 4LQS^R\rangle(\D)-knowledge bases is decidable by reducing it, through a suitable translation process, to the satisfiability problem of the stratified fragment 4LQS^R of set theory, involving variables of four sorts and a restricted form of quantification. We also prove that, under suitable, not very restrictive constraints, the consistency problem for \mathcal{DL}\langle 4LQS^R\rangle(\D)-knowledge bases is \textbf{NP}-complete. Finally, we provide a 4LQS^R-translation of rules belonging to the Semantic Web Rule Language (SWRL).
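
    As a rough, hedged illustration of the set-theoretic translation (the actual encoding uses four sorts of variables and a restricted quantification discipline; the rendering below merely conveys the idea), an inclusion axiom such as $\mathsf{Person} \sqsubseteq \exists \mathsf{hasParent}.\mathsf{Person}$ can be mapped to a formula like
    \[
    \forall x \big( x \in X_{\mathsf{Person}} \rightarrow \exists y \, (\langle x, y\rangle \in X_{\mathsf{hasParent}} \wedge y \in X_{\mathsf{Person}}) \big),
    \]
    where the concept $\mathsf{Person}$ is modelled by a set variable and the role $\mathsf{hasParent}$ by a variable ranging over sets of ordered pairs, so that consistency of the knowledge base becomes satisfiability of the resulting set-theoretic formulae.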

    Correcting Knowledge Base Assertions

    The usefulness and usability of knowledge bases (KBs) are often limited by quality issues. One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion. We study the problem of correcting such assertions, and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining and semantic consistency checking. The framework is evaluated using DBpedia and an enterprise medical KB.
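
    A minimal sketch of how such a correction pipeline could be wired together (every name below is a hypothetical placeholder, not the framework's actual API):

        # Hypothetical sketch combining the four signals named in the abstract:
        # lexical matching, semantic embedding, soft constraint mining, and
        # semantic consistency checking.
        def correct_assertion(kb, subject, relation, wrong_object,
                              lexical_index, embedding_model, soft_constraints):
            # 1. Lexical matching: entities whose labels resemble the erroneous object.
            candidates = lexical_index.similar_labels(wrong_object, top_k=20)
            # 2. Semantic embedding: score the plausibility of (subject, relation, c).
            scored = [(c, embedding_model.score(subject, relation, c)) for c in candidates]
            # 3. Soft constraints: down-weight candidates violating mined range constraints.
            scored = [(c, s * soft_constraints.compatibility(relation, c)) for c, s in scored]
            # 4. Consistency checking: keep only candidates the KB can accept.
            consistent = [(c, s) for c, s in scored
                          if kb.is_consistent_with((subject, relation, c))]
            return max(consistent, key=lambda cs: cs[1])[0] if consistent else None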

    Effectiveness Analysis of Knowledge Bases

    Knowledge base systems (expert systems) are entering a critical stage as interest spreads from university research to practical applications. If knowledge base systems are to withstand this transition, special attention must be paid to checking their effectiveness. The issue of effectiveness analysis of knowledge base systems has been largely ignored and few works have been published in this field. This dissertation shows how the effectiveness of a knowledge base system can be defined, discussed and analyzed at the knowledge base system level and the knowledge base level. We characterize the effectiveness of a knowledge base system in terms of minimality, termination, completeness and consistency. To address these properties, we propose a general framework for checking them. This framework includes models KBS and KB for knowledge base systems and knowledge bases, respectively. These models provide an environment in which we can discuss and analyze the effectiveness of a knowledge base system. The framework leads to an analysis of rule-set properties and a set of problem formulations for each of minimality, termination, completeness and consistency. In this dissertation, we design a set of algorithms for resolving these problems and report some computational results.
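
    As a toy illustration of one such property check (the rule representation and the notion of redundancy below are assumptions, not the dissertation's formal KBS/KB models), a propositional rule base can be tested for minimality by flagging rules whose removal leaves the derivable facts unchanged:

        # Toy minimality check: a rule is redundant if dropping it does not
        # change the set of derivable facts.
        def closure(facts, rules):
            # Forward chaining; rules are (frozenset_of_premises, conclusion) pairs.
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        def redundant_rules(facts, rules):
            return [r for r in rules
                    if closure(facts, [s for s in rules if s is not r]) == closure(facts, rules)]

        # Each of the last two rules is individually redundant here (either one
        # can be dropped, but not both).
        rules = [(frozenset({"a"}), "b"), (frozenset({"a", "b"}), "c"), (frozenset({"a"}), "c")]
        print(redundant_rules({"a"}, rules))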

    Data-graph repairs: the preferred approach

    Repairing inconsistent knowledge bases is a task that has been addressed, with great advances over several decades, by both the knowledge representation and reasoning and the database theory communities. As information becomes more complex and interconnected, new types of repositories, representation languages and semantics are developed in order to query and reason about it. Graph databases provide an effective way to represent relationships among data, and allow these connections to be processed and queried efficiently. In this work, we focus on the problem of computing preferred (subset and superset) repairs for graph databases with data values, using a notion of consistency based on a set of Reg-GXPath expressions as integrity constraints. Specifically, we study the problem of computing preferred repairs based on two different preference criteria, one based on weights and the other based on multisets, showing that in most cases it is possible to retain the same computational complexity as in the case where no preference criterion is available for exploitation.
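
    A small sketch of the weight-based criterion (the repair representation and the weight function are illustrative assumptions): if every candidate subset-repair is described by the set of edges it deletes from the data-graph, a weight function over edges induces a preference, and the preferred repairs are those of minimal total weight.

        # Illustrative selection of weight-preferred repairs: each candidate
        # repair is the set of edges it deletes; lighter repairs are preferred.
        def preferred_repairs(candidate_repairs, weight):
            costed = [(sum(weight(e) for e in repair), repair) for repair in candidate_repairs]
            best = min(cost for cost, _ in costed)
            return [repair for cost, repair in costed if cost == best]

        repairs = [frozenset({("n1", "knows", "n2")}),
                   frozenset({("n1", "knows", "n2"), ("n2", "age", "17")}),
                   frozenset({("n2", "age", "17")})]
        weights = {("n1", "knows", "n2"): 2, ("n2", "age", "17"): 1}
        print(preferred_repairs(repairs, weights.get))  # the cheapest repair wins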

    Prioritized base Debugging in Description Logics

    The problem investigated is the identification, within an input knowledge base, of axioms which should preferably be discarded (or amended) in order to restore consistency, coherence, or get rid of undesired consequences. Most existing strategies for this task in Description Logics rely on conflicts, either computing all minimal conflicts beforehand, or generating conflicts on demand, using diagnosis. The article studies how prioritized base revision can be effectively applied in the former case. The first main contribution is the observation that, for each axiom appearing in a minimal conflict, two bases can be obtained for a negligible cost, representing what part of the input knowledge must be preserved if this axiom is discarded or retained respectively, and which may serve as a basis to obtain a semantically motivated preference relation over these axioms. The second main contribution is an algorithm which, assuming this preference relation is known, selects some of the maximal consistent/coherent subsets of the input knowledge base accordingly, without the need to compute all of them.
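
    A rough sketch of the selection step (the greedy strategy and data structures below are assumptions for illustration, not the article's algorithm): given the axioms ordered by the preference relation and a consistency oracle, one maximal consistent subset favouring preferred axioms can be built greedily.

        # Greedily build one maximal consistent subset, visiting axioms from
        # most preferred to least preferred.
        def preferred_maximal_consistent_subset(axioms, is_consistent, preference_rank):
            selected = set()
            for axiom in sorted(axioms, key=preference_rank):
                if is_consistent(selected | {axiom}):
                    selected.add(axiom)
            return selected  # no rejected axiom can be added back consistently

    Because consistency only shrinks as axioms are added, every axiom rejected along the way stays inadmissible, so the returned subset is indeed maximal.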

    Personalizable Knowledge Integration

    Large repositories of data are used daily as knowledge bases (KBs) feeding computer systems that support decision making processes, such as in medical or financial applications. Unfortunately, the larger a KB is, the harder it is to ensure its consistency and completeness. The problem of handling KBs of this kind has been studied in the AI and database communities, but most approaches focus on computing answers locally to the KB, assuming there is some single, epistemically correct solution. It is important to recognize that for some applications, as part of the decision making process, users consider far more knowledge than that which is contained in the knowledge base, and that sometimes inconsistent data may help in directing reasoning; for instance, inconsistency in taxpayer records can serve as evidence of possible fraud. Thus, the handling of this type of data needs to be context-sensitive, creating a synergy with the user in order to build useful, flexible data management systems. Inconsistent and incomplete information is ubiquitous and presents a substantial problem when trying to reason about the data: how can we derive an adequate model of the world, from the point of view of a given user, from a KB that may be inconsistent or incomplete? In this thesis we argue that in many cases users need to bring their application-specific knowledge to bear in order to inform the data management process. Therefore, we provide different approaches to handle, in a personalized fashion, some of the most common issues that arise in knowledge management. Specifically, we focus on (1) inconsistency management in relational databases, general knowledge bases, and a special kind of knowledge base designed for news reports; (2) management of incomplete information in the form of different types of null values; and (3) answering queries in the presence of uncertain schema matchings. We allow users to define policies to manage both inconsistent and incomplete information in their application, in a way that takes into account both their knowledge of the problem and their attitude to error/risk. Using the frameworks and tools proposed here, users can specify when and how they want to manage/solve the issues that arise due to inconsistency and incompleteness in their data, in the way that best suits their needs.
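
    A toy sketch of what a user-defined policy could look like (the policy interface is an assumption made here for illustration): conflicting values for an attribute are resolved, or deliberately kept, according to a rule the user supplies per attribute.

        # Per-attribute conflict policies supplied by the user: each policy decides
        # what to do with a set of conflicting values reported by different sources.
        def resolve(conflicts, policies):
            resolved = {}
            for attribute, values in conflicts.items():
                policy = policies.get(attribute, lambda vs: None)  # default: leave unresolved
                resolved[attribute] = policy(values) if len(values) > 1 else next(iter(values))
            return resolved

        # An auditor may keep conflicting "income" values visible (possible fraud
        # evidence) while simply taking the maximum of conflicting "dependents".
        policies = {"income": lambda vs: sorted(vs), "dependents": lambda vs: max(vs)}
        print(resolve({"income": {50000, 120000}, "dependents": {2, 3}}, policies))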

    On the cost-complexity of multi-context systems

    Multi-context systems provide a powerful framework for modelling information-aggregation systems featuring heterogeneous reasoning components. Their execution can, however, incur non-negligible cost. Here, we focus on the cost-complexity of such systems. To that end, we introduce cost-aware multi-context systems, an extension of the non-monotonic multi-context systems framework that takes into account the costs incurred by executing the semantic operators of the individual contexts. We formulate the notion of cost-complexity for consistency and reasoning problems in MCSs. Subsequently, we provide a series of results for increasingly constrained classes of MCSs, and finally introduce an incremental cost-reducing algorithm solving the reasoning problem for definite MCSs.
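
    A minimal sketch conveying the flavour of such an algorithm (the MCS encoding and the cost model are assumptions): for a definite, monotone system the equilibrium can be computed by iterating the contexts' operators to a fixpoint while summing the cost charged for each operator application.

        # Fixpoint computation for a toy "definite" (monotone) multi-context system:
        # each context maps the current belief set to new beliefs, and every
        # application of a context's operator is charged a cost.
        def equilibrium_with_cost(contexts, operator_cost):
            beliefs, total_cost = set(), 0
            changed = True
            while changed:
                changed = False
                for i, apply_context in enumerate(contexts):
                    new = apply_context(beliefs)
                    total_cost += operator_cost(i)
                    if not new <= beliefs:
                        beliefs |= new
                        changed = True
            return beliefs, total_cost

        # Two toy contexts exchanging atoms through bridge-rule-like functions.
        c1 = lambda b: {"p"} | ({"r"} if "q" in b else set())
        c2 = lambda b: {"q"} if "p" in b else set()
        print(equilibrium_with_cost([c1, c2], lambda i: 1))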

    Conjunctive Query Answering for the Description Logic SHIQ

    Conjunctive queries play an important role as an expressive query language for Description Logics (DLs). Although modern DLs usually provide for transitive roles, conjunctive query answering over DL knowledge bases is only poorly understood if transitive roles are admitted in the query. In this paper, we consider unions of conjunctive queries over knowledge bases formulated in the prominent DL SHIQ and allow transitive roles in both the query and the knowledge base. We show decidability of query answering in this setting and establish two tight complexity bounds: regarding combined complexity, we prove that there is a deterministic algorithm for query answering that needs time single exponential in the size of the KB and double exponential in the size of the query, which is optimal. Regarding data complexity, we prove containment in co-NP.
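
    Expressed as a formula (with $p$ and $q$ placeholder polynomials used only to convey the shape of the bound), the combined-complexity result states that query answering can be decided in deterministic time
    \[
    2^{\,p(|\mathcal{K}|)} \cdot 2^{\,2^{\,q(|Q|)}}
    \]
    for a knowledge base $\mathcal{K}$ and a union of conjunctive queries $Q$, i.e. single exponential in the size of the KB and double exponential in the size of the query; for data complexity only the ABox counts as input, and the problem lies in co-NP.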