Minimizing conservativity violations in ontology alignments: algorithms and evaluation
To enable interoperability between ontology-based systems, ontology matching techniques have been proposed. However, when the generated mappings lead to undesired logical consequences, their usefulness may be diminished. In this paper, we present an approach to detect and minimize violations of the so-called conservativity principle, under which novel subsumption entailments between named concepts in one of the input ontologies are considered unwanted. The practical applicability of the proposed approach is experimentally demonstrated on datasets from the Ontology Alignment Evaluation Initiative.
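The detection step can be illustrated compactly: classify an input ontology alone and the two ontologies merged with the alignment, then flag subsumptions between named concepts that appear only in the merged case. Below is a minimal Python sketch, assuming the entailed subsumptions are already available as sets of (sub, super) pairs; the names onto_subs, aligned_subs, and named_concepts are illustrative, not from the paper:

    def conservativity_violations(onto_subs, aligned_subs, named_concepts):
        """Subsumptions between named concepts of one input ontology that are
        entailed only once the alignment is added ('novel' entailments, i.e.,
        candidate conservativity violations)."""
        return {
            (sub, sup)
            for (sub, sup) in aligned_subs
            if sub in named_concepts and sup in named_concepts
            and sub != sup
            and (sub, sup) not in onto_subs
        }

Minimizing the violations then amounts to repairing the alignment so that this set becomes empty, or as small as possible.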
Understanding the QuickXPlain Algorithm: Simple Explanation and Formal Proof
In his seminal paper of 2004, Ulrich Junker proposed the QuickXPlain algorithm, which provides a divide-and-conquer strategy for finding, within a given set, an irreducible subset with a particular (monotone) property. Besides its original application in the domain of constraint satisfaction problems, the algorithm has since found widespread adoption in areas as different as model-based diagnosis, recommender systems, verification, and the Semantic Web. This popularity is due to the frequent occurrence of the problem of finding irreducible subsets on the one hand, and to QuickXPlain's general applicability and favorable computational complexity on the other. However, although, as we regularly experience, people have a hard time understanding QuickXPlain and seeing why it works correctly, a proof of correctness of the algorithm has never been published. We address this gap in this work by explaining QuickXPlain in a novel, tried-and-tested way and by presenting an intelligible formal proof of it. Apart from showing the correctness of the algorithm and excluding the later detection of errors (proof and trust effect), the added value of a formal proof is, e.g., (i) that the workings of the algorithm often become completely clear only after studying, verifying, and comprehending the proof (didactic effect), (ii) that the shown proof methodology can be used as guidance for proving other recursive algorithms (transfer effect), and (iii) the possibility of providing "gapless" correctness proofs of systems that rely on (results computed by) QuickXPlain, such as numerous model-based debuggers (completeness effect).
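For readers who want the algorithm at hand, here is a minimal Python sketch of QuickXPlain's divide-and-conquer scheme. It assumes only a monotone predicate has_property over subsets; the names and the list-based encoding are illustrative simplifications rather than Junker's original pseudocode:

    def quickxplain(background, constraints, has_property):
        """Return an irreducible subset of constraints that, together with
        background, satisfies the monotone property (None if none exists)."""
        if not has_property(background + constraints):
            return None        # property unreachable even with all constraints
        if has_property(background):
            return []          # background alone already suffices
        return _qx(background, [], constraints, has_property)

    def _qx(background, delta, constraints, has_property):
        # If the part added in the last split already suffices,
        # nothing from the current constraints is needed.
        if delta and has_property(background):
            return []
        if len(constraints) == 1:
            return list(constraints)
        k = len(constraints) // 2
        c1, c2 = constraints[:k], constraints[k:]
        d2 = _qx(background + c1, c1, c2, has_property)
        d1 = _qx(background + d2, d2, c1, has_property)
        return d1 + d2

For instance, with the monotone property has_property = lambda s: sum(s) >= 8 and constraints [5, 3, 2, 1], the sketch returns [5, 3]: an irreducible subset reaching the threshold.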
Alignment Incoherence in Ontology Matching
Ontology matching is the process of generating alignments between ontologies. An alignment is a set of correspondences, each of which links concepts and properties from one ontology to concepts and properties from another ontology. Alignments are thus the key component for enabling the integration of knowledge bases described by different ontologies. For several reasons, alignments often contain erroneous correspondences. Some of these errors can result in logical conflicts with other correspondences; in such a case the alignment is referred to as an incoherent alignment.
The relevance of alignment incoherence and strategies for resolving it are at the center of this thesis. After an introduction to the syntax and semantics of ontologies and alignments, the importance of alignment coherence is discussed from different perspectives. On the one hand, it is argued that alignment incoherence always coincides with the incorrectness of correspondences. On the other hand, it is demonstrated that the use of incoherent alignments results in severe problems for different types of applications.
The main part of this thesis is concerned with techniques for resolving alignment incoherence, i.e., with finding a coherent subset of an incoherent alignment that is to be preferred over other coherent subsets. The underlying theory is the theory of diagnosis. In particular, two specific types of diagnoses, referred to as local optimal and global optimal diagnosis, are proposed. Computing a diagnosis is challenging for two reasons. First, different types of reasoning techniques are required to determine that an alignment is incoherent and to find the subsets (conflict sets) that cause the incoherence. Second, given a set of conflict sets, computing a global optimal diagnosis is a hard problem. In this thesis, several algorithms are suggested to solve these problems efficiently.
In the last part of this thesis, the previously developed algorithms are applied to the scenarios of
- evaluating alignments by computing their degree of incoherence;
- repairing incoherent alignments by computing different types of diagnoses;
- selecting a coherent alignment from a rich set of matching hypotheses;
- supporting the manual revision of an incoherent alignment.
The discussion of the experimental results makes clear that it is possible to create a coherent alignment without a negative impact on the alignment's quality. Moreover, the results show that taking alignment incoherence into account has a positive impact on the precision of the alignment and that the proposed approach can help a human save effort in the revision process.
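As a concrete illustration of the diagnosis idea, the following toy Python sketch performs a greedy, hitting-set-style repair, assuming the conflict sets and per-correspondence confidence values have already been computed. It is only loosely inspired by the local optimal diagnosis, and all names are illustrative:

    def greedy_diagnosis(conflict_sets, confidence):
        """Remove one correspondence from every given conflict set, preferring
        low-confidence ones; the removed set hits every given conflict set
        (no optimality guarantee)."""
        diagnosis = set()
        for conflict in conflict_sets:
            if diagnosis & set(conflict):
                continue  # conflict already resolved by an earlier removal
            diagnosis.add(min(conflict, key=lambda c: confidence[c]))
        return diagnosis

Removing the returned correspondences resolves every known conflict; in practice, reasoning, conflict computation, and repair are interleaved, as the thesis discusses.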
Reasoning-Supported Quality Assurance for Knowledge Bases
The increasing application of ontology reuse and automated knowledge acquisition tools in ontology engineering brings about a shift of development effort from knowledge modeling towards quality assurance. Despite its high practical importance, support for ensuring semantic accuracy and conciseness has been substantially lacking. In this thesis, we take a significant step forward in ontology engineering by developing support for two such essential quality assurance activities.
Efficient Maximum A-Posteriori Inference in Markov Logic and Application in Description Logics
The maximum a-posteriori (MAP) query in statistical relational models computes the most probable world given evidence and further knowledge about the domain. It is arguably one of the most important types of computational problems, since it is also used as a subroutine in weight learning algorithms. In this thesis, we discuss an improved inference algorithm and an application for MAP queries. We focus on Markov logic (ML) as the statistical relational formalism. Markov logic combines Markov networks with first-order logic by attaching weights to first-order formulas.
For inference, we improve existing work that translates MAP queries into integer linear programs (ILPs). The motivation is that existing ILP solvers are very stable and fast and can precisely estimate the quality of an intermediate solution. In our work, we focus on improving the translation process so that the resulting ILPs have fewer variables and fewer constraints. Our main contribution is the Cutting Plane Aggregation (CPA) approach, which leverages symmetries in ML networks and parallelizes MAP inference. Additionally, we integrate the cutting plane inference algorithm (Riedel 2008), which significantly reduces the number of groundings by solving multiple smaller ILPs instead of one large ILP. We present the new Markov logic engine RockIt, which outperforms state-of-the-art engines on standard Markov logic benchmarks.
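The underlying MAP-to-ILP translation that this line of work improves can be sketched in a few lines. The following Python fragment encodes weighted ground clauses in the standard way, using the pulp library purely for illustration; it shows neither CPA nor cutting plane inference, and all names are assumptions rather than RockIt's API:

    import pulp

    def map_by_ilp(atoms, clauses):
        """MAP over weighted ground clauses, where clauses is a list of
        (weight, [(atom, is_positive), ...]) pairs."""
        prob = pulp.LpProblem("mln_map", pulp.LpMaximize)
        x = {a: pulp.LpVariable(f"x_{a}", cat="Binary") for a in atoms}
        objective = []
        for j, (w, lits) in enumerate(clauses):
            zj = pulp.LpVariable(f"z_{j}", cat="Binary")  # clause-satisfaction indicator
            objective.append(w * zj)
            lit_exprs = [x[a] if pos else 1 - x[a] for a, pos in lits]
            if w > 0:
                # zj may only be 1 if at least one literal is true
                prob += zj <= pulp.lpSum(lit_exprs)
            elif w < 0:
                # zj must be 1 whenever any literal is true
                for e in lit_exprs:
                    prob += zj >= e
        prob += pulp.lpSum(objective)  # maximize total weight of satisfied clauses
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return {a: int(pulp.value(x[a])) for a in atoms}

Every ground clause contributes one variable and at least one constraint, which is exactly why reducing the number of groundings (cutting plane inference) and aggregating symmetric constraints (CPA) pay off.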
Afterwards, we apply the MAP query to description logics. Description logics (DLs) are knowledge representation formalisms whose expressivity is higher than that of propositional logic but lower than that of first-order logic. The most popular DLs have been standardized in the ontology language OWL and are an elementary component of the Semantic Web. We combine Markov logic, which essentially follows the semantics of a log-linear model, with description logics into log-linear description logics, in which weights can be attached to any description logic axiom. Furthermore, we introduce a new query type which computes the most probable 'coherent' world. Possible applications of log-linear description logics lie mainly in the areas of ontology learning and data integration. With our novel log-linear description logic reasoner ELog, we experimentally show that more expressivity increases quality and that the solutions of optimal solving strategies have higher quality than those of approximate solving strategies.
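To illustrate the new query type: under log-linear semantics, the most probable coherent world corresponds to a coherent subset of the weighted axioms with maximum total weight. Here is a deliberately naive brute-force Python sketch (exponential, for illustration only; all names are assumptions, and ELog uses far more sophisticated solving strategies):

    from itertools import combinations

    def most_probable_coherent(axioms, weight, is_coherent):
        """Exhaustively search for the coherent axiom subset that
        maximizes the sum of axiom weights."""
        best, best_w = None, float("-inf")
        for r in range(len(axioms), -1, -1):
            for subset in combinations(axioms, r):
                w = sum(weight[a] for a in subset)
                if w > best_w and is_coherent(subset):
                    best, best_w = list(subset), w
        return best

A real reasoner replaces the exhaustive loop with an optimized search (e.g., ILP-based, as in the first part of the thesis) and delegates the coherence test to a DL reasoner.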