Debugging and repair of description logic ontologies.
Thesis (M.Sc.), University of KwaZulu-Natal, Westville, 2010. In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to
represent knowledge about a particular domain of interest in a precise way. The building
blocks of ontologies include concepts, relations and objects. These can be combined to
form logical sentences which explicitly describe the domain. With this explicit knowledge
one can perform reasoning to derive knowledge that is implicit in the ontology. Description
Logics (DLs) are a group of knowledge representation languages with such capabilities that
are suitable to represent ontologies. The process of building ontologies has been greatly
simplified with the advent of graphical ontology editors such as SWOOP, Protégé and
OntoStudio. The result of this is that there are a growing number of ontology engineers
attempting to build and develop ontologies. It is frequently the case that errors are
introduced while constructing the ontology, resulting in undesirable pieces of implicit knowledge that follow from the ontology. As such, there is a need to extend current
ontology editors with tool support to aid these ontology engineers in correctly designing
and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent
ontologies frequently occur during ontology construction. Ontology Debugging and Repair
is concerned with helping the ontology developer to eliminate these errors from the ontology.
Much emphasis, in current tools, has been placed on giving explanations as to why these
errors occur in the ontology. Less emphasis has been placed on using this information to
suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an
important gap in the area by contributing an alternative approach to ontology debugging
and repair for the more general error of a list of unwanted sentences. Errors such as
unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences
in the ontology. Our approach not only considers the explanation of the unwanted sentences
but also the identification of repair strategies to eliminate these unwanted sentences from the ontology.
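The pipeline sketched in this abstract — first explain why an unwanted sentence follows (its justifications), then repair by breaking every justification — can be illustrated with a small propositional stand-in for a DL ontology. The axiom names, clause encoding, and brute-force entailment check below are illustrative assumptions, not the dissertation's actual algorithms:

```python
from itertools import combinations, product

# Each "axiom" is a propositional clause: a tuple of literals, where a
# leading '-' marks negation.  A toy stand-in for DL axioms.

def satisfiable(clauses):
    """Brute-force truth-table satisfiability check."""
    vars_ = sorted({lit.lstrip('-') for c in clauses for lit in c})
    for bits in product([False, True], repeat=len(vars_)):
        model = dict(zip(vars_, bits))
        if all(any(model[l.lstrip('-')] != l.startswith('-') for l in c)
               for c in clauses):
            return True
    return False

def entails(axioms, negated_goal):
    """axioms |= goal  iff  axioms + (negation of goal) is unsatisfiable."""
    return not satisfiable(list(axioms) + list(negated_goal))

def justifications(axioms, negated_goal):
    """All minimal subsets of axioms entailing the unwanted sentence."""
    found = []
    for k in range(1, len(axioms) + 1):
        for subset in combinations(axioms, k):
            if entails(subset, negated_goal) and \
               not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

# Unwanted sentence q follows from A1: p and A2: p -> q, but not from A3: r.
A1, A2, A3 = ('p',), ('-p', 'q'), ('r',)
js = justifications([A1, A2, A3], [('-q',)])   # [(A1, A2)]
```

A repair strategy then removes at least one axiom from every justification (a minimal hitting set); here deleting either A1 or A2 eliminates the unwanted entailment of q.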
Explanation and diagnosis services for unsatisfiability and inconsistency in description logics
Description Logics (DLs) are a family of knowledge representation formalisms with formal semantics and well understood computational complexities. In recent years, they have found applications in many domains, including domain modeling, software engineering, configuration, and the Semantic Web. DLs have deeply influenced the design and standardization of the Web Ontology Language OWL. The acceptance of OWL as a web standard has reciprocally resulted in the widespread use of DL ontologies on the web. As more applications emerge with increasing complexity, non-standard reasoning services, such as explanation and diagnosis, have become important capabilities that a DL reasoner should provide. For example, unsatisfiability and inconsistency may arise in an ontology due to unintentional design defects or changes in the ontology evolution process. Without explanations, searching for the cause is like looking for a needle in a haystack. It is, therefore, surprising that most of the existing DL reasoners do not provide explanation services; they provide "Yes/No" answers to satisfiability or consistency queries without giving any reasons. This thesis presents our solution for providing explanation and diagnosis services for DL reasoners. We first propose a resolution-based framework to explain inconsistency and unsatisfiability in Description Logics. A sound and complete algorithm is developed to generate explanations for the DL language ALCHI based on the unsatisfiability and inconsistency patterns in ALCHI. We also develop a technique based on Shapley values to measure inconsistencies in ontologies for diagnosis purposes. This measure is used to identify which axioms in an input ontology, or which parts of these axioms, need to be repaired in order to make the input consistent. We also investigate optimization techniques to compute the inconsistency measures based on particular properties of DLs.
Based on the above theoretical foundations, a running prototype system is implemented to evaluate the practicability of the proposed services. Our preliminary empirical results show that the resolution-based explanation framework and the diagnosis procedure based on inconsistency measures can be applied in real-world applications.
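The Shapley-value idea can be sketched in a few lines: treat "is inconsistent" as a coalitional game over axioms and score each axiom by its average marginal contribution to inconsistency. The predicate and axiom names below are invented for illustration; the thesis applies the measure to DL ontologies and studies optimizations, none of which appear here:

```python
from itertools import combinations
from math import factorial

def shapley_inconsistency(axioms, inconsistent):
    """Shapley value of each axiom for the game v(S) = 1 iff S is inconsistent.

    `inconsistent` is a predicate on sets of axioms.  Exponential in the
    number of axioms; practical use needs the thesis's optimizations.
    """
    n = len(axioms)
    scores = {a: 0.0 for a in axioms}
    for a in axioms:
        others = [b for b in axioms if b != a]
        for k in range(n):
            # Standard Shapley weight for coalitions of size k.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = set(S)
                scores[a] += weight * (inconsistent(S | {a}) - inconsistent(S))
    return scores

# Toy example: axioms 'p' and '-p' clash; 'q' is blameless.
clash = lambda S: int('p' in S and '-p' in S)
scores = shapley_inconsistency(['p', '-p', 'q'], clash)
# scores: {'p': 0.5, '-p': 0.5, 'q': 0.0}
```

The two clashing axioms share the blame equally and the innocent axiom scores zero, which is exactly the signal a diagnosis service needs to decide which axioms to repair.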
A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge
We present the architecture and the evaluation of a new system for
recognizing textual entailment (RTE). In RTE we want to automatically identify the type of logical relation between two input texts. In particular, we are
interested in proving the existence of an entailment between them. We conceive
our system as a modular environment allowing for a high-coverage syntactic and
semantic text analysis combined with logical inference. For the syntactic and
semantic analysis we combine a deep semantic analysis with a shallow one
supported by statistical models in order to increase the quality and the
accuracy of results. For RTE we use first-order logical inference employing
model-theoretic techniques and automated reasoning tools. The inference is
supported with problem-relevant background knowledge extracted automatically
and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, more experimental sources, e.g., manually defined presupposition resolutions or axiomatized general and common-sense knowledge. The
results show that fine-grained and consistent knowledge coming from diverse
sources is a necessary condition determining the correctness and traceability
of results. Comment: 25 pages, 10 figures
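The core RTE decision — does the text, together with background knowledge, entail the hypothesis? — can be caricatured with propositional Horn rules. The atoms and the WordNet-style hypernym rules below are invented stand-ins, not the system's actual first-order machinery or knowledge sources:

```python
def forward_chain(facts, rules):
    """Least fixpoint of Horn rules (body, head) over a set of atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

# Text: "A cat sat."   Hypothesis: "An animal sat."
text = {"cat", "sat"}
hypothesis = {"animal", "sat"}
background = [(("cat",), "feline"),      # hypernym chain, in the spirit of
              (("feline",), "animal")]   # knowledge mined from WordNet

# Entailed iff every atom of the hypothesis is derivable from text + background.
entailed = hypothesis <= forward_chain(text, background)   # True
```

Without the background rules the hypothesis atom "animal" is underivable, which mirrors the abstract's finding that problem-relevant background knowledge is a necessary condition for correct results.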
Reasoning by analogy in the generation of domain acceptable ontology refinements
Refinements generated for a knowledge base often involve the learning of new knowledge to be added to or replace existing parts of a knowledge base. However, the justifiability of the refinement in the context of the domain (domain acceptability) is often overlooked. The work reported in this paper describes an approach to the generation of domain acceptable refinements for incomplete and incorrect ontology individuals through reasoning by analogy using existing domain knowledge. To illustrate this approach, individuals for refinement are identified during the application of a knowledge-based system, EIRA; when EIRA fails in its task, areas of its domain ontology are identified as requiring refinement. Refinements are subsequently generated by identifying and reasoning with similar individuals from the domain ontology. To evaluate this approach, EIRA has been applied to the Intensive Care Unit (ICU) domain. An evaluation (by a domain expert) of the refinements generated by EIRA has indicated that this approach successfully produces domain acceptable refinements.
Completing and Debugging Ontologies: state of the art and challenges
As semantically enabled applications require high-quality ontologies,
developing and maintaining ontologies that are as correct and complete as
possible is an important although difficult task in ontology engineering. A key
step is ontology debugging and completion. In general, there are two steps:
detecting defects and repairing defects. In this paper we discuss the state of
the art regarding the repairing step. We do this by formalizing the repairing
step as an abduction problem and situating the state of the art with respect to
this framework. We show that there are still many open research problems and point out opportunities for further work and for advancing the field. Comment: 56 pages
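The abduction framing of the repairing step can be made concrete: completion looks for a minimal set of candidate axioms whose addition makes a desired sentence follow while keeping the ontology consistent. The propositional clause encoding and names below are illustrative assumptions, not the survey's DL formalization:

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Brute-force satisfiability of propositional clauses
    (tuples of literals; a leading '-' marks negation)."""
    vars_ = sorted({l.lstrip('-') for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        model = dict(zip(vars_, bits))
        if all(any(model[l.lstrip('-')] != l.startswith('-') for l in c)
               for c in clauses):
            return True
    return False

def abductive_completions(theory, candidates, negated_goal):
    """Minimal candidate sets A such that theory + A is consistent and
    theory + A entails the goal (theory + A + not-goal is unsatisfiable)."""
    solutions = []
    for k in range(len(candidates) + 1):
        for A in combinations(candidates, k):
            extended = list(theory) + list(A)
            if satisfiable(extended) \
               and not satisfiable(extended + list(negated_goal)) \
               and not any(set(s) <= set(A) for s in solutions):
                solutions.append(A)
    return solutions

# Theory p -> q does not yet entail q; adding either p or q completes it.
theory = [('-p', 'q')]
sols = abductive_completions(theory, [('p',), ('q',)], [('-q',)])
# sols: [(('p',),), (('q',),)]
```

The consistency side-condition is what distinguishes a genuine completion from a trivial one: an addition that makes the ontology inconsistent would entail the goal vacuously, and the survey's open problems largely concern how to rank and compute such repairs at DL scale.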