
    Instance-level matching

    This paper describes an ontology matching technique based on the extensional definition of a class as a set of instances. It first provides a general characterisation of such techniques and, in particular, of the need to rely on links across data sets in order to compare instances. It then details the implication intensity measure that has been chosen. The resulting algorithm is implemented and evaluated on XLore, DBPedia, LinkedGeoData and Geospecies.
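
    The abstract does not spell out the exact formulation of the measure, so the Python sketch below only illustrates the general idea: two class extensions, already aligned through cross-data-set links (e.g. owl:sameAs), are compared with a Gras-style implication intensity. The class names, toy data and formula details are assumptions for illustration, not the paper's implementation.

        # Illustrative instance-based matching of two classes via their (linked) extensions.
        from math import erf, sqrt

        def implication_intensity(ext_a, ext_b, n_total):
            """Intensity of the rule 'x in A -> x in B' over n_total linked instances:
            compare the observed number of counterexamples (members of A outside B)
            with its expectation under independence."""
            n_a = len(ext_a)
            n_not_b = n_total - len(ext_b)
            counterexamples = len(ext_a - ext_b)
            expected = n_a * n_not_b / n_total
            if expected == 0:
                return 1.0 if counterexamples == 0 else 0.0
            q = (counterexamples - expected) / sqrt(expected)
            return 1.0 - 0.5 * (1.0 + erf(q / sqrt(2.0)))  # 1 - standard normal CDF

        # Hypothetical extensions after resolving cross-data-set instance links.
        universe = {f"i{k}" for k in range(100)}
        kb1_city = {f"i{k}" for k in range(30)}
        kb2_populated_place = {f"i{k}" for k in range(40)}
        print(implication_intensity(kb1_city, kb2_populated_place, len(universe)))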

    A Semantic Framework for the Analysis of Privacy Policies


    Reasoning about river basins: WaWO+ revisited

    This paper characterizes part of an interdisciplinary research effort on Artificial Intelligence (AI) techniques and tools applied to Environmental Decision-Support Systems (EDSS). WaWO+, the ontology presented here, provides a set of concepts that are queried, advertised and used to support reasoning about, and the management of, urban water resources in complex scenarios such as a river basin. The goal of this research is to increase the efficiency of data and knowledge interoperability and of data integration among heterogeneous environmental data sources (e.g., software agents), using an explicit, machine-understandable ontology to facilitate urban water resources management within a river basin.
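
    As a rough illustration of how such an ontology could be queried from code (not taken from the WaWO+ tooling; the file name, namespace and concept names below are invented), one might load it with rdflib and ask for instances of a water-resource concept:

        # Hypothetical query against a local copy of the ontology using rdflib.
        from rdflib import Graph

        g = Graph()
        g.parse("wawo_plus.owl", format="xml")  # invented file name; assumes RDF/XML serialization

        results = g.query("""
            PREFIX wawo: <http://example.org/wawo#>
            SELECT ?resource ?quality WHERE {
                ?resource a wawo:WaterResource ;
                          wawo:hasQuality ?quality .
            }
        """)
        for row in results:
            print(row.resource, row.quality)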

    A survey of qualitative spatial representations

    Representation and reasoning with qualitative spatial relations is an important problem in artificial intelligence, with wide applications in geographic information systems, computer vision, autonomous robot navigation, natural language understanding, spatial databases and so on. The reasons for this interest in using qualitative spatial relations include cognitive comprehensibility, efficiency and computational facility. This paper summarizes progress in qualitative spatial representation by describing key calculi representing different types of spatial relationships. The paper concludes with a discussion of current research and a glimpse of future work.
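
    To make the kind of calculus such a survey covers concrete (this is a generic illustration, not material from the paper), the sketch below encodes a small fragment of the RCC-8 composition table and uses it to constrain the relation between two regions; only a few well-known entries are included and the region names are invented.

        # A fragment of the RCC-8 composition table for qualitative spatial reasoning.
        # Relations: DC (disconnected), EC (externally connected), PO (partial overlap),
        # TPP/NTPP ((non-)tangential proper part), EQ (equal).
        COMPOSITION = {
            ("NTPP", "NTPP"): {"NTPP"},            # inside of inside is inside
            ("TPP", "NTPP"): {"NTPP"},
            ("NTPP", "EQ"): {"NTPP"},
            ("DC", "NTPP"): {"DC", "EC", "PO", "TPP", "NTPP"},  # weak constraint
        }

        def compose(r1, r2):
            """Possible relations between A and C, given A r1 B and B r2 C (fragment only)."""
            return COMPOSITION.get((r1, r2))

        # Example: room NTPP building and building NTPP campus  =>  room NTPP campus.
        print(compose("NTPP", "NTPP"))   # {'NTPP'}
        print(compose("DC", "NTPP"))     # several possibilities remain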

    The Application of a Semantic-Based Process Mining Framework on a Learning Process Domain

    Process mining (PM) combines techniques from computational intelligence, which has lately been considered to encompass artificial intelligence (AI) and even augmented intelligence systems, and from data mining (DM) with process modelling in order to analyze event logs. To this end, this paper presents a semantic-based process mining framework (SPMaAF) that exhibits a high level of accuracy and conceptual reasoning capability, particularly when applied in real-world settings. The proposed framework supports the extraction, semantic preparation and transformation of event logs from any domain process into minable, executable formats, with a focus on the further discovery, monitoring and improvement of the extracted processes through semantic-based analysis of the discovered models. The implementation of the proposed framework demonstrates the main contribution of this paper: a semantic-fuzzy mining approach that makes use of labels (i.e. concepts) within event logs about a domain process, using a case study of a learning process. The paper provides a method that allows mining and improved analysis of the resulting process models through semantic labelling (annotation), representation (ontology) and reasoning (reasoner). The series of experiments and the semantically motivated algorithms show that the proposed framework, applied in real-world settings, is capable of lifting PM results from the syntactic level to much more abstract, conceptual levels.
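
    A minimal sketch (Python) of the kind of semantic lifting described above: raw event labels are annotated with ontology concepts and each trace is rewritten at a more abstract level before mining. The concept map, labels and log are invented for illustration and are not part of SPMaAF.

        # Illustrative semantic lifting of a learning-process event log before mining.
        # A flat label -> concept map stands in for the annotation step; a real framework
        # would resolve concepts against an ontology with a reasoner.
        from collections import Counter

        CONCEPT_OF = {
            "open_video_lecture": "StudyActivity",
            "read_lecture_notes": "StudyActivity",
            "submit_quiz": "AssessmentActivity",
            "submit_assignment": "AssessmentActivity",
            "post_forum_question": "CollaborationActivity",
        }

        def lift(trace):
            """Rewrite a trace of raw labels as a trace of concepts, collapsing
            consecutive events that map to the same concept."""
            lifted = []
            for event in trace:
                concept = CONCEPT_OF.get(event, "OtherActivity")
                if not lifted or lifted[-1] != concept:
                    lifted.append(concept)
            return lifted

        log = [
            ["open_video_lecture", "read_lecture_notes", "submit_quiz"],
            ["read_lecture_notes", "post_forum_question", "submit_assignment"],
        ]
        print(Counter(tuple(lift(t)) for t in log))  # abstract trace variants with frequencies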

    A Language for Inconsistency-Tolerant Ontology Mapping

    Ontology alignment plays a key role in enabling interoperability among the various data sources present on the web. The nature of the world is such that the same concepts often differ in meaning, sometimes only slightly, which makes it difficult to relate them. This omnipresent heterogeneity is at the core of the web. The research presented in this dissertation is driven by the goal of providing a robust ontology alignment language for the semantic web, as we show that description-logic-based alignment languages are not suitable for aligning ontologies. The adoption of semantic web technologies has been consistently on the rise over the past decade, and it continues to show promise. The core component of the semantic web is its set of knowledge representation languages, mainly the W3C (World Wide Web Consortium) standards: the Web Ontology Language (OWL), the Resource Description Framework (RDF) and the Rule Interchange Format (RIF). While these languages have been designed to suit the openness and extensibility of the web, they lack certain features that we try to address in this dissertation.

    One such missing component is non-monotonic features in the knowledge representation languages that enable common-sense reasoning. For example, OWL adopts the open world assumption (OWA), which means that knowledge about everything is assumed to be possibly incomplete at any point in time. However, experience has shown that there are situations that require us to assume that certain parts of the knowledge base are complete; employing the closed world assumption (CWA) helps us achieve this. Circumscription is a well-known approach to the CWA, which provides closed-world semantics through the idea of models that are minimal with respect to certain predicates that are closed. We provide the formal semantics of the notion of grounded circumscription, an extension of circumscription with desirable properties such as decidability, and a tableaux calculus to reason over knowledge bases under grounded circumscription.

    Another form of common-sense logic is default logic. Default logic provides a way to specify rules that, by default, hold in most cases but not necessarily in all cases. The classic example of such a rule is: if something is a bird, then it flies. The power of defaults comes from the ability of the logic to handle exceptions to the default rules; for example, a bird is assumed to fly by default unless it is an exception, i.e. it belongs to a class of birds that do not fly, like penguins. Interestingly, this property of defaults can be utilized to create mappings between concepts of different ontologies (knowledge bases). We provide a new semantics for the integration of defaults into description logics and show that it improves upon previously known results in the literature. We give various examples to show the utility and advantages of a default-logic-based ontology alignment language, provide the semantics and decidability results of a default-based mapping language for tractable fragments of description logics (or OWL), and present a proof-of-concept system together with a qualitative analysis of its results compared to those of traditional mapping repair techniques.
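
    The bird/penguin default mentioned above is easy to make concrete. The Python sketch below applies default rules with exception classes over a toy knowledge base, and uses one such default as an ontology mapping; it is an invented illustration in the spirit of the dissertation, not its mapping language.

        # Toy default reasoning: a default fires for an individual unless the individual
        # is known to belong to one of the default's exception classes.
        from dataclasses import dataclass, field

        @dataclass
        class Default:
            prerequisite: str          # class the individual must belong to
            consequent: str            # class concluded by default
            exceptions: set = field(default_factory=set)

        KB = {  # individual -> classes asserted in the knowledge base
            "tweety": {"Bird"},
            "pingu": {"Bird", "Penguin"},
        }

        DEFAULTS = [
            Default("Bird", "CanFly", {"Penguin"}),   # birds fly, unless they are penguins
            Default("Bird", "onto2:Avialae", set()),  # hypothetical default mapping into another ontology
        ]

        def closure(individual):
            """One pass of default application (chained defaults would need a fixpoint loop)."""
            classes = set(KB[individual])
            for d in DEFAULTS:
                if d.prerequisite in classes and not (classes & d.exceptions):
                    classes.add(d.consequent)
            return classes

        print(closure("tweety"))  # gains CanFly and onto2:Avialae
        print(closure("pingu"))   # the Penguin exception blocks CanFly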

    On a notion of abduction and relevance for first-order logic clause sets

    I propose techniques to help explain entailment and non-entailment in first-order logic, relying on deductive and abductive reasoning respectively.

    First, given an unsatisfiable clause set, one can ask which clauses are necessary for any possible deduction (syntactically relevant), usable in some deduction (syntactically semi-relevant), or unusable (syntactically irrelevant). I propose a first-order formalization of this notion and demonstrate a lifting of it to the explanation of an entailment with respect to an axiom set defined in certain description logic fragments. The notion is accompanied by a semantic characterization via conflict literals (contradictory simple facts): from an unsatisfiable clause set, a pair of conflict literals is always deducible. A relevant clause is necessary to derive any conflict literal, a semi-relevant clause is necessary to derive some conflict literal, and an irrelevant clause is not useful in deriving any conflict literal. This provides a picture of why an explanation holds that goes beyond what one can get from the predominant notion of a minimal unsatisfiable set. The need to test whether a clause is (syntactically) semi-relevant leads to a generalization of a well-known resolution strategy: resolution equipped with the set-of-support strategy is refutationally complete on a clause set N with set of support (SOS) M if and only if there is a resolution refutation from N ∪ M using a clause in M. This result non-trivially improves the original formulation.

    Second, abductive reasoning helps find extensions of a knowledge base that entail some missing consequence (called the observation). This is useful not only for repairing incomplete knowledge bases but also for explaining a possibly unexpected observation. I focus in particular on TBox abduction in the description logic EL (still a first-order logic fragment via a model-preserving translation scheme), which is lightweight but prevalent in practice. The solution space can be huge or even infinite, so different minimality notions can help separate the wheat from the chaff. I argue that the existing ones are insufficient and introduce connection minimality. This criterion offers an interpretation of Occam's razor in which hypotheses are accepted only when they help obtain the entailment without arbitrarily using axioms unrelated to the problem at hand. In addition, I provide a first-order technique to compute the connection-minimal hypotheses in a sound and complete way. The key technique relies on prime implicates: while the negation of a single prime implicate can already serve as a first-order hypothesis, a connection-minimal hypothesis that follows the syntactic restrictions of EL (a set of simple concept inclusions) requires a combination of them. Termination is provable by bounding the term depth of the prime implicates and considering only those that are also subset-minimal. I also present an evaluation on ontologies from the medical domain, implementing a prototype with SPASS as a prime implicate generation engine.
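
    A small propositional sketch (Python) of the semi-relevance test described above: to check whether a clause is usable in some refutation, it is placed alone in the set of support and resolution is restricted so that every inference uses an SOS clause. This is a toy saturation loop over ground clauses, not the first-order procedure of the thesis.

        # Toy propositional set-of-support resolution. A clause is a frozenset of literals;
        # a literal is a string, with negation written as a leading '~'.
        def neg(lit):
            return lit[1:] if lit.startswith("~") else "~" + lit

        def resolvents(c1, c2):
            return [frozenset((c1 - {lit}) | (c2 - {neg(lit)})) for lit in c1 if neg(lit) in c2]

        def sos_refutes(others, sos):
            """True iff resolution restricted to the set of support derives the empty clause."""
            usable, sos = set(others), set(sos)
            while True:
                new = set()
                for c1 in sos:
                    for c2 in usable | sos:
                        for r in resolvents(c1, c2):
                            if not r:
                                return True
                            if r not in usable and r not in sos:
                                new.add(r)
                if not new:
                    return False
                sos |= new

        def semi_relevant(clause, clause_set):
            """Is 'clause' usable in some refutation of clause_set?"""
            return sos_refutes([c for c in clause_set if c != clause], [clause])

        # Unsatisfiable toy set: {p}, {~p, q}, {~q}, plus a clause that no refutation needs.
        N = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"}), frozenset({"r", "s"})]
        print(semi_relevant(frozenset({"~p", "q"}), N))  # True: usable in a refutation
        print(semi_relevant(frozenset({"r", "s"}), N))   # False: irrelevant here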