
    Model Checking CTL is Almost Always Inherently Sequential

    The model checking problem for CTL is known to be P-complete (Clarke, Emerson, and Sistla (1986); see Schnoebelen (2002)). We consider fragments of CTL obtained by restricting the use of temporal modalities or the use of negations, restrictions already studied for LTL by Sistla and Clarke (1985) and Markey (2004). For all these fragments, except for the trivial case without any temporal operator, we systematically prove model checking to be either inherently sequential (P-complete) or very efficiently parallelizable (LOGCFL-complete). For most fragments, however, model checking is already P-complete. Hence our results indicate that in most applications, approaching CTL model checking with parallelism will not yield the desired speed-up. We also completely determine the complexity of the model checking problem for all fragments of the extensions ECTL, CTL+, and ECTL+.
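    As background for the abstract above, the classical global model-checking algorithm for CTL labels states bottom-up over the formula's subformulas. The following is a minimal sketch of two such labelling steps (EX and EU) on an invented toy Kripke structure; it illustrates standard textbook machinery, not the fragment-specific constructions of the paper.

```python
# Minimal labelling steps of the classical CTL model-checking algorithm
# on an explicit Kripke structure (toy example, not from the paper).

def check_EX(states, succ, sat_phi):
    """States with at least one successor satisfying phi."""
    return {s for s in states if succ[s] & sat_phi}

def check_EU(states, succ, sat_phi, sat_psi):
    """Least fixed point computing the states satisfying E[phi U psi]."""
    result = set(sat_psi)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in result and s in sat_phi and succ[s] & result:
                result.add(s)
                changed = True
    return result

# Toy Kripke structure: 0 -> 1 -> 2, with a self-loop on 2.
states = {0, 1, 2}
succ = {0: {1}, 1: {2}, 2: {2}}
sat_p = {0, 1}   # states labelled with p
sat_q = {2}      # states labelled with q

print(check_EX(states, succ, sat_q))         # {1, 2}
print(check_EU(states, succ, sat_p, sat_q))  # {0, 1, 2}
```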

    Dynamic Controllability Made Simple

    Simple Temporal Networks with Uncertainty (STNUs) are a well-studied model for representing temporal constraints, where some intervals (contingent links) have an unknown but bounded duration, discovered only during execution. An STNU is dynamically controllable (DC) if there exists a strategy for executing its time-points that satisfies all the constraints, regardless of the actual durations of the contingent links revealed during execution. In this work we present a new system of constraint propagation rules for STNUs which is sound and complete for DC checking. Our system comprises just three rules which, unlike those proposed in all previous works, generate only unconditioned constraints. In particular, after applying our sound rules, the network remains an STNU in all respects. Moreover, our completeness proof is short and non-algorithmic, based on the explicit construction of a valid execution strategy. This is a substantial simplification of the theory underlying all the polynomial-time algorithms for DC checking. Our analysis also shows: (1) the existence of late execution strategies for STNUs, (2) the equivalence of several variants of the notion of DC, and (3) the existence of a fast algorithm for real-time execution of STNUs, which runs in O(KN) total time in a network with K contingent links and N time-points, considerably improving the previous O(N^3)-time bound.
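    For context, the unconditioned constraints mentioned above live in an ordinary Simple Temporal Network (STN): a set of difference constraints t_j - t_i <= w, consistent exactly when the corresponding distance graph has no negative cycle. The sketch below checks STN consistency with Floyd-Warshall on an invented two-point example; it shows the standard STN core that DC-checking rules tighten, not the paper's three-rule system.

```python
import math

# An STN is consistent iff its distance graph has no negative cycle.
# Floyd-Warshall over the constraint edges t_j - t_i <= w(i, j).
def stn_consistent(n, edges):
    """n time-points 0..n-1; edges: dict (i, j) -> w meaning t_j - t_i <= w."""
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        d[i][j] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# t1 - t0 in [1, 3]: encoded as t1 - t0 <= 3 and t0 - t1 <= -1.
print(stn_consistent(2, {(0, 1): 3, (1, 0): -1}))  # True
# Contradictory bounds: t1 - t0 <= 3 together with t1 - t0 >= 4.
print(stn_consistent(2, {(0, 1): 3, (1, 0): -4}))  # False
```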

    Explainable methods for knowledge graph refinement and exploration via symbolic reasoning

    Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans can trust the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG's value and limitations towards downstream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG. Learned rules are then used to accurately infer missing links in the KG. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both KG and text. Extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations.
    Knowledge graphs have many applications in various domains, for example in finance and healthcare. However, knowledge graphs are incomplete and also contain invalid data. High coverage and correctness require new methods for knowledge graph completion and knowledge graph validation. Together, these two tasks are referred to as knowledge graph refinement. An important aspect here is the explainability and comprehensibility of knowledge graph content for users. In applications, user-side exploration of knowledge graphs is moreover of particular importance. Searching and navigating the graph helps users to better understand its content and limitations. Owing to the huge number of existing entities and facts, knowledge graph exploration is a challenge. Taxonomic type systems help here, but are insufficient for deeper exploration. This dissertation addresses the challenges of knowledge graph refinement and knowledge graph exploration through algorithmic inference over the knowledge graph. It extends logical reasoning and combines it with other methods, in particular with neural knowledge graph embeddings and with text mining. These new methods produce outputs with explanations for users. In particular, the dissertation makes the following contributions:
    ‱ For knowledge graph completion, we present ExRuL, a method for revising Horn rules by adding exception conditions to the rule bodies. The revised rules can infer new facts and thereby close gaps in the knowledge graph. Experiments on large knowledge graphs show that this method substantially reduces errors in the inferred facts and provides user-friendly explanations.
    ‱ With RuLES, we present a rule-learning method based on probabilistic representations of missing facts. The approach iteratively extends the rules induced from a knowledge graph by combining neural knowledge graph embeddings with information from text corpora. Rule generation employs new metrics for rule quality. Experiments show that RuLES substantially improves the quality of the learned rules and of their predictions.
    ‱ To support knowledge graph validation, we present ExFaKT, a framework for constructing explanations for fact candidates. Using rules, the method transforms candidates into a set of statements that are easier to find and to validate or refute. The output of ExFaKT is a set of semantic evidence for fact candidates, extracted from text corpora and from the knowledge graph. Experiments show that these transformations significantly improve the yield and quality of the discovered explanations. The generated explanations support both manual knowledge graph validation by curators and automatic validation.
    ‱ To support knowledge graph exploration, we present ExCut, a method for generating informative entity clusters with explanations, using knowledge graph embeddings and automatically induced rules. A cluster explanation consists of a combination of relations among the entities that identifies the cluster. ExCut simultaneously improves cluster quality and cluster explainability by iteratively interleaving the learning of embeddings and rules. Experiments show that ExCut computes clusters of high quality and that the cluster explanations are informative for users.
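    To illustrate the exception-aware rules mentioned above, the sketch below applies a single invented Horn rule with a negated exception atom to a toy KG of triples; the rule, predicates, and entities are hypothetical, not taken from the dissertation.

```python
# One invented exception-aware Horn rule applied to a toy KG of triples:
#   livesIn(X, Z) <- marriedTo(X, Y), livesIn(Y, Z), not researcher(X)
kg = {
    ("anna", "marriedTo", "bob"),
    ("bob", "livesIn", "berlin"),
    ("carl", "marriedTo", "dana"),
    ("dana", "livesIn", "paris"),
    ("carl", "type", "researcher"),  # exception: blocks the rule for carl
}

def infer(kg):
    """Apply the rule once and return the newly inferred triples."""
    inferred = set()
    for (x, p, y) in kg:
        if p != "marriedTo":
            continue
        for (y2, p2, z) in kg:
            if y2 == y and p2 == "livesIn" and (x, "type", "researcher") not in kg:
                inferred.add((x, "livesIn", z))
    return inferred

print(infer(kg))  # {('anna', 'livesIn', 'berlin')}
```

    The exception atom is what keeps the rule from wrongly predicting a shared residence for carl, which is exactly the error-reduction effect the revised rules aim for.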

    Dynamics and Promotion Triads in Meeting Destinations


    Automated Knowledge Base Quality Assessment and Validation based on Evolution Analysis

    In recent years, numerous efforts have been put towards sharing Knowledge Bases (KBs) in the Linked Open Data (LOD) cloud. These KBs are used for various tasks, including performing data analytics and building question answering systems. Such KBs evolve continuously: their data (instances) and schemas can be updated, extended, revised and refactored. However, unlike in more controlled types of knowledge bases, the evolution of KBs exposed in the LOD cloud is usually unrestrained, which may cause the data to suffer from a variety of quality issues, both at a semantic level and at a pragmatic level. This situation negatively affects data stakeholders (consumers, curators, etc.). Data quality is commonly related to the perception of fitness for use, for a certain application or use case. Therefore, ensuring the quality of the data of an evolving knowledge base is vital. Since the data is derived from autonomous, evolving, and increasingly large data providers, manual data curation is impractical, and at the same time continuous automatic assessment of data quality is very challenging. Ensuring the quality of a KB is a non-trivial task, since KBs are based on a combination of structured information supported by models, ontologies, and vocabularies, as well as queryable endpoints, links, and mappings. Thus, in this thesis we explore two main areas in assessing KB quality: (i) quality assessment using KB evolution analysis, and (ii) validation using machine learning models. The evolution of a KB can be analyzed using fine-grained “change” detection at a low level or using the “dynamics” of a dataset at a high level. In this thesis, we present a novel knowledge base quality assessment approach using evolution analysis. The proposed approach uses data profiling on consecutive knowledge base releases to compute quality measures that allow quality issues to be detected.
    However, the first step in building the quality assessment approach was to identify the quality characteristics. Using high-level change detection as measurement functions, we present four quality characteristics: Persistency, Historical Persistency, Consistency and Completeness. The persistency and historical persistency measures concern the degree of change and the lifespan of any entity type. The consistency and completeness measures identify properties with incomplete information and contradictory facts. The approach has been assessed both quantitatively and qualitatively on a series of releases from two knowledge bases: eleven releases of DBpedia and eight releases of 3cixty Nice. However, high-level changes, being coarse-grained, cannot capture all possible quality issues. In this context, we present a validation strategy whose rationale is twofold: first, use manual validation from qualitative analysis to identify the causes of quality issues; then, use RDF data profiling information to generate integrity constraints. The validation approach relies on the idea of inducing RDF shapes by exploiting SHACL constraint components. In particular, this approach learns which integrity constraints can be applied to a large KB through a process of statistical analysis followed by a learning model. We illustrate the performance of our validation approach using five learning models over three sub-tasks, namely minimum cardinality, maximum cardinality, and range constraints. The quality assessment and validation techniques developed in this work are automatic and can be applied to different knowledge bases independently of the domain. Furthermore, the measures are based on simple statistical operations that make the solution both flexible and scalable.
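    A release-to-release check in the spirit of the persistency measure described above can be sketched as follows; the exact metric definitions in the thesis may differ, and the per-type counts below are invented.

```python
# Compare entity counts per type across consecutive KB releases and
# flag types whose instances disappear (a toy persistency-style check).
def persistency_issues(releases):
    """releases: list of dicts mapping entity type -> instance count."""
    issues = []
    for prev, curr in zip(releases, releases[1:]):
        for etype, n_prev in prev.items():
            if curr.get(etype, 0) < n_prev:
                issues.append((etype, n_prev, curr.get(etype, 0)))
    return issues

# Invented counts for three consecutive releases.
dbpedia_like = [
    {"Person": 1000, "Place": 500},
    {"Person": 1100, "Place": 450},   # Place instances dropped
    {"Person": 1150, "Place": 470},
]
print(persistency_issues(dbpedia_like))  # [('Place', 500, 450)]
```

    The same profiling loop generalises to other measures: historical persistency aggregates such drops over the full release history rather than over adjacent pairs.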

    Usage Policies for Decentralised Information Processing

    Owners impose usage restrictions on their information, which can be based, for example, on privacy laws, copyright law or social conventions. Often, information is processed in complex constellations without central control. In this work, we introduce technologies to formally express usage restrictions in a machine-interpretable way as so-called policies, which enable the creation of decentralised systems that provide, consume and process distributed information in compliance with its usage restrictions.

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using the standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations drawn from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies the proposed ideas by implementing an ontology describing the land use ‘residential area’ in the ontology editor ProtĂ©gĂ©. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes.
    Despite current technological limitations, ontologies are a promising way forward for how we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
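    The kind of inference described above can be caricatured with a hand-written rule over low-level feature attributes; the attributes and conditions below are invented, and a real implementation would rely on an OWL reasoner classifying individuals against defined classes rather than on Python conditionals.

```python
# Toy stand-in for ontology classification: derive a high-level class
# from low-level topographic attributes (invented rule and data).
def classify(feature):
    """Classify one feature dict as 'Dwelling' or 'Unclassified'."""
    if (feature["kind"] == "building"
            and feature["adjoins_road"]
            and feature["garden_area"] > 0):
        return "Dwelling"
    return "Unclassified"

features = [
    {"id": "b1", "kind": "building", "adjoins_road": True,  "garden_area": 120.0},
    {"id": "b2", "kind": "building", "adjoins_road": False, "garden_area": 0.0},
]
print([classify(f) for f in features])  # ['Dwelling', 'Unclassified']
```

    In the ontological setting, the same condition would be a defined class whose instances the reasoner discovers automatically, which is what makes the middle-out layering of concepts possible.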

    An Ontology-Based Approach for Closed-Loop Product Lifecycle Management

    The main goal of Product Lifecycle Management (PLM) is the management of all the data associated with a product during its lifecycle. Lifecycle data is generated by the events and actions of various lifecycle agents, which are humans and/or software systems, and is distributed along the product's lifecycle phases: Beginning of Life (BOL), including design and manufacturing; Middle of Life (MOL), including usage and maintenance; and End of Life (EOL), including recycling, disposal and other options. Closed-Loop PLM extends the meaning of PLM in order to close the loop of information among the different lifecycle phases. The idea is that MOL information could be used at the EOL stage to support deciding on the most appropriate EOL option (especially decisions about re-manufacturing and re-use) and, combined with the EOL information, could serve as feedback in the BOL for improving new generations of the product. Several PLM models have been developed utilising various technologies and methods to provide aspects of the Closed-Loop PLM concept. Ontologies are rapidly becoming popular in various research fields, and there is a tendency both to convert existing models into ontology-based models and to create new ontology-based models from scratch. The aim of this dissertation is to bring the advantages and features provided by ontologies into PLM models towards achieving Closed-Loop PLM. Hence, an ontology model of a Product Data and Knowledge Management Semantic Object Model for PLM has been developed. The transformation of the model into an ontology-based one, using Web Ontology Language-Description Logic (OWL-DL), is described in detail. The background and motivation for converting existing PLM models to ontologies are also provided. The new model exploits several OWL-DL capabilities while maintaining the previously achieved characteristics.
    Furthermore, case studies based on various application scenarios are presented. These case studies deal with data integration and interoperability problems, implement a significant number of reasoning capabilities, and highlight the utilisation of the developed model. Moreover, in this work a generic concept has been developed to tackle the treatment of time in PLM models. Time is the only fundamental dimension that exists along the entire life of an artefact, and it affects all artefacts and their qualities. Most commonly in PLM models, time is an attribute of parts such as "activities" and "events", or is a separate part of the model ("four-dimensional models"). In this work the concept is that time should not be one part of the model; rather, it should be the basis of the model, and all other elements should be parts of it. Thus, we introduce the "Duration of Time" concept, according to which all aspects and elements of a model are parts of time. Case studies demonstrate the applicability and advantages of the concept in comparison to existing methodologies.

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
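    The repair idea sketched above can be illustrated on a toy Horn knowledge base: compute the justifications (subset-minimal entailing subsets) of the unwanted consequence and delete one sentence from each; pseudo-contractions and gentle repairs would instead weaken that sentence. Entailment here is toy forward chaining over invented atoms, not an ontology language.

```python
from itertools import combinations

def entails(kb, goal):
    """Toy Horn forward chaining: facts are strings, rules are
    (frozenset_of_body_atoms, head_atom) pairs."""
    facts = {s for s in kb if isinstance(s, str)}
    rules = [s for s in kb if isinstance(s, tuple)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def justifications(kb, goal):
    """All subset-minimal subsets of kb that entail goal (brute force)."""
    kb = list(kb)
    subsets = [set(c) for r in range(len(kb) + 1) for c in combinations(kb, r)]
    entailing = [s for s in subsets if entails(s, goal)]
    return [s for s in entailing if not any(t < s for t in entailing)]

# KB entails the unwanted atom q via a -> b -> q.
kb = {"a", (frozenset({"a"}), "b"), (frozenset({"b"}), "q")}
just = justifications(kb, "q")         # here: the whole kb is the only one
repaired = kb - {next(iter(just[0]))}  # delete one sentence from it
print(entails(kb, "q"), entails(repaired, "q"))  # True False
```

    The brute-force enumeration is exponential and only for illustration; the point is that breaking every justification suffices, and that replacing the deleted sentence with a strictly weaker one is where pseudo-contraction and gentle repair refine the picture.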