
    PriCL: Creating a Precedent: A Framework for Reasoning about Privacy Case Law

    We introduce PriCL: the first framework for expressing and automatically reasoning about privacy case law by means of precedent. PriCL is parametric in an underlying logic for expressing world properties, and it provides support for court decisions, their justification, the circumstances in which the justification applies, as well as court hierarchies. Moreover, the framework offers a tight connection between privacy case law and the notion of norms that underlies existing rule-based privacy research. In terms of automation, we identify the major reasoning tasks for privacy cases, such as deducing legal permissions or extracting norms. For solving these tasks, we provide generic algorithms that have particularly efficient realizations within an expressive underlying logic. Finally, we derive a definition of deducibility based on legal concepts and subsequently propose an equivalent characterization in terms of logic satisfiability.
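
    The final step, characterizing deducibility via satisfiability, rests on a standard logical fact: a set of premises entails a conclusion exactly when the premises together with the negated conclusion are unsatisfiable. The sketch below illustrates only this generic reduction, using a brute-force propositional checker and an invented toy precedent; it is not PriCL's parametric framework or its algorithms.

```python
from itertools import product

# Formulas as nested tuples: ("var", name), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g). Toy encoding for illustration only.

def variables(f):
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, assignment):
    op = f[0]
    if op == "var":
        return assignment[f[1]]
    if op == "not":
        return not evaluate(f[1], assignment)
    if op == "and":
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    if op == "or":
        return evaluate(f[1], assignment) or evaluate(f[2], assignment)
    if op == "implies":
        return (not evaluate(f[1], assignment)) or evaluate(f[2], assignment)
    raise ValueError(f"unknown connective {op}")

def satisfiable(formulas):
    vars_ = sorted(set().union(*(variables(f) for f in formulas)))
    return any(all(evaluate(f, dict(zip(vars_, values))) for f in formulas)
               for values in product([False, True], repeat=len(vars_)))

def entails(premises, conclusion):
    # premises |= conclusion  iff  premises + [not conclusion] is unsatisfiable
    return not satisfiable(list(premises) + [("not", conclusion)])

# Invented toy precedent: sensitive data processed without consent is not permitted.
precedent = ("implies",
             ("and", ("var", "sensitive"), ("not", ("var", "consent"))),
             ("not", ("var", "permitted")))
facts = [("var", "sensitive"), ("not", ("var", "consent"))]
print(entails([precedent] + facts, ("not", ("var", "permitted"))))  # True
```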

    Discovering Implicational Knowledge in Wikidata

    Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Examples include the proprietary knowledge graphs of companies such as Google, Facebook, IBM, or Microsoft, but also freely available ones such as YAGO, DBpedia, and Wikidata. A distinguishing feature of Wikidata is that the knowledge is collaboratively edited and curated. While this greatly enhances the scope of Wikidata, it also makes it impossible for a single individual to grasp complex connections between properties or understand the global impact of edits in the graph. We apply Formal Concept Analysis to efficiently identify comprehensible implications that are implicitly present in the data. Although the complex structure of data modelling in Wikidata is not amenable to a direct approach, we overcome this limitation by extracting contextual representations of parts of Wikidata in a systematic fashion. We demonstrate the practical feasibility of our approach through several experiments and show that the results may lead to the discovery of interesting implicational knowledge. Besides providing a method for obtaining large real-world data sets for FCA, we sketch potential applications in offering semantic assistance for editing and curating Wikidata.
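
    As a rough illustration of what implicational knowledge in a formal context means (the items and properties below are invented, not extracted from Wikidata, and the paper's actual pipeline constructs contexts from Wikidata systematically and uses FCA algorithms rather than brute force), a context can be stored as an object-to-attribute map and the valid single-premise implications enumerated directly:

```python
# Toy formal context: objects (items) mapped to the attributes (properties) they have.
# Items and properties are invented for illustration only.
context = {
    "Douglas Adams": {"human", "author", "occupation"},
    "Marie Curie":   {"human", "scientist", "occupation", "award received"},
    "Ada Lovelace":  {"human", "scientist", "occupation"},
    "Mount Everest": {"mountain", "elevation"},
}

attributes = set().union(*context.values())

def holds(premise, conclusion):
    """An implication 'premise -> conclusion' holds in the context if every
    object having all premise attributes also has the conclusion attribute."""
    return all(conclusion in attrs
               for attrs in context.values()
               if premise <= attrs)

# Enumerate all single-premise implications valid in this toy context.
for a in sorted(attributes):
    for b in sorted(attributes):
        if a != b and holds({a}, b):
            print(f"{a} -> {b}")   # e.g. "scientist -> human"
```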

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic assumptions, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences, but also put such systems on a weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together recent paradigms from logic, probabilistic inference, and database theory.
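
    To make the tuple-independence assumption concrete (a generic textbook illustration, not one of the thesis's enhanced models, which are designed precisely to relax such assumptions; the relation and its probabilities are hypothetical), each tuple is treated as an independent Bernoulli event, so the probability that an existential query has some satisfying tuple is one minus the product of the tuples' failure probabilities:

```python
from math import prod

# Hypothetical tuple-independent probabilistic relation Lives(person, city),
# each tuple annotated with its marginal probability.
lives = [
    ("alice", "berlin", 0.9),
    ("bob",   "berlin", 0.6),
    ("carol", "paris",  0.7),
]

def prob_someone_lives_in(city):
    """P(exists person: Lives(person, city)) under tuple-independence:
    one minus the probability that every matching tuple is absent."""
    matching = [p for (_, c, p) in lives if c == city]
    return 1.0 - prod(1.0 - p for p in matching)

print(prob_someone_lives_in("berlin"))  # 1 - (1 - 0.9) * (1 - 0.6) = 0.96
```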

    Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis

    Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large parts: there is always a trade-off between expressibility and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that they constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web. Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that allows one to analyze data sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data set in the form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness. In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
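
    For readers unfamiliar with the FCA vocabulary used here (the cross table below is an invented toy example, not one from the thesis), a formal context, its two derivation operators, and the resulting notion of a formal concept can be sketched in a few lines:

```python
# Toy formal context as a set of (object, attribute) incidences; invented example.
incidence = {
    ("duck",   "can fly"), ("duck",   "lays eggs"),
    ("eagle",  "can fly"), ("eagle",  "lays eggs"), ("eagle", "predator"),
    ("python", "lays eggs"), ("python", "predator"),
}
objects    = {g for (g, _) in incidence}
attributes = {m for (_, m) in incidence}

def common_attributes(objs):
    """Derivation A': all attributes shared by every object in objs."""
    return {m for m in attributes if all((g, m) in incidence for g in objs)}

def common_objects(attrs):
    """Derivation B': all objects having every attribute in attrs."""
    return {g for g in objects if all((g, m) in incidence for m in attrs)}

def is_formal_concept(extent, intent):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return common_attributes(extent) == intent and common_objects(intent) == extent

print(is_formal_concept({"duck", "eagle"}, {"can fly", "lays eggs"}))  # True
```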

    Reasoning-Supported Quality Assurance for Knowledge Bases

    The increasing application of ontology reuse and automated knowledge acquisition tools in ontology engineering brings about a shift of development efforts from knowledge modeling towards quality assurance. Despite its high practical importance, there has been a substantial lack of support for ensuring semantic accuracy and conciseness. In this thesis, we make a significant step forward in ontology engineering by developing support for two such essential quality assurance activities.

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Learning Possibilistic Logic Theories

    We address the problem of learning interpretable machine learning models from uncertain and missing information. We first develop a novel deep learning architecture, named RIDDLE (Rule InDuction with Deep LEarning), based on properties of possibility theory. Experimental results and a comparison with FURIA, an existing state-of-the-art rule induction method, show that RIDDLE is a promising rule induction algorithm for finding rules in data. We then formally investigate the learning task of identifying rules with confidence degrees associated with them in the exact learning model. We formally define theoretical frameworks and show conditions that must hold to guarantee that a learning algorithm will identify the rules that hold in a domain. Finally, we develop an algorithm that learns rules with associated confidence values in the exact learning model. We also propose a technique to simulate queries in the exact learning model from data. Experiments show encouraging results for learning a set of rules that approximates the rules encoded in the data.
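
    As a loose sketch of what simulating an oracle from data might look like (this is an invented frequency-based proxy, not the thesis's technique, and it does not model possibility or necessity degrees), a query asking whether a rule holds can be answered by checking the rule against the examples and reporting a confidence value:

```python
# Hedged sketch: simulating a query oracle from a data set.
# The data, rules, and confidence measure are invented for illustration only.
data = [
    {"rain": True,  "sprinkler": False, "wet": True},
    {"rain": False, "sprinkler": True,  "wet": True},
    {"rain": False, "sprinkler": False, "wet": False},
    {"rain": True,  "sprinkler": False, "wet": True},
]

def rule_query(body, head):
    """Answer 'does the rule body -> head hold?' from data, together with a
    confidence degree: the fraction of examples matching the body that also
    satisfy the head (1.0 if no example matches the body)."""
    matching = [row for row in data if all(row[a] == v for a, v in body.items())]
    if not matching:
        return True, 1.0
    confidence = sum(row[head] for row in matching) / len(matching)
    return confidence == 1.0, confidence

print(rule_query({"rain": True}, "wet"))        # (True, 1.0)
print(rule_query({"sprinkler": False}, "wet"))  # (False, 0.666...)
```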

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
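
    The contrast between deleting and weakening can be made concrete with a small propositional example (an illustrative sketch with invented clauses; the paper itself works with general logics and ontology axioms, not this toy encoding):

```python
from itertools import product

# Tiny propositional knowledge base over the listed atoms. A clause is written
# as (negative_atoms, positive_atoms), read as: the conjunction of the negative
# atoms implies the disjunction of the positive atoms. Names are invented.
ATOMS = ["bird", "penguin", "flies"]

def models(kb):
    """All truth assignments satisfying every clause in kb."""
    result = []
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(any(not v[a] for a in neg) or any(v[a] for a in pos)
               for neg, pos in kb):
            result.append(v)
    return result

def entails(kb, atom):
    return all(v[atom] for v in models(kb))

kb = [((), ("bird",)),            # bird
      ((), ("penguin",)),         # penguin
      (("bird",), ("flies",))]    # bird -> flies

print(entails(kb, "flies"))       # True: the unwanted consequence follows

# Classical repair: delete "bird -> flies" entirely.
deleted = kb[:2]
# Gentle repair / pseudo-contraction: keep a strictly weaker version instead,
# here "bird and not penguin -> flies".
weakened = kb[:2] + [(("bird",), ("penguin", "flies"))]

print(entails(deleted, "flies"), entails(weakened, "flies"))  # False False
```

    Both repairs remove the unwanted entailment, but the weakened base still records that non-penguin birds fly, so less information is lost.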

    Privacy enhancing technologies: protocol verification, implementation and specification

    In this thesis, we present novel methods for verifying, implementing, and specifying protocols. In particular, we focus on properties modeling data protection and the protection of privacy. In the first part of the thesis, we introduce protocol verification and present a verification model that encompasses so-called Zero-Knowledge (ZK) proofs. ZK proofs are a cryptographic primitive that is particularly suited to hiding information and hence serves the protection of privacy. The model presented here gives a list of criteria that allow verification results to be transferred from the model to the implementation, provided the implementation meets these criteria. In particular, regarding ZK proofs, the criteria are less demanding than those of previous work. The second part of the thesis contributes to the area of protocol implementation: here, ZK proofs are used to improve secure multi-party computations. The third and last part of the thesis explains a novel approach for specifying data protection policies. Instead of relying on existing policies, this approach relies on actual case law. The advantage of relying on case law is that it often introduces a fair balancing of interests, which is typically not made explicit in laws and policies.
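
    To give a flavour of the ZK proofs the thesis builds on, here is a textbook Schnorr-style identification run with deliberately tiny, insecure parameters; it is a generic illustration of the primitive, not one of the protocols verified or implemented in the thesis. The prover convinces the verifier that it knows the discrete logarithm x of y = g^x mod p without revealing x.

```python
import secrets

# Toy, insecure parameters for illustration only (a real deployment would use a
# large prime-order group). p = 2q + 1 with p, q prime, and g = 4 generates the
# subgroup of quadratic residues, which has prime order q.
p, q, g = 467, 233, 4

x = secrets.randbelow(q)      # prover's secret
y = pow(g, x, p)              # public key

# One run of the Schnorr sigma protocol (prover and verifier interleaved).
r = secrets.randbelow(q)      # prover: random nonce
t = pow(g, r, p)              # prover -> verifier: commitment
c = secrets.randbelow(q)      # verifier -> prover: random challenge
s = (r + c * x) % q           # prover -> verifier: response

# Verifier accepts iff g^s == t * y^c (mod p); a single accepted transcript
# reveals nothing about x beyond the fact that the prover knows it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified")
```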