43 research outputs found

    Reasoning about complex agent knowledge - Ontologies, Uncertainty, rules and beyond

    Ph.D. (Doctor of Philosophy)

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
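    The correspondence can be made concrete in a toy setting. Below is a minimal Python sketch (mine, not the paper's: propositional Horn clauses stand in for ontology axioms, and the weakening function is an illustrative placeholder) that removes an unwanted consequence by weakening one sentence of a justification instead of deleting it, in the spirit of a gentle repair:

```python
from itertools import combinations

# Toy propositional Horn setting (my own, standing in for DL axioms):
# a sentence is (premises, conclusion); facts have empty premises.

def entails(kb, goal):
    """Forward chaining: does the knowledge base entail the atom `goal`?"""
    derived = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in kb:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def justifications(kb, goal):
    """All minimal subsets of the KB that still entail `goal` (brute force)."""
    kb, found = list(kb), []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            if not any(set(j) <= set(subset) for j in found) and entails(subset, goal):
                found.append(subset)
    return found

def gentle_repair(kb, goal, weaken):
    """Replace one sentence of a justification with weaker sentences
    (a pseudo-contraction) until `goal` is no longer entailed.
    Terminates only if `weaken` eventually breaks every justification."""
    kb = set(kb)
    while entails(kb, goal):
        victim = next(iter(justifications(kb, goal)[0]))
        kb.remove(victim)
        kb |= weaken(victim)
    return kb

# Example: facts {a}, rule a -> b; retract the consequence b by weakening
# a sentence into a guarded version (adding the underivable premise g)
# instead of deleting it outright.
kb = {(frozenset(), "a"), (frozenset({"a"}), "b")}
repaired = gentle_repair(kb, "b", lambda s: {(s[0] | {"g"}, s[1])})
assert not entails(repaired, "b") and len(repaired) == len(kb)
```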

    BigDipper: A hyperscale BFT system with short-term censorship resistance

    Byzantine-fault-tolerant (BFT) protocols underlie a variety of decentralized applications, including payments, auctions, data feed oracles, and decentralized social networks. In most leader-based BFT protocols, an important property has been missing: short-term censorship resistance of transactions. The protocol should provide inclusion guarantees at the next block height even if the current and future leaders intend to censor. In this paper, we present a BFT system, BigDipper, that achieves censorship resistance while providing fast confirmation for clients and hyperscale throughput. The core idea is to decentralize the inclusion of transactions by allowing every BFT replica to create its own mini-block, and then enforcing their inclusion by the leader. To achieve this, BigDipper is built as a modular system with three components. First, we provide a transaction broadcast protocol that clients use as an interface to obtain a spectrum of probabilistic inclusion guarantees. Next, a distribution of BFT replicas receives the clients' transactions and prepares mini-blocks to send to the data availability (DA) component. The DA component determines the censorship-resistance properties of the whole system. We design three censorship-resistant DA (DA-CR) protocols with distinct properties, captured by three parameters, and demonstrate their trade-offs. The third component interleaves the DA-CR protocols into the consensus path of leader-based BFT protocols, forcing the leader to include all the data from the DA-CR in the BFT block. We demonstrate an integration with the two-phase HotStuff-2 BFT protocol with minimal changes. As a modular system, BigDipper can switch its consensus to other leader-based BFT protocols, including Tendermint.
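    The inclusion-enforcement idea can be sketched in a few lines of Python (all names and the vote check are simplified stand-ins of my own, not BigDipper's actual interfaces): each replica packs its transactions into a mini-block, a DA certificate fixes the set of available mini-blocks, and honest replicas vote for a proposed block only if every certified mini-block appears in it:

```python
from dataclasses import dataclass
from hashlib import sha256

# Illustrative stand-ins for BigDipper's components (names and quorum
# logic are my own simplifications, not the system's actual interfaces).

@dataclass
class MiniBlock:
    replica_id: int
    txs: list[bytes]

    def commitment(self) -> str:
        # Binding digest of this replica's transactions.
        return sha256(self.replica_id.to_bytes(4, "big") + b"".join(self.txs)).hexdigest()

@dataclass(frozen=True)
class DACertificate:
    """Stand-in for the DA-CR layer: the commitments of all mini-blocks
    that a quorum has attested are available."""
    commitments: frozenset

def build_block(minis: list[MiniBlock], censor: frozenset = frozenset()) -> list[MiniBlock]:
    """A (possibly censoring) leader assembles a block from mini-blocks."""
    return [mb for mb in minis if mb.replica_id not in censor]

def replica_votes(block: list[MiniBlock], cert: DACertificate) -> bool:
    """Honest replicas consult the DA certificate before voting: every
    certified mini-block must appear in the proposed block."""
    return cert.commitments <= {mb.commitment() for mb in block}

# Three replicas each contribute a mini-block; a leader that drops
# replica 1's mini-block cannot gather honest votes.
minis = [MiniBlock(i, [f"tx-from-{i}".encode()]) for i in range(3)]
cert = DACertificate(frozenset(mb.commitment() for mb in minis))
assert replica_votes(build_block(minis), cert)
assert not replica_votes(build_block(minis, censor=frozenset({1})), cert)
```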

    Integration of Logic and Probability in Terminological and Inductive Reasoning

    This thesis deals with Statistical Relational Learning (SRL), a research area combining principles and ideas from three important subfields of Artificial Intelligence: machine learning, knowledge representation, and reasoning under uncertainty. Machine learning is the study of systems that improve their behavior over time with experience; the learning process typically involves a search through various generalizations of the examples, in order to discover regularities or classification rules. A wide variety of machine learning techniques have been developed in the past fifty years, most of which used propositional logic as a (limited) representation language. Recently, more expressive knowledge representations have been considered, to cope with a variable number of entities as well as the relationships that hold amongst them. These representations are mostly based on logic, which, however, has limitations when reasoning on uncertain domains. These limitations have been lifted, allowing a multitude of different formalisms combining probabilistic reasoning with logics, databases or logic programming, where probability theory provides a formal basis for reasoning on uncertainty.

    In this thesis we consider in particular the proposals for integrating probability into Logic Programming, since the resulting probabilistic logic programming languages present very interesting computational properties. In Probabilistic Logic Programming, the so-called "distribution semantics" has gained wide popularity. This semantics was introduced for the PRISM language (1995) but is shared by many other languages: Independent Choice Logic, Stochastic Logic Programs, CP-logic, ProbLog and Logic Programs with Annotated Disjunctions (LPADs). A program in one of these languages defines a probability distribution over normal logic programs, called worlds. This distribution is then extended to queries, and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs. The languages following the distribution semantics differ in the way they define the distribution over logic programs.

    The first part of this dissertation presents techniques for learning probabilistic logic programs under the distribution semantics. Two problems are considered: parameter learning and structure learning, that is, the problems of inferring values for the parameters, or both the structure and the parameters, of the program from data. This work contributes an algorithm for parameter learning, EMBLEM, and two algorithms for structure learning (SLIPCASE and SLIPCOVER) of probabilistic logic programs (in particular LPADs). EMBLEM is based on the Expectation Maximization approach and computes the expectations directly on the Binary Decision Diagrams that are built for inference. SLIPCASE performs a beam search in the space of LPADs, while SLIPCOVER performs a beam search in the space of probabilistic clauses and a greedy search in the space of LPADs, improving on SLIPCASE's performance. All learning approaches have been evaluated in several relational real-world domains.

    The second part of the thesis concerns the field of Probabilistic Description Logics, where we consider a logical framework suitable for the Semantic Web. Description Logics (DLs) are a family of formalisms for representing knowledge. Research in the field of knowledge representation and reasoning usually focuses on methods for providing high-level descriptions of the world that can be effectively used to build intelligent applications. Description Logics have been especially effective as the representation language for formal ontologies. Ontologies model a domain with the definition of concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, etc. They should also allow questions to be asked about the concepts and instances described, through inference procedures. Recently, the issue of representing uncertain information in these domains has led to probabilistic extensions of DLs. The contribution of this dissertation is twofold: (1) a new semantics for the Description Logic SHOIN(D), based on the distribution semantics for probabilistic logic programs, which embeds probability; (2) a probabilistic reasoner for computing the probability of queries from uncertain knowledge bases following this semantics. The explanations of queries are encoded in Binary Decision Diagrams, with the same technique employed in the learning systems developed for LPADs. This approach has been evaluated on a real-world probabilistic ontology.
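    Operationally, the distribution semantics is easy to state. The sketch below (plain Python; the two-coin program is a toy example of mine, not one from the thesis) enumerates the worlds of a tiny ProbLog-style program and marginalizes to obtain a query probability:

```python
from itertools import product

# Two-coin toy program under the distribution semantics (my own example):
#   0.6::heads1.  0.5::heads2.
#   win :- heads1.   win :- heads2.
prob_facts = {"heads1": 0.6, "heads2": 0.5}
rules = [({"heads1"}, "win"), ({"heads2"}, "win")]

def entails(facts, goal):
    """Forward chaining over the definite rules of one world."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def query(goal):
    """P(goal) = sum of the probabilities of the worlds entailing it."""
    names = list(prob_facts)
    total = 0.0
    for choices in product([True, False], repeat=len(names)):
        world = {n for n, keep in zip(names, choices) if keep}
        weight = 1.0
        for n, keep in zip(names, choices):
            weight *= prob_facts[n] if keep else 1 - prob_facts[n]
        if entails(world, goal):
            total += weight
    return total

print(query("win"))  # 0.8 = 1 - (1 - 0.6) * (1 - 0.5)
```

    This enumeration is exponential in the number of probabilistic facts; the systems developed in the thesis avoid it by encoding the explanations of a query as Binary Decision Diagrams.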

    The Fourth International VLDB Workshop on Management of Uncertain Data


    Modelling language for biology with applications

    Understanding the links between biological processes at multiple scales, from molecular regulation to populations and evolution, along with their interactions with the environment, is a major challenge in understanding life. Beyond understanding, this is also becoming important in attempts to engineer traits, for example in crops, starting from genetics or from genomes and at different environmental conditions (genotype x environment → trait). As systems become more complex, relying on intuition alone is not enough, and formal modelling becomes necessary for integrating data across different processes and allowing us to test hypotheses. The more complex the systems become, however, the harder the modelling process becomes and the harder the models become to read and write. In particular, intuitive formalisms like Chemical Reaction Networks are not powerful enough to express ideas at higher levels, for example dynamic environments, dynamic state spaces, and abstraction relations between different parts of the model. Other formalisms are more powerful (for example, general-purpose programming languages) but they lack the readability of more domain-specific approaches. The first contribution of this thesis is a modelling language with stochastic semantics, Chromar, that extends the visually intuitive formalism of reactions, in which simple objects, called agents, are extended with attributes. Dynamics are given as stochastic rules that can operate at the level of agents (adding/removing them) or at the level of attributes (updating their values). Chromar further allows the seamless integration of time and state functions with the normal set of expressions – crucial in multi-scale plant models for describing the changing environment and abstractions between scales. This leads to models that are both formal enough for simulation and easy to read and write. The second contribution of this thesis is a whole-life-cycle multi-model of the growth and reproduction of Arabidopsis thaliana, FM-life, expressed in a declarative way in Chromar. It combines phenology models from ecology to time developmental processes and physical development, which allows the model to scale to the population level and address ecological questions under different genotype x environment scenarios. This is a step on the path toward mechanistic links between genotype x environment and higher-level crop traits. Finally, I show a way of using optimal control techniques to engineer plant traits by controlling their growth environmental conditions. In particular, we explore (i) a direct problem, where the control is temperature, assuming homogeneous growth conditions, and (ii) an indirect problem, where the control is the position of the plants, assuming inhomogeneous growth conditions.
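    To give a flavour of the agents-with-attributes style, here is a minimal Gillespie-style simulation in Python (the rules, rates, and environment function are invented for illustration; this is not Chromar syntax and not the FM-life model), in which one rule updates an attribute, another creates a new agent, and a rate depends on a time-varying environment:

```python
import math
import random

# Invented Chromar-flavoured example: Leaf agents carry a `mass`
# attribute; rule 1 updates the attribute, rule 2 creates a new agent,
# and rule 1's rate reads a time function (the changing environment).

random.seed(0)
leaves = [{"mass": 1.0}]          # the agent store: a multiset of Leaf agents
t, t_end = 0.0, 10.0

def temperature(t):
    """Time function standing in for a changing environment."""
    return 20.0 + 5.0 * math.sin(t)

while t < t_end:
    # Rule 1: Leaf{mass = m} --> Leaf{mass = m + 0.1}        @ 0.01 * m * T(t)
    # Rule 2: Leaf{...}      --> Leaf{...}, Leaf{mass = 0.1} @ 0.05
    rates = [0.01 * leaf["mass"] * temperature(t) for leaf in leaves]
    rates += [0.05] * len(leaves)
    total = sum(rates)
    t += random.expovariate(total)      # Gillespie: time to the next event
    pick, acc = random.uniform(0.0, total), 0.0
    for idx, rate in enumerate(rates):  # choose an event proportionally to its rate
        acc += rate
        if pick <= acc:
            break
    if idx < len(leaves):
        leaves[idx]["mass"] += 0.1      # fire rule 1 on the chosen leaf
    else:
        leaves.append({"mass": 0.1})    # fire rule 2: a new leaf appears

print(len(leaves), round(sum(leaf["mass"] for leaf in leaves), 2))
```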

    Automated Reasoning

    This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.

    28th International Symposium on Temporal Representation and Reasoning (TIME 2021)

    The 28th International Symposium on Temporal Representation and Reasoning (TIME 2021) was planned to take place in Klagenfurt, Austria, but had to move to an online conference due to the uncertainties and restrictions caused by the pandemic. Since its first edition in 1994, the TIME Symposium has been quite unique in the panorama of scientific conferences, as its main goal is to bring together researchers from distinct research areas involving the management and representation of temporal data as well as reasoning about temporal aspects of information. Moreover, the TIME Symposium aims to bridge theoretical and applied research, as well as to serve as an interdisciplinary forum for exchange among researchers from the areas of artificial intelligence, database management, logic and verification, and beyond.