
    Perspectives in deductive databases

    I discuss my experiences, some of the work that I have done, and related work that influenced me, concerning deductive databases over the last 30 years. I divide this time period into three roughly equal parts: 1957–1968, 1969–1978, and 1979–present. For the first, I describe how my interest in deductive databases started in 1957, at a time when the field of databases did not even exist. I describe work in the beginning years, leading to the start of deductive databases around 1968 with the work of Cordell Green and Bertram Raphael. The second period saw a great deal of work in theorem proving as well as the introduction of logic programming. The existence and importance of deductive databases as a formal and viable discipline received its impetus at a workshop held in Toulouse, France, in 1977, which culminated in the book Logic and Data Bases. The relationship between deductive databases and logic programming was recognized at that time. During the third period we have seen formal theories of databases come about as an outgrowth of that work, and the recognition that artificial intelligence and deductive databases are closely related, at least through the so-called expert database systems. I expect that the relationships between techniques from formal logic, databases, logic programming, and artificial intelligence will continue to be explored, and that deductive databases will become a more prominent area of computer science in the coming years.

    Goal driven theorem proving using conceptual graphs and Peirce logic

    The thesis describes a rational reconstruction of Sowa's theory of Conceptual Graphs. The reconstruction produces a theory with a firmer logical foundation than was previously the case, one that is suitable for computation whilst retaining the expressiveness of the original theory. Also, several areas of incompleteness are addressed. These mainly concern the scope of operations on conceptual graphs of different types, but include extensions for logics of higher orders than first order. An important innovation is the placing of negation onto a sound representational basis. A comparison of theorem proving techniques is made, from which the principles of theorem proving in Peirce logic are identified. As a result, a set of derived inference rules, suitable for a goal-driven approach to theorem proving, is developed from Peirce's beta rules. These derived rules, the first of their kind for Peirce logic and conceptual graphs, allow the development of a novel theorem proving approach which has some similarities to a combined semantic tableau and resolution methodology. With this methodology it is shown that a logically complete yet tractable system is possible. An important result is the identification of domain-independent heuristics which follow directly from the methodology. In addition to the theorem prover, an efficient system for the detection of selectional constraint violations is developed. The proof techniques are used to build a working knowledge base system in Prolog which can accept arbitrary statements represented by conceptual graphs and test their semantic and logical consistency against a dynamic knowledge base. The same proof techniques are used to find solutions to arbitrary queries. Since the system is logically complete, it can maintain the integrity of its knowledge base and answer queries in a fully automated manner. Thus the system is completely declarative and does not require any programming whatsoever by the user, with the result that all interaction is conversational. Finally, the system is compared with other theorem proving systems based upon Conceptual Graphs, and conclusions about the effectiveness of the methodology are drawn.
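
    The goal-driven proof search described above works backward from a query toward established facts. As a rough illustration of that general control strategy only (a generic backward-chaining sketch in Python, not the thesis's derived Peirce-logic rules or its Prolog implementation; all rule and goal names are invented):

```python
# Minimal goal-driven (backward-chaining) prover over propositional Horn clauses.
# This is a generic illustration of goal-driven proof search, NOT the derived
# Peirce-logic inference rules developed in the thesis; all symbols are invented.

# A rule maps a head to alternative bodies (lists of subgoals); facts have an empty body.
RULES = {
    "consistent_kb": [["no_type_violation", "logically_entailed"]],
    "no_type_violation": [[]],                      # fact
    "logically_entailed": [["axiom_a", "axiom_b"]],
    "axiom_a": [[]],                                # fact
    "axiom_b": [[]],                                # fact
}

def prove(goal, visited=frozenset()):
    """Return True if `goal` can be derived by working backward through RULES."""
    if goal in visited:                 # avoid looping on circular rule sets
        return False
    for body in RULES.get(goal, []):
        if all(prove(subgoal, visited | {goal}) for subgoal in body):
            return True
    return False

if __name__ == "__main__":
    print(prove("consistent_kb"))   # True: every subgoal bottoms out in a fact
    print(prove("unknown_goal"))    # False: no rule concludes it
```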

    Infinite Probabilistic Databases

    Probabilistic databases (PDBs) are used to model uncertainty in data in a quantitative way. In the standard formal framework, PDBs are finite probability spaces over relational database instances. It has been argued convincingly that this is not compatible with an open-world semantics (Ceylan et al., KR 2016) and with application scenarios that are modeled by continuous probability distributions (Dalvi et al., CACM 2009). We recently introduced a model of PDBs as infinite probability spaces that addresses these issues (Grohe and Lindner, PODS 2019). While that work was mainly concerned with countably infinite probability spaces, our focus here is on uncountable spaces. Such an extension is necessary to model typical continuous probability distributions that appear in many applications. However, an extension beyond countable probability spaces raises nontrivial foundational issues concerned with the measurability of events and queries, and ultimately with the question of whether queries have well-defined semantics. It turns out that so-called finite point processes are the appropriate model from probability theory for dealing with probabilistic databases. This model allows us to construct suitable (uncountable) probability spaces of database instances in a systematic way. Our main technical results are measurability statements for relational algebra queries as well as aggregate queries and Datalog queries.
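
    For orientation, the standard finite framework mentioned above can be stated very compactly: a PDB assigns probabilities to possible database instances, and a Boolean query's probability is the total weight of the instances satisfying it. The sketch below illustrates only this finite, tuple-independent case with invented data; the paper's uncountable, point-process model is not reproduced here.

```python
# Finite probabilistic database: a probability space over relational instances.
# Illustrative only -- the paper's contribution concerns *uncountable* spaces
# built from finite point processes, which this toy finite example does not model.

from itertools import product

# A tuple-independent table: each tuple is present independently with its probability.
person_lives_in = {               # (person, city) -> marginal probability
    ("alice", "berlin"): 0.9,
    ("bob",   "berlin"): 0.4,
    ("carol", "paris"):  0.7,
}

def query_probability(table, query):
    """Sum the probabilities of all possible worlds in which `query` holds."""
    tuples = list(table)
    total = 0.0
    for presence in product([True, False], repeat=len(tuples)):
        world = {t for t, present in zip(tuples, presence) if present}
        weight = 1.0
        for t, present in zip(tuples, presence):
            weight *= table[t] if present else 1.0 - table[t]
        if query(world):
            total += weight
    return total

# Boolean query: "does someone live in berlin?"
q = lambda world: any(city == "berlin" for _, city in world)
print(query_probability(person_lives_in, q))   # 1 - (1-0.9)*(1-0.4) = 0.94
```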

    SNOMED CT standard ontology based on the ontology for general medical science

    Background: Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT. Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS). Results: The SCTO is implemented in OWL 2, to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2400 axioms, and 1555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/. Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
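
    As a rough sketch of the kind of OWL 2 modelling and automated consistency checking described above, the snippet below uses the owlready2 Python library; the library choice and all class names are assumptions for illustration and are not the published SCTO classes.

```python
# Sketch of OWL 2 modelling with owlready2 (library choice is an assumption;
# the class names below are illustrative and are not the published SCTO classes).
from owlready2 import Thing, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/scto-sketch.owl")

with onto:
    class Disease(Thing):            # stand-in for an OGMS-style upper-level class
        pass

    class ClinicalFinding(Thing):    # stand-in for an SCT-derived class
        pass

    class Myocarditis(Disease, ClinicalFinding):   # term defined under both parents
        pass

# Run a description-logic reasoner (requires Java); an inconsistent ontology
# raises an error here, which is the kind of automated checking referred to above.
sync_reasoner()

onto.save(file="scto_sketch.owl", format="rdfxml")
```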

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture in which expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial on the formidable amount of work that has been done in formal methods, and a survey of its applications to networking.
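
    As a toy example of the kind of property formal methods can verify on a network, the sketch below checks static forwarding tables for loop freedom; the topology and tables are invented and are not drawn from the survey itself.

```python
# Toy data-plane check of the kind formal methods automate: verify that static
# forwarding tables are loop-free. Topology and tables are invented for illustration.

forwarding = {                       # switch -> {destination: next hop or None}
    "s1": {"h": "s2"},
    "s2": {"h": "s3"},
    "s3": {"h": None},               # None means: deliver locally
}

def has_loop(dest):
    """Return True if some switch forwards packets for `dest` around a cycle."""
    for start in forwarding:
        seen, node = set(), start
        while node is not None:
            if node in seen:
                return True          # revisited a switch: forwarding loop
            seen.add(node)
            node = forwarding[node].get(dest)   # a missing entry also ends the walk
    return False

print(has_loop("h"))   # False here; point s3 back to s1 and it becomes True
```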

    Believe It or Not: Adding Belief Annotations to Databases

    We propose a database model that allows users to annotate data with belief statements. Our motivation comes from scientific database applications where a community of users is working together to assemble, revise, and curate a shared data repository. As the community accumulates knowledge and the database content evolves over time, it may contain conflicting information, and members can disagree on the information it should store. For example, Alice may believe that a tuple should be in the database, whereas Bob disagrees. He may also record the reason why he thinks Alice believes the tuple should be in the database, and explain what he thinks the correct tuple should be instead. We propose a formal model for Belief Databases that interprets users' annotations as belief statements. These annotations can refer both to the base data and to other annotations. We give a formal semantics based on a fragment of multi-agent epistemic logic and define a query language over belief databases. We then prove a key technical result, stating that every belief database can be encoded as a canonical Kripke structure. We use this structure to describe a relational representation of belief databases, and give an algorithm for translating queries over the belief database into standard relational queries. Finally, we report early experimental results with our prototype implementation on synthetic data.
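
    A minimal toy rendering of the core idea, tuples annotated with per-user belief statements, might look as follows; this invented representation is only for intuition and is not the paper's epistemic-logic semantics or its Kripke-structure encoding.

```python
# Toy illustration of belief annotations on tuples -- an invented representation,
# not the paper's formal model or its canonical Kripke encoding.

from collections import defaultdict

# Each base tuple carries, per user, a belief: True (believes it holds) or False.
beliefs = defaultdict(dict)
gene_tuple = ("gene42", "chromosome7")

beliefs[gene_tuple]["alice"] = True    # Alice believes the tuple belongs in the DB
beliefs[gene_tuple]["bob"] = False     # Bob disagrees

def believed_by(user):
    """Return the set of tuples the given user believes hold."""
    return {t for t, opinions in beliefs.items() if opinions.get(user) is True}

def disputed():
    """Return tuples on which users hold conflicting beliefs."""
    return {t for t, opinions in beliefs.items() if len(set(opinions.values())) > 1}

print(believed_by("alice"))   # {('gene42', 'chromosome7')}
print(disputed())             # {('gene42', 'chromosome7')}
```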

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences but also put such systems on weak footing in important tasks, query answering being a very central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together recent paradigms from logic, probabilistic inference, and database theory.
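
    One of the limitations listed above, the closed-world assumption, can be made concrete very simply: any fact absent from the database is assigned probability zero. The toy sketch below (with invented data) contrasts that behaviour with a crude open-world default.

```python
# Toy contrast between the closed-world assumption and a crude open-world default.
# Facts and probabilities are invented for illustration.

born_in = {                          # fact -> probability, e.g. from an extraction tool
    ("turing", "london"): 0.9,
    ("curie",  "warsaw"): 0.8,
}

def prob(fact, open_world_default=None):
    """Closed world: a fact missing from the table has probability exactly 0.
    An open-world variant might instead return a default value (or an interval)."""
    if fact in born_in:
        return born_in[fact]
    return 0.0 if open_world_default is None else open_world_default

print(prob(("turing", "london")))          # 0.9
print(prob(("hopper", "new york")))        # 0.0 -- closed world: treated as false
print(prob(("hopper", "new york"), 0.5))   # 0.5 -- crude open-world default
```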