
    Requirements for Information Extraction for Knowledge Management

    Knowledge Management (KM) systems inherently suffer from the knowledge acquisition bottleneck: the difficulty of modeling and formalizing knowledge relevant to specific domains. A potential solution to this problem is Information Extraction (IE) technology. However, IE was originally developed for database population, and there is a mismatch between what is required to perform KM successfully and what current IE technology provides. In this paper we begin to address this issue by outlining requirements for IE-based KM.
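
    To make the database-population flavor of classic IE concrete, here is a minimal, hypothetical Python sketch (mine, not the paper's): a single hand-written pattern fills a fixed relation template, the kind of rigid pipeline whose mismatch with KM needs the paper analyzes. The pattern and record fields are invented for illustration.

        import re

        # Hypothetical template-filling extractor: one rigid pattern, one relation.
        PATTERN = re.compile(
            r"(?P<person>[A-Z]\w+ [A-Z]\w+) joined (?P<org>[A-Z]\w+) in (?P<year>\d{4})")

        def extract_employment(text):
            """Populate 'employment' records from free text, database-style."""
            return [m.groupdict() for m in PATTERN.finditer(text)]

        print(extract_employment("Alice Smith joined Acme in 2003."))
        # [{'person': 'Alice Smith', 'org': 'Acme', 'year': '2003'}]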

    Formalizing Size-Optimal Sorting Networks: Extracting a Certified Proof Checker

    Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, formal proofs developed in verified proof assistants have been used to establish the validity of such computer-generated proofs. In this paper, we describe a formalized theory of size-optimal sorting networks. From this formalization we extract a certified checker that successfully verifies computer-generated proofs of optimality on up to 8 inputs. The checker relies on an untrusted oracle to shortcut the search for witnesses on more than 1.6 million NP-complete subproblems.
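
    The checker-and-oracle architecture can be pictured in miniature. Below is a hedged Python sketch (my own illustration, not the extracted Coq checker): a sorting network is a list of comparators, and by the classical 0-1 principle it suffices to check all Boolean inputs to certify that the network sorts.

        from itertools import product

        def apply_network(network, values):
            """Apply a comparator network (list of (i, j) pairs, i < j) to a sequence."""
            v = list(values)
            for i, j in network:
                if v[i] > v[j]:
                    v[i], v[j] = v[j], v[i]
            return v

        def sorts_all(network, n):
            """0-1 principle: a network sorts everything iff it sorts all 0/1 vectors."""
            return all(apply_network(network, bits) == sorted(bits)
                       for bits in product((0, 1), repeat=n))

        # A size-optimal 5-comparator network on 4 inputs.
        assert sorts_all([(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)], 4)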

    Optimizing a Certified Proof Checker for a Large-Scale Computer-Generated Proof

    In recent work, we formalized the theory of optimal-size sorting networks with the goal of extracting a verified checker for the large-scale computer-generated proof that 25 comparisons are optimal when sorting 9 inputs, a proof that required more than a decade of CPU time and produced 27 GB of proof witnesses. The checker uses an untrusted oracle based on these witnesses and is able to verify the smaller case of 8 inputs within a couple of days, but it did not scale to the full proof for 9 inputs. In this paper, we describe several non-trivial optimizations of the algorithm in the checker, obtained by appropriately changing the formalization and by exploiting the interplay with an adequate implementation of the oracle. We provide experimental evidence of orders-of-magnitude improvements in both runtime and memory footprint for 8 inputs, and we manage to check the full proof for 9 inputs.
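
    One way to picture the oracle interplay described above (a generic Python sketch under my own assumptions; stream_witnesses, check_proof, and verify_step are invented names, not the authors' implementation): the checker streams witnesses from the proof trace instead of materializing all 27 GB at once, keeping memory flat while still re-verifying every untrusted answer.

        def stream_witnesses(path):
            """Yield proof witnesses one record at a time; never load the full trace."""
            with open(path) as trace:
                for line in trace:
                    yield line.split()

        def check_proof(path, verify_step):
            """Trust nothing from the oracle: re-verify each streamed witness."""
            for step, witness in enumerate(stream_witnesses(path)):
                if not verify_step(witness):
                    raise ValueError(f"oracle witness {step} failed verification")
            return True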

    Beyond Logic Programming for Legal Reasoning

    Logic programming has long been advocated for legal reasoning, and several approaches have been put forward that rely on an explicit representation of the law in logic programming terms. In this position paper we focus on PROLEG, a logic-programming-based framework for formalizing and reasoning with the Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques to improve legal reasoning with PROLEG, identifying four distinct options that range from enhancing fact extraction with deep learning to end-to-end solutions for reasoning over textual legal descriptions. We assess the advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of the challenges and potential advancements offered by neuro-symbolic approaches in legal applications.
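
    PROLEG encodes the presupposed ultimate fact theory as default rules with exceptions. The following toy Python rendering is illustrative only (PROLEG itself is Prolog-based, and the rule and fact names here are invented): a conclusion holds when its ultimate facts are proven and no exception fact is.

        # Toy default-rule reasoner in the spirit of presupposed ultimate fact theory.
        RULES = {
            "contract_valid": {"requires": {"offer", "acceptance"},
                               "unless": {"party_incapacitated"}},
        }

        def holds(conclusion, proven_facts):
            """Conclusion holds iff all required facts are proven and no exception is."""
            rule = RULES[conclusion]
            return (rule["requires"] <= proven_facts
                    and not (rule["unless"] & proven_facts))

        print(holds("contract_valid", {"offer", "acceptance"}))            # True
        print(holds("contract_valid",
                    {"offer", "acceptance", "party_incapacitated"}))       # False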

    Ontology-based Information Extraction with SOBA

    In this paper we describe SOBA, a sub-component of the SmartWeb multi-modal dialog system. SOBA is a component for ontology-based information extraction from soccer web pages that automatically populates a knowledge base usable for domain-specific question answering. SOBA realizes a tight connection between the ontology, the knowledge base, and the information extraction component. The originality of SOBA lies in the fact that it extracts information from heterogeneous sources such as tabular structures, text, and image captions in a semantically integrated way. In particular, it stores extracted information in a knowledge base and, in turn, uses the knowledge base to interpret and link newly extracted information with respect to already existing entities.
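
    The store-then-link loop described here can be sketched generically (hypothetical names and structure, not SOBA's actual API): each new extraction is matched against existing knowledge-base entities before being inserted.

        kb = {}  # hypothetical knowledge base: entity name -> attributes

        def link_or_insert(name, attrs):
            """Interpret a new extraction against already existing entities."""
            entity = kb.get(name)
            if entity is None:
                kb[name] = dict(attrs)        # unseen: create a new entity
            else:
                entity.update(attrs)          # seen: link and merge attributes

        link_or_insert("Ronaldo", {"team": "Brazil"})   # e.g. from a match table
        link_or_insert("Ronaldo", {"goals": 2})         # e.g. from an image caption
        print(kb["Ronaldo"])  # {'team': 'Brazil', 'goals': 2}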

    Formal study of plane Delaunay triangulation

    This article presents the formal proof of correctness for a plane Delaunay triangulation algorithm. The algorithm consists of repeatedly flipping edges, starting from an initial triangulation, until the Delaunay property is achieved. To describe triangulations, we rely on a combinatorial hypermap specification framework that we have been developing for years. We embed hypermaps in the plane by attaching coordinates to elements in a consistent way. We then characterize legal and illegal Delaunay edges and define a flipping operation, which we show preserves the hypermap, triangulation, and embedding invariants. To prove the termination of the algorithm, we use a generic approach expressing that any acyclic relation on a finite set is well-founded.
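
    The legality criterion behind such flip algorithms is the classical incircle test: the edge shared by two triangles is illegal when the opposite vertex of one triangle lies strictly inside the circumcircle of the other. Below is the standard determinant formulation in Python (the textbook test, not the hypermap formalization itself).

        def in_circumcircle(a, b, c, d):
            """True iff d lies strictly inside the circumcircle of the
            counterclockwise triangle (a, b, c): the incircle determinant > 0."""
            rows = [(x - d[0], y - d[1], (x - d[0])**2 + (y - d[1])**2)
                    for x, y in (a, b, c)]
            (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
            return (ax * (by * cw - bw * cy)
                    - ay * (bx * cw - bw * cx)
                    + aw * (bx * cy - by * cx)) > 0

        # The shared edge of triangles (a, b, c) and (b, a, d) is illegal,
        # and gets flipped, when d falls inside circumcircle(a, b, c).
        print(in_circumcircle((0, 0), (2, 0), (1, 2), (1, 0.5)))  # True: flip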

    Extending the Carrel system to mediate in the organ and tissue allocation processes: a first approach

    In this paper we extend the formalization of Carrel, a virtual organization for the procurement of tissues for transplantation purposes, in order to also model the procurement of human organs for transplants. We focus on the organ allocation process to show how it can be formalized with the ISLANDER formalism. We also present a first mechanism for federating the institution across several geographically distributed platforms.
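
    As a rough illustration of the kind of decision an organ allocation process mediates (a toy Python sketch with invented criteria, in no way Carrel's actual allocation rules): each available organ is offered to the compatible recipient who has waited longest.

        from dataclasses import dataclass

        @dataclass
        class Recipient:
            name: str
            blood_type: str
            waiting_days: int

        def allocate(organ_blood_type, waiting_list):
            """Toy rule (invented): longest-waiting compatible recipient wins."""
            compatible = [r for r in waiting_list
                          if r.blood_type == organ_blood_type]
            return max(compatible, key=lambda r: r.waiting_days, default=None)

        wl = [Recipient("A", "O", 120), Recipient("B", "O", 300)]
        print(allocate("O", wl).name)  # B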