
    Degree of Satisfiability in Heyting Algebras

    Given a finite structure M and a property p, it is natural to study the degree of satisfiability of p in M, i.e. to ask: what is the probability that uniformly randomly chosen elements of M satisfy p? In group theory, a well-known result of Gustafson states that the equation xy = yx has a finite satisfiability gap: its degree of satisfiability is either 1 (in Abelian groups) or no larger than 5/8. Degree of satisfiability has proven useful in the study of (finite and infinite) group-like and ring-like algebraic structures, but finite satisfiability gap questions have not yet been considered in lattice-like, order-theoretic settings. Here we investigate degree of satisfiability questions in the context of Heyting algebras and intuitionistic logic. We classify all equations in one free variable with respect to finite satisfiability gap, and determine which common principles of classical logic in multiple free variables have finite satisfiability gap. In particular, we prove that in a finite non-Boolean Heyting algebra, the probability that a randomly chosen element satisfies x ∨ ¬x = ⊤ is no larger than 2/3. Finally, we generalize our results to infinite Heyting algebras, and present their applications to point-set topology, black-box algebras, and the philosophy of logic. Comment: 22 pages, 2 figures. To appear in Journal of Symbolic Logic. Changes: final version, with streamlined proofs and minor changes throughout.
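
    As a quick sanity check of the 2/3 bound (our own illustration, not taken from the paper), the following Python snippet brute-forces the degree of satisfiability of x ∨ ¬x = ⊤ in the three-element chain 0 < a < 1, the smallest non-Boolean Heyting algebra:

    ```python
    # A minimal sketch: compute the degree of satisfiability of x v ~x = T
    # in the 3-element chain, encoded as 0 = bottom, 1 = middle, 2 = top.
    from fractions import Fraction

    chain = [0, 1, 2]

    def join(x, y): return max(x, y)

    def implies(x, y):
        # Relative pseudocomplement on a chain: x -> y is top if x <= y, else y.
        return 2 if x <= y else y

    def neg(x):
        # Intuitionistic negation: ~x = x -> bottom.
        return implies(x, 0)

    satisfied = [x for x in chain if join(x, neg(x)) == 2]
    print(Fraction(len(satisfied), len(chain)))  # 2/3: only bottom and top satisfy it
    ```

    The middle element a has ¬a = 0, so a ∨ ¬a = a ≠ ⊤; the bound 2/3 is attained on this algebra.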

    Complexity Hierarchies Beyond Elementary

    We introduce a hierarchy of fast-growing complexity classes and show its suitability for completeness statements of many non-elementary problems. This hierarchy allows the classification of many decision problems with non-elementary complexity, which occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond. Comment: Version 3 is the published version in TOCT 8(1:3), 2016. I will keep updating the catalogue of problems from Section 6 in future revisions.
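
    To make the two ends of that complexity range concrete, here is a small Python illustration (our own, not the paper's notation) of a tower of exponentials versus the Ackermann function, which eventually dominates every tower of fixed height:

    ```python
    # Illustrative growth rates only: the low and high ends of the hierarchy.
    def tower(n, base=2):
        """Tower of exponentials of height n: base^base^...^base."""
        result = 1
        for _ in range(n):
            result = base ** result
        return result

    def ackermann(m, n):
        """Classic two-argument Ackermann function."""
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(tower(3))         # 16
    print(tower(4))         # 65536
    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61
    ```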

    LinGraph: a graph-based automated planner for concurrent task planning based on linear logic

    In this paper, we introduce an automated planner for deterministic, concurrent domains, formulated as a graph-based theorem prover for a propositional fragment of intuitionistic linear logic, relying on the previously established connection between intuitionistic linear logic and planning problems. The new graph-based theorem prover we introduce improves planning performance by reducing proof permutations that are irrelevant to planning problems, particularly in the presence of large numbers of objects and agents with identical properties (e.g. robots within a swarm, or parts in a large factory). We first present our graph-based automated planner, the Linear Logic Graph Planner (LinGraph). Subsequently, we illustrate its application to planning within a concurrent manufacturing domain and provide comparisons with four existing automated planners, BlackBox, Symba-2, Metis and Temporal Fast Downward (TFD), covering a wide range of state-of-the-art automated planning techniques and implementations. We show that even though LinGraph does not rely on any heuristics, it still outperforms these systems for concurrent domains with large numbers of identical objects and agents. These gains persist even when existing methods for symmetry reduction and numerical fluents are used, with LinGraph capable of handling problems with thousands of objects. Following these results, we also show that plan construction with LinGraph is equivalent to multiset rewriting systems, formally relating LinGraph to intuitionistic linear logic.
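
    The multiset-rewriting view of planning is easy to demonstrate. The toy Python planner below (our own domain and rule names; this is not LinGraph) treats states as multisets of resource facts, and a rule fires by consuming its left-hand side and producing its right-hand side, mirroring linear-logic implication:

    ```python
    # A toy multiset-rewriting planner: breadth-first search over multisets.
    from collections import Counter, deque

    rules = {
        # rule name: (consumed multiset, produced multiset)
        "assemble": (Counter({"part": 2, "robot_free": 1}),
                     Counter({"widget": 1, "robot_free": 1})),
        "pack":     (Counter({"widget": 1, "box": 1}),
                     Counter({"packed": 1})),
    }

    def applicable(state, lhs):
        return all(state[k] >= v for k, v in lhs.items())

    def plan(start, goal):
        frozen = lambda c: tuple(sorted(c.items()))
        queue, seen = deque([(start, [])]), {frozen(start)}
        while queue:
            state, steps = queue.popleft()
            if applicable(state, goal):
                return steps
            for name, (lhs, rhs) in rules.items():
                if applicable(state, lhs):
                    nxt = (state - lhs) + rhs  # consume resources, produce new ones
                    if frozen(nxt) not in seen:
                        seen.add(frozen(nxt))
                        queue.append((nxt, steps + [name]))
        return None

    start = Counter({"part": 2, "robot_free": 1, "box": 1})
    print(plan(start, Counter({"packed": 1})))  # ['assemble', 'pack']
    ```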

    Proof-relevant resolution : the foundations of constructive proof automation

    Dependent type theory is an expressive programming language. This language makes it possible to write programs that carry proofs of their properties, which in turn gives high confidence in such programs, making the software trustworthy. Yet, the trustworthiness comes at a price: type inference involves an increasing number of proof obligations. Automation of this process becomes necessary for any system with dependent types that aims to be usable in practice. At the same time, implementing automation in a verified manner is prohibitively complex. Sometimes, external solvers are used to aid the automation. These solvers may be based on classical logic and may not themselves be verified, thus compromising the guarantees provided by the constructive nature of type theory. In this thesis, we explore the idea of proof-relevant resolution, which allows automation of type inference in type theory in a verifiable and constructive manner, and hence restores confidence in programs and the trustworthiness of software. The technical content of this thesis is threefold. First, we propose a novel framework for proof-relevant resolution. We take two constructive logics, the Horn-clause and hereditary Harrop formula logics, as a starting point. We formulate the standard big-step operational semantics of these logics. We expose their Curry-Howard nature by treating formulae of these logics as types and proofs as terms, thus developing a theory of proof-relevant resolution. We develop a small-step operational semantics of proof-relevant resolution and prove it sound with respect to the big-step operational semantics. Secondly, we demonstrate our approach on the example of type inference in the Logical Framework (LF). We translate a type-inference problem in LF into resolution in proof-relevant Horn-clause logic. Such resolution provides, besides an answer substitution for logic variables, a proof term that captures the resolution tree. We interpret the proof term as a derivation of the well-formedness judgement of the object in the original problem. This allows for a straightforward implementation of type checking of the resolved solution, since type checking is reduced to verifying the derivation captured by the proof term. The theoretical development is substantiated by an implementation. Finally, we demonstrate that our approach allows us to reason about semantic properties of code. Type class resolution has long been known to be a proof-relevant fragment of Horn-clause logic, and recently its coinductive extensions were introduced. In this thesis, we show that all of these extensions amalgamate with the theoretical framework we introduce. Our novel result here is that the coinductive extensions are actually based on hereditary Harrop logic, rather than Horn-clause logic. We establish a number of soundness and completeness results for them. We also discuss the soundness of program transformations that are allowed by the proof-relevant presentation of type class resolution.
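
    The core idea, resolution that returns evidence rather than a bare yes/no, can be sketched in a few lines of Python (clause names and the ground, propositional setting are our simplification; the thesis works with full Horn-clause and hereditary Harrop logics with unification):

    ```python
    # Proof-relevant resolution over ground Horn clauses: resolving a goal
    # returns a proof term recording which clauses were used, so the answer
    # can be checked independently.
    program = {
        # clause name: (head, list of body atoms)
        "k1": ("eq_nat", ["eq_int"]),
        "k2": ("eq_int", []),
        "k3": ("eq_list_nat", ["eq_nat"]),
    }

    def resolve(goal):
        """Return a proof term (clause_name, subproofs) for the goal, or None."""
        for name, (head, body) in program.items():
            if head == goal:
                subproofs = [resolve(atom) for atom in body]
                if all(p is not None for p in subproofs):
                    return (name, subproofs)
        return None

    print(resolve("eq_list_nat"))
    # ('k3', [('k1', [('k2', [])])]) -- the proof term is the derivation tree
    ```

    This is also the shape of type class resolution: the proof term plays the role of the dictionary that witnesses the instance derivation.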

    Assertion-based slicing and slice graphs

    This paper revisits the idea of slicing programs based on their axiomatic semantics, rather than using criteria based on control/data dependencies. We show how the forward propagation of preconditions and the backward propagation of postconditions can be combined in a new slicing algorithm that is more precise than the existing specification-based algorithms. The algorithm is based on (i) a precise test for removable statements, and (ii) the construction of a slice graph: a program control flow graph extended with semantic labels and additional edges that "short-circuit" removable commands. It improves on previous approaches in two respects: it does not fail to identify removable commands; and it produces the smallest possible slice that can be obtained (in a sense that will be made precise). Iteration is handled through the use of loop invariants and variants to ensure termination. The paper also discusses in detail applications of these forms of slicing, including the elimination of (conditionally) unreachable and dead code, and compares them to other related notions. Fundação para a Ciência e a Tecnologia (FCT).
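
    The removable-statement test can be illustrated with a brute-force toy (our own; the paper discharges such tests with verification conditions, not enumeration): a statement in a pre/post-annotated sequence is removable if deleting it still takes every state satisfying the precondition to a state satisfying the postcondition.

    ```python
    # Brute-force "removable statement" check over a small state space.
    from itertools import product

    stmts = [
        ("y := 0",     lambda s: {**s, "y": 0}),
        ("x := x + 1", lambda s: {**s, "x": s["x"] + 1}),
        ("y := x - x", lambda s: {**s, "y": s["x"] - s["x"]}),
    ]

    pre  = lambda s: s["x"] >= 0
    post = lambda s: s["y"] == 0 and s["x"] >= 1

    def run(program, s):
        for _, f in program:
            s = f(s)
        return s

    def removable(i, states):
        sliced = stmts[:i] + stmts[i + 1:]
        return all(post(run(sliced, s)) for s in states if pre(s))

    states = [{"x": x, "y": y} for x, y in product(range(-2, 3), repeat=2)]
    for i, (text, _) in enumerate(stmts):
        print(f"{text!r} removable: {removable(i, states)}")
    # 'y := 0' and 'y := x - x' are each individually removable;
    # 'x := x + 1' is not (it establishes x >= 1).
    ```

    Note that the two assignments to y are each removable in isolation but not simultaneously, which is exactly the kind of interaction the slice graph is designed to track.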

    Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis

    Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large parts: there is always a trade-off between expressibility and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that these constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web. Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that allows one to analyze data-sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data-set in form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness. In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
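
    The FCA machinery mentioned above rests on two derivation operators, which the following Python sketch applies to a toy context (the data are ours; real implementations use the NextClosure algorithm rather than brute force). A formal concept is a pair (extent, intent) where each component is exactly the derivation of the other:

    ```python
    # Compute all formal concepts of a small context by brute force.
    from itertools import chain, combinations

    objects = {"duck", "owl", "carp"}
    attributes = {"flies", "swims", "has_feathers"}
    incidence = {
        ("duck", "flies"), ("duck", "swims"), ("duck", "has_feathers"),
        ("owl", "flies"), ("owl", "has_feathers"),
        ("carp", "swims"),
    }

    def prime_objects(objs):
        """Attributes shared by all given objects."""
        return {a for a in attributes if all((o, a) in incidence for o in objs)}

    def prime_attrs(attrs):
        """Objects having all given attributes."""
        return {o for o in objects if all((o, a) in incidence for a in attrs)}

    def powerset(s):
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    concepts = set()
    for objs in powerset(objects):
        intent = prime_objects(set(objs))
        extent = prime_attrs(intent)   # closure: extent'' of the chosen objects
        concepts.add((frozenset(extent), frozenset(intent)))

    for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
        print(set(extent) or "{}", "|", set(intent) or "{}")
    ```

    Ordering these concepts by inclusion of extents yields the formal concept lattice of the context.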

    Abstract Contract Synthesis and Verification in the Symbolic K Framework

    In this article, we propose a symbolic technique that can be used for automatically inferring software contracts from programs that are written in a non-trivial fragment of C, called KERNELC, that supports pointer-based structures and heap manipulation. Starting from the semantic definition of KERNELC in the K semantic framework, we enrich the symbolic execution facilities recently provided by K with novel capabilities for contract synthesis that are based on abstract subsumption. Roughly speaking, we define an abstract symbolic technique that axiomatically explains the execution of any (modifier) C function by using other (observer) routines in the same program. We implemented our technique in the automated tool KINDSPEC 2.1, which generates logical axioms that express pre- and post-condition assertions defining the precise input/output behavior of the C routines. Thanks to the integrated support for symbolic execution and deductive verification provided by K, some synthesized axioms that cannot be guaranteed correct by construction due to abstraction can finally be verified in our setting with little effort. This work has been partially supported by the EC H2020-EU grant agreement No. 952215 (TAILOR), the EU (FEDER) and the Spanish MCIU under grant RTI2018-094403-B-C32, and by Generalitat Valenciana under grant PROMETEO/2019/098. Alpuente Frasnedo, M.; Pardo, D.; Villanueva, A. (2020). Abstract Contract Synthesis and Verification in the Symbolic K Framework. Fundamenta Informaticae 177(3-4):235-273. https://doi.org/10.3233/FI-2020-1989
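
    To convey the modifier/observer idea in miniature, here is a guess-and-check toy in Python (entirely our own illustration: KINDSPEC derives its axioms by symbolic execution and abstract subsumption in K, not by testing). Candidate observer equations about a modifier are kept only if they hold on many concrete runs:

    ```python
    # Explain the modifier `push` through observers `size` and `top` by
    # filtering candidate axioms against concrete test inputs.
    import random

    def push(stack, v):   # modifier under analysis
        return stack + [v]

    def size(stack):      # observer routines
        return len(stack)

    def top(stack):
        return stack[-1] if stack else None

    candidates = {
        "size(push(s, v)) == size(s) + 1": lambda s, v: size(push(s, v)) == size(s) + 1,
        "top(push(s, v)) == v":            lambda s, v: top(push(s, v)) == v,
        "size(push(s, v)) == size(s)":     lambda s, v: size(push(s, v)) == size(s),
    }

    random.seed(0)
    tests = [([random.randint(0, 9) for _ in range(random.randint(0, 5))],
              random.randint(0, 9)) for _ in range(200)]

    axioms = [text for text, check in candidates.items()
              if all(check(s, v) for s, v in tests)]
    print(axioms)  # the two true equations survive; the false one is rejected
    ```

    Unlike this testing-based sketch, the symbolic approach of the paper can verify such axioms deductively rather than merely failing to refute them.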