
    Search for Program Structure

    The community of programming language research loves the Curry-Howard correspondence between proofs and programs. Cut-elimination as computation, theorems for free, 'call/cc' as excluded middle, dependently typed languages as proof assistants, etc. Yet we have, for all these years, missed an obvious observation: "the structure of programs corresponds to the structure of proof search". For pure programs and intuitionistic logic, more is known about the latter than the former. We think we know what programs are, but logicians know better! To motivate the study of proof search for program structure, we retrace recent research on applying focusing to study the canonical structure of simply-typed lambda-terms. We then motivate the open problem of extending canonical forms to support richer type systems, such as polymorphism, by discussing a few enticing applications of more canonical program representations.
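    Not from the paper, but as a minimal Haskell sketch of the redundancy that canonical forms are meant to collapse: the two definitions below are beta-eta-equivalent programs of the same simple type, and an eta-long (focused) normal form would choose a single representative for both.

```haskell
-- Two beta-eta-equivalent ways to write the same simply-typed program.
-- A focused / eta-long normal form picks one canonical representative
-- of this equivalence class.
compose1 :: (b -> c) -> (a -> b) -> a -> c
compose1 g f = g . f             -- eta-short, point-free

compose2 :: (b -> c) -> (a -> b) -> a -> c
compose2 g f = \x -> g (f x)     -- eta-long ("canonical" shape)
```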

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved, the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results could be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
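    A hedged Haskell rendering of the approximation the abstract describes (names and types are illustrative, not from the paper): lists are abstracted to the multiset of their items, and the sort example becomes a checkable property.

```haskell
import qualified Data.Map.Strict as Map

-- Approximate a list by the multiset (bag) of the items it contains.
type Multiset a = Map.Map a Int

multiset :: Ord a => [a] -> Multiset a
multiset = Map.fromListWith (+) . map (\x -> (x, 1))

-- Partial-correctness property for a sort: the output is a permutation
-- of the input, i.e. both sides have the same underlying multiset.
preservesMultiset :: Ord a => ([a] -> [a]) -> [a] -> Bool
preservesMultiset sortFn t = multiset (sortFn t) == multiset t

-- If sorting also removes duplicates, only the underlying *set* is preserved.
preservesSet :: Ord a => ([a] -> [a]) -> [a] -> Bool
preservesSet sortFn t =
  Map.keysSet (multiset (sortFn t)) == Map.keysSet (multiset t)
```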

    Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms

    We introduce Meta-F*, a tactics and metaprogramming framework for the F* program verifier. The main novelty of Meta-F* is allowing the use of tactics and metaprogramming to discharge assertions not solvable by SMT, or to simplify them into well-behaved SMT fragments. In addition, Meta-F* can be used to generate verified code automatically. Meta-F* is implemented as an F* effect, which, given the powerful effect system of F*, greatly increases code reuse and even enables the lightweight verification of metaprograms. Metaprograms can be either interpreted, or compiled to efficient native code that can be dynamically loaded into the F* type-checker and can interoperate with interpreted code. Evaluation on realistic case studies shows that Meta-F* provides substantial gains in proof development, efficiency, and robustness. (Full version of the ESOP'19 paper.)

    Dialectica Interpretation with Marked Counterexamples

    Gödel's functional "Dialectica" interpretation can be used to extract functional programs from non-constructive proofs in arithmetic by employing two sorts of higher-order witnessing terms: positive realisers and negative counterexamples. In the original interpretation, decidability of atoms is required to compute the correct counterexample from a set of candidates. When combined with recursion, this choice must be made at every step of the extracted program; in some special cases, however, the decision on negative witnesses needs to be computed only once. We present a variant of the interpretation in which the time complexity of extracted programs can be improved by marking the chosen witness and thus avoiding recomputation. The achieved effect is similar to using an abortive control operator to interpret the computational content of non-constructive principles. (In Proceedings CL&C 2010, arXiv:1101.520)
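    For orientation, here is the standard Dialectica clause for implication (a sketch of the classical presentation, not the paper's marked variant), which shows where the two sorts of witnesses interact and where a counterexample has to be computed from a candidate.

```latex
% Standard Dialectica interpretation, sketch.
% Assume A^D = \exists x\,\forall y\, A_D(x,y) and B^D = \exists u\,\forall v\, B_D(u,v).
\[
  (A \to B)^D \;=\; \exists U\,\exists Y\;\forall x\,\forall v\;
    \bigl( A_D(x,\; Y\,x\,v) \;\to\; B_D(U\,x,\; v) \bigr)
\]
% U x is the positive realiser for B obtained from a realiser x of A, while
% Y x v is the negative counterexample handed back to A. Selecting the correct
% counterexample is where decidability of atoms (and, in the paper, marking)
% comes into play.
```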

    Proof Complexity of Systems of (Non-Deterministic) Decision Trees and Branching Programs

    This paper studies propositional proof systems in which lines are sequents of decision trees or branching programs, deterministic or non-deterministic. Decision trees (DTs) are represented by a natural term syntax, inducing the system LDT, and non-determinism is modelled by including disjunction, ∨, as primitive (system LNDT). Branching programs generalise DTs to dag-like structures and are duly handled by extension variables in our setting, as is common in proof complexity (systems eLDT and eLNDT). Deterministic and non-deterministic branching programs are natural non-uniform analogues of log-space (L) and non-deterministic log-space (NL), respectively. Thus eLDT and eLNDT serve as natural systems of reasoning corresponding to L and NL, respectively. The main results of the paper are simulation and non-simulation results for tree-like and dag-like proofs in LDT, LNDT, eLDT and eLNDT. We also compare them with Frege systems, constant-depth Frege systems and extended Frege systems.
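    As an illustration only (the datatype below is hypothetical and not the paper's exact term syntax), here is a Haskell sketch of decision trees with a primitive disjunction, in the spirit of LDT versus LNDT.

```haskell
-- Hypothetical term syntax for (non-deterministic) decision trees.
data DT v
  = Leaf Bool              -- the constants 0 and 1
  | Case v (DT v) (DT v)   -- query variable v: first subtree if False, second if True
  | Or (DT v) (DT v)       -- primitive disjunction (the non-deterministic case, LNDT)

-- Semantics under a propositional assignment.
eval :: (v -> Bool) -> DT v -> Bool
eval _   (Leaf b)     = b
eval rho (Case x f t) = if rho x then eval rho t else eval rho f
eval rho (Or a b)     = eval rho a || eval rho b

-- Branching programs (eLDT / eLNDT) would additionally allow dag-like
-- sharing of subtrees, handled in the proof systems via extension variables.
```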

    Perspectives for proof unwinding by programming languages techniques

    In this chapter, we propose some future directions of work, potentially beneficial to Mathematics and its foundations, based on the recent import of methodology from the theory of programming languages into proof theory. This scientific essay, written for proof theorists as well as the working mathematician, is not a survey of the field but rather a personal view of the author, who hopes that it may inspire future and fellow researchers.

    Hipster: Integrating Theory Exploration in a Proof Assistant

    This paper describes Hipster, a system integrating theory exploration with the proof assistant Isabelle/HOL. Theory exploration is a technique for automatically discovering new, interesting lemmas in a given theory development. Hipster can be used in two main modes. The first is exploratory mode, used for automatically generating basic lemmas about a given set of datatypes and functions in a new theory development. The second is proof mode, used during a particular proof attempt to discover the missing lemmas that would allow the current goal to be proved. Hipster's proof mode complements and boosts existing proof automation techniques that rely on automatically selecting existing lemmas, by inventing new lemmas that need induction to be proved. We show example uses of both modes.
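    To make the "missing lemma" idea concrete, here is a hedged Haskell version of a classic example of the kind such tools target (Hipster itself works on Isabelle/HOL theories): proving that accumulator-based reversal agrees with naive reversal gets stuck without an auxiliary, induction-requiring lemma, which theory exploration can conjecture automatically.

```haskell
import Test.QuickCheck (quickCheck)

-- Naive and accumulator-based list reversal.
rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]

qrev :: [a] -> [a] -> [a]
qrev []     acc = acc
qrev (x:xs) acc = qrev xs (x:acc)

-- The auxiliary lemma a theory-exploration tool might discover:
-- proving  qrev xs [] == rev xs  directly by induction gets stuck,
-- but this generalisation goes through by induction on xs.
prop_qrev_rev :: [Int] -> [Int] -> Bool
prop_qrev_rev xs ys = qrev xs ys == rev xs ++ ys

main :: IO ()
main = quickCheck prop_qrev_rev
```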

    Modular, Fully-abstract Compilation by Approximate Back-translation

    A compiler is fully abstract if the compilation from source-language programs to target-language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from the simply-typed lambda calculus (STLC) to the untyped lambda calculus (ULC), the lack of recursive types in the former prevents such a back-translation. We propose a general and elegant solution to this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. The proof extends easily to common compiler patterns such as modular compilation and, to the best of our knowledge, it is the first compiler full-abstraction proof to have been fully mechanised in Coq. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full-abstraction.
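    As a hedged sketch (hypothetical deep embeddings, not the paper's Coq development), the compiler from STLC to ULC in such a setting is essentially type erasure; the difficulty the abstract addresses lies entirely in back-translating ULC contexts, which the approximate approach only requires to be accurate up to a chosen number of steps.

```haskell
-- Minimal deep embeddings of a simply-typed and an untyped lambda calculus
-- (illustrative only; de Bruijn indices for variables).
data Ty = TBool | TArr Ty Ty

data STLC
  = SVar Int
  | SLam Ty STLC
  | SApp STLC STLC
  | SBool Bool

data ULC
  = UVar Int
  | ULam ULC
  | UApp ULC ULC
  | UBool Bool

-- Compilation here is plain type erasure; the hard part discussed in the
-- abstract is back-translating ULC contexts into STLC, which only needs to
-- be accurate up to a fixed number of evaluation steps.
erase :: STLC -> ULC
erase (SVar i)   = UVar i
erase (SLam _ b) = ULam (erase b)
erase (SApp f a) = UApp (erase f) (erase a)
erase (SBool b)  = UBool b
```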