
    On the strength of proof-irrelevant type theories

    We present a type theory with some proof irrelevance built into the conversion rule. We argue that this feature is useful when type theory is used as the logical formalism underlying a theorem prover. We also show a close relation with the subset types of PVS. We show that in these theories, because of the additional extensionality, the axiom of choice implies the decidability of equality, that is, almost classical logic. Finally, we describe a simple set-theoretic semantics. Comment: 20 pages, Logical Methods in Computer Science; long version of an IJCAR 2006 paper.
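    As a minimal illustration of the kind of conversion rule described here (a sketch in Lean 4, not the paper's own theory), Lean builds proof irrelevance for Prop into definitional equality, so any two proofs of the same proposition are convertible:

```lean
-- Lean 4 sketch: proof irrelevance is part of definitional equality for Prop,
-- so the type checker accepts `rfl` here without inspecting the two proofs.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```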

    Automated Certification of Implicit Induction Proofs

    Theorem proving is crucial for the formal validation of properties about user specifications. With the help of the Coq proof assistant, we show how to certify properties about conditional specifications that are proved using automated proof techniques like those employed by the Spike prover, a rewrite-based implicit induction proof system. The certification methodology is based on a new representation of implicit induction proofs for which the underlying induction principle is an instance of Noetherian induction governed by an induction ordering over equalities. We propose improvements to the certification process and show that the certification time is reasonable even for industrial-size applications. As a case study, we automatically prove and certify more than 40% of the lemmas needed for the validation of a conformance algorithm for the ABR protocol.
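    For reference, the general shape of the Noetherian (well-founded) induction principle that such certifications instantiate can be stated as follows (a Lean 4 sketch of the standard principle; r, P, and step are placeholders, while the proofs described above use an induction ordering over equalities):

```lean
-- Standard well-founded (Noetherian) induction, sketched in Lean 4: if, for
-- every x, P x follows from P holding on all r-smaller elements, then P holds
-- everywhere.
theorem noetherian_induction {α : Type} {r : α → α → Prop}
    (wf : WellFounded r) (P : α → Prop)
    (step : ∀ x, (∀ y, r y x → P y) → P x) : ∀ x, P x :=
  fun x => wf.induction x step
```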

    Verified compilation and optimization of floating-point kernels

    When verifying safety-critical code at the level of source code, we trust the compiler to produce machine code that preserves the behavior of the source code. Trusting a verified compiler is easy: a rigorous, machine-checked proof shows that the compiler correctly translates source code into machine code. Modern verified compilers (e.g. CompCert and CakeML) have rich input languages, but only rudimentary support for floating-point arithmetic. In fact, state-of-the-art verified compilers only implement and verify an inflexible one-to-one translation from floating-point source code to machine code. This translation completely ignores that floating-point arithmetic is actually a discrete representation of the continuous real numbers. This thesis presents two extensions improving floating-point arithmetic in CakeML. First, the thesis demonstrates verified compilation of elementary functions to floating-point code with Dandelion, an automatic verifier for polynomial approximations of elementary functions, and libmGen, a proof-producing compiler relating floating-point machine code to the implemented real-numbered elementary function. Second, the thesis demonstrates verified optimization of floating-point code with Icing, a floating-point language extending standard floating-point arithmetic with optimizations similar to those used by unverified compilers such as GCC and LLVM, and RealCake, an extension of CakeML with Icing into the first fully verified optimizing compiler for floating-point arithmetic.
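    A small standalone example (in OCaml, unrelated to the CakeML code itself) of why such optimizations need verification: floating-point addition is not associative, so a reassociation that would be harmless over the reals changes the computed result.

```ocaml
(* Floating-point addition is not associative: reassociating a sum, as an
   unverified optimizer might, changes the result. *)
let () =
  let left  = (0.1 +. 0.2) +. 0.3 in   (* 0.6000000000000001 *)
  let right = 0.1 +. (0.2 +. 0.3) in   (* 0.6 *)
  Printf.printf "%.17g\n%.17g\nequal: %B\n" left right (left = right)
```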

    POPLMark reloaded: Mechanizing proofs by logical relations

    We propose a new collection of benchmark problems in mechanizing the metatheory of programming languages, in order to compare and push the state of the art of proof assistants. In particular, we focus on proofs using logical relations (LRs) and propose establishing strong normalization of a simply typed calculus with a proof by Kripke-style LRs as a benchmark. We give a modern view of this well-understood problem by formulating our LR on well-typed terms. Using this case study, we share some of the lessons learned tackling this problem in different dependently typed proof environments. In particular, we consider the mechanization in Beluga, a proof environment that supports higher-order abstract syntax encodings, and contrast it with the development and strategies used in general-purpose proof assistants such as Coq and Agda. The goal of this paper is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses, and to engage said community in the crafting of future benchmarks.
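    For readers unfamiliar with the proof technique behind the benchmark, a textbook Tait-style logical relation for strong normalization of the simply typed lambda-calculus looks as follows (a simpler cousin of the Kripke-style relation on well-typed terms proposed here):

```latex
% Reducibility (Tait-style logical relation), defined by recursion on types:
\begin{align*}
  \mathcal{R}_{\iota}   &= \{\, t \mid t \text{ is strongly normalizing} \,\}\\
  \mathcal{R}_{A \to B} &= \{\, t \mid \forall s \in \mathcal{R}_{A},\ t\,s \in \mathcal{R}_{B} \,\}
\end{align*}
% The fundamental lemma shows that every well-typed term is in the relation at
% its type, and membership implies strong normalization.
```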

    SPIKE, an automatic theorem prover -- revisited

    SPIKE, an induction-based theorem prover built to reason on conditional theories with equality, is one of the few formal tools able to perform mutual and lazy induction automatically. Designed in the early 1990s, it has been successfully used in many non-trivial applications and has served as a prototype for different proof experiments and extensions. The first paper introducing SPIKE is [14], published shortly after the tool was created. The goal of this paper is to highlight and bring together in one place the major changes supported by SPIKE since then.

    Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms

    We introduce Meta-F*, a tactics and metaprogramming framework for the F* program verifier. The main novelty of Meta-F* is allowing the use of tactics and metaprogramming to discharge assertions not solvable by SMT, or to simplify them into well-behaved SMT fragments. In addition, Meta-F* can be used to generate verified code automatically. Meta-F* is implemented as an F* effect, which, given the powerful effect system of F*, greatly increases code reuse and even enables the lightweight verification of metaprograms. Metaprograms can be either interpreted, or compiled to efficient native code that can be dynamically loaded into the F* type-checker and can interoperate with interpreted code. Evaluation on realistic case studies shows that Meta-F* provides substantial gains in proof development, efficiency, and robustness. Comment: Full version of the ESOP 2019 paper.

    Certified Multi-Fidelity Zeroth-Order Optimization

    We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function f at various approximation levels (of varying costs), and the goal is to optimize f with the cheapest evaluations possible. In this paper, we study certified algorithms, which are additionally required to output a data-driven upper bound on the optimization error. We first formalize the problem in terms of a min-max game between an algorithm and an evaluation environment. We then propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function f. We also prove an f-dependent lower bound showing that this algorithm has a near-optimal cost complexity. We close the paper by addressing the special case of noisy (stochastic) evaluations as a direct example.
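    One common way the multi-fidelity evaluation model is formalized (stated here as an assumption for illustration, not necessarily the paper's exact setting): querying f at a point x and fidelity z returns a biased value whose deviation from f(x) is bounded by a known quantity that shrinks as the evaluation cost grows.

```latex
% Assumed multi-fidelity model (illustrative, not necessarily the paper's):
% an evaluation at point x and fidelity z costs \lambda(z) and returns f_z(x) with
\[
  |f_z(x) - f(x)| \le \zeta(z), \qquad \zeta(z) \text{ decreasing as the cost } \lambda(z) \text{ grows}.
\]
% A certified algorithm must additionally report a data-driven bound on the
% optimization error of the point it returns, valid for any Lipschitz f
% consistent with the evaluations it actually made.
```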

    Récurrence noethérienne pour le raisonnement de premier ordre

    Noetherian induction is one of the most general principles of formal reasoning. In the setting of first-order reasoning, we present a classification of its instances, which can be divided into term-based and formula-based instances. We give an overview of term-based and formula-based Noetherian induction reasoning, then establish relations between the two. Finally, we present a methodology for certifying formula-based reasoning with the Coq proof assistant.

    From LCF to Isabelle/HOL

    Interactive theorem provers have developed dramatically over the past four decades, from primitive beginnings to today's powerful systems. Here, we focus on Isabelle/HOL and its distinctive strengths. These include automatic proof search, borrowing techniques from the world of first-order theorem proving, but also automatic search for counterexamples. They include a highly readable structured language of proofs and a unique interactive development environment for editing live proof documents. Everything rests on the foundation conceived by Robin Milner for Edinburgh LCF: a proof kernel, using abstract types to ensure soundness and eliminate the need to store proofs. Compared with the research prototypes of the 1970s, Isabelle is a practical and versatile tool. It is used by system designers, mathematicians, and many others.
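    The LCF kernel idea mentioned above can be sketched in a few lines of OCaml (an illustration only, not Isabelle's actual kernel; the module and rule names are invented, and the logic is a toy Hilbert-style implication fragment): the theorem type is abstract, so clients can only obtain theorems through the trusted rules, and no proof objects need to be stored.

```ocaml
(* LCF-style kernel sketch: `thm` is abstract, so the only way to build a
   theorem is via the trusted inference rules below. *)
type form = Atom of string | Imp of form * form

module Kernel : sig
  type thm                                    (* abstract outside this module *)
  val axiom_k : form -> form -> thm           (* |- a -> (b -> a) *)
  val axiom_s : form -> form -> form -> thm   (* |- (a->(b->c)) -> ((a->b)->(a->c)) *)
  val mp : thm -> thm -> thm                  (* from |- a -> b and |- a, infer |- b *)
  val concl : thm -> form                     (* theorems can be inspected, never forged *)
end = struct
  type thm = form
  let axiom_k a b = Imp (a, Imp (b, a))
  let axiom_s a b c = Imp (Imp (a, Imp (b, c)), Imp (Imp (a, b), Imp (a, c)))
  let mp ab a =
    match ab with
    | Imp (x, y) when x = a -> y
    | _ -> failwith "mp: conclusion does not follow"
  let concl t = t
end

(* Example: derive |- p -> p via the usual S K K combination. *)
let () =
  let p = Atom "p" in
  let i =
    Kernel.mp
      (Kernel.mp (Kernel.axiom_s p (Imp (p, p)) p)
                 (Kernel.axiom_k p (Imp (p, p))))
      (Kernel.axiom_k p p)
  in
  assert (Kernel.concl i = Imp (p, p))
```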