
    Automatically Deriving Schematic Theorems for Dynamic Contexts

    Hypothetical judgments go hand-in-hand with higher-order abstract syntax for meta-theoretic reasoning. Such judgments have two kinds of assumptions: those that are statically known from the specification, and the dynamic assumptions that result from building derivations out of the specification clauses. These dynamic assumptions often have a simple regular structure: repetitions of blocks of related assumptions, with each block generally involving one or several variables and their properties, added to the context in a single backchaining step. Reflecting on this regular structure lets us derive a number of structural properties about the elements of the context. We present an extension of the Abella theorem prover, which is based on a simply typed intuitionistic reasoning logic supporting (co-)inductive definitions and generic quantification. Dynamic contexts are represented in Abella using lists of formulas for the assumptions and quantifier nesting for the variables, together with an inductively defined context relation that specifies their structure. We add a new mechanism for defining particular kinds of regular context relations, called schemas, and tacticals to derive theorems from these schemas as needed. Importantly, our extension leaves the trusted kernel of Abella unchanged. We show that these tacticals can eliminate many commonly encountered kinds of administrative lemmas that would otherwise have to be proven manually, a common source of complaints from Abella users.

    Automating the Proofs of Strengthening Lemmas in the Abella Proof Assistant

    In logical reasoning, it is often the case that only some of a collection of assumptions are needed to reach a conclusion. A strengthening lemma is an assertion that a given conclusion is independent, in this sense, of a particular assumption. Strengthening lemmas underlie many useful techniques for simplifying proofs in automated and interactive theorem provers. For example, they underlie a mechanism called subordination that is useful in determining that expressions of a particular type cannot contain objects of another type, and in thereby reducing the number of cases to be considered in proving universally quantified statements. This thesis concerns the automation of the proofs of strengthening lemmas in a specification logic called the logic of higher-order hereditary Harrop formulas (HOHH). The Abella Proof Assistant embeds this logic in a way that allows it to prove properties both of the logic itself and of specifications written in it. Previous research has articulated a (conservative) algorithm for checking if a claimed strengthening lemma is, in fact, true. We provide here an implementation of this algorithm within the setting of Abella. Moreover, we show how to generate an actual proof of the strengthening lemma in Abella from the information computed by the algorithm; such a proof serves as a more trustworthy certificate of the correctness of the lemma than the algorithm itself. The results of this work have been incorporated into the Abella system in the form of a "tactic command" that can be invoked within the interactive theorem prover, and that results in an elaboration of a proof of the lemma and its incorporation into the collection of proven facts about a given specification.
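    The subordination idea described above can be pictured as reachability in a type dependency graph: if terms of type A can never occur inside terms of type B, assumptions about A-typed objects can be strengthened away when reasoning about B. The following sketch is only an illustration of that intuition, not the thesis's algorithm; the edge relation and type names are invented.

```python
# Hypothetical sketch: subordination approximated as reachability in a
# type dependency graph. An edge (a, b) means "terms of type a may occur
# directly inside terms of type b". If the closure for b never mentions a,
# then a-typed assumptions are irrelevant when proving facts about b.

def subordination_closure(edges):
    """For each type, compute the set of types whose terms may occur in it."""
    inside = {}
    for a, b in edges:                 # a may occur directly inside b
        inside.setdefault(b, set()).add(a)
        inside.setdefault(a, set())
    changed = True
    while changed:                     # naive transitive closure
        changed = False
        for b in inside:
            for a in list(inside[b]):
                new = inside[a] - inside[b]
                if new:
                    inside[b] |= new
                    changed = True
    return inside

# Toy signature: types may occur inside terms, but not the other way around.
closure = subordination_closure([("ty", "tm")])
print(closure["tm"])   # types can appear inside terms: {'ty'}
print(closure["ty"])   # nothing appears inside types: set()
```

    Because `closure["ty"]` is empty, a context assumption about a term variable could be strengthened away from any goal that only concerns types.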

    A Two-Level Logic Approach to Reasoning about Typed Specification Languages

    The two-level logic approach (2LL) to reasoning about computational specifications, as implemented by the Abella theorem prover, represents derivations of a specification language as an inductive definition in a reasoning logic. This approach has traditionally been formulated with the specification and reasoning logics having the same type system, and only the formulas being translated. However, requiring identical type systems limits the approach in two important ways: (1) every change in the specification language's type system requires a corresponding change in that of the reasoning logic, and (2) the same reasoning logic cannot be used with two specification languages at once if they have incompatible type systems. We propose a technique based on adequate encodings of the types and judgements of a typed specification language in terms of a simply typed higher-order logic program, which is then used for reasoning about the specification language in the usual 2LL. Moreover, a single specification logic implementation can be used as a basis for a number of other specification languages just by varying the encoding. We illustrate our technique with an implementation of the LF dependent type theory as a new specification language for Abella, co-existing with its current simply typed higher-order hereditary Harrop specification logic, without modifying the type system of its reasoning logic.

    Formal verification of automotive embedded UML designs

    Software applications are increasingly dominating safety-critical domains, i.e., domains where the failure of an application could impact human lives. Software safety was overlooked for quite some time, but more attention is now directed to this area due to the exponential growth of embedded software applications. Software systems have continuously faced challenges in managing the complexity associated with functional growth, the flexibility of systems so that they can be easily modified, the scalability of solutions across several product lines, the quality and reliability of systems, and finally the ability to detect defects early in the design phases. AUTOSAR was established to develop open standards that address these challenges. ISO-26262, the automotive functional safety standard, aims to ensure the functional safety of automotive systems by providing requirements and processes that govern the software lifecycle. Each functional system is classified in terms of safety goals, risks, and an Automotive Safety Integrity Level (ASIL: A, B, C, or D), with ASIL D denoting the most stringent level. As the risk of a system increases, its ASIL increases, and the standard mandates more stringent methods to ensure safety. ISO-26262 mandates that systems classified ASIL C or D be verified at the software unit design and implementation level using walkthroughs, semi-formal verification, inspection, control flow analysis, data flow analysis, static code analysis, and semantic code analysis. Ensuring software specification compliance via formal methods has remained an academic endeavor for quite some time. Several factors discourage the adoption of formal methods in industry; one major factor is their complexity of use.
    Software specification compliance in the automotive domain remains, for the most part, heavily dependent on traceability matrices, human reviews, and testing activities conducted either on actual production software or at the simulation level. ISO-26262 recommends, although not strongly, using formal notations in automotive systems that exhibit high risk in case of failure, yet the industry still relies heavily on semi-formal notations such as UML. The use of semi-formal notations keeps specification compliance dependent on manual processes and testing effort. In this research, we propose a framework in which UML finite state machines are compiled into formal notations, specification requirements are mapped into formal model theorems, and SAT/SMT solvers are utilized to validate implementation compliance with the specification. The framework allows semi-formal verification of AUTOSAR UML designs via an automated formal framework backbone, enabling automotive software to comply with the ISO-26262 ASIL C and D unit design and implementation formal verification guideline. Semi-formal UML finite state machines are automatically compiled into formal notations based on the Symbolic Analysis Laboratory formal notation. Requirements are captured in the UML design and compiled automatically into theorems. Model checkers are run against the compiled formal model and theorems to detect counterexamples that violate the requirements in the UML model. Semi-formal verification of the design allows us to uncover issues that were previously detected only in the testing and production stages. The methodology is applied to several automotive systems to show how the framework automates the verification of UML-based designs, the de facto standard for automotive systems design, via an implicit formal methodology, while hiding the drawbacks that discouraged the industry from adopting formal methods. Additionally, the framework automates the ISO-26262 system design verification guideline, which would otherwise be carried out via error-prone manual approaches.
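    The core loop of the approach, flattening a state machine into a transition relation and searching for a counterexample trace that violates a requirement, can be sketched in miniature. This is only an illustration of the counterexample idea; the toy airbag controller, its states, and its events are invented, and an exhaustive search stands in for the SAT/SMT-backed model checker.

```python
# Minimal sketch of counterexample-based checking of a finite state machine.
# A requirement is phrased as a safety property ("no bad state is
# reachable"); BFS either returns a violating trace or reports None.

from collections import deque

def find_counterexample(initial, transitions, is_bad):
    """BFS for a trace from `initial` to a state violating the requirement."""
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, trace = queue.popleft()
        if is_bad(state):
            return trace                  # requirement violated: here is why
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, trace + [dst]))
    return None                           # requirement holds

# Toy airbag controller (invented for illustration).
transitions = {
    ("idle",  "crash_sensed"): "armed",
    ("armed", "deploy"):       "fired",
    ("armed", "reset"):        "idle",
}

# Requirement: "fired" must be unreachable. In the full model it is
# reachable (via arming), and the checker produces the witnessing trace.
print(find_counterexample("idle", transitions, lambda s: s == "fired"))
# -> ['idle', 'armed', 'fired']

# With the arming transition removed, the property holds.
safe_model = {k: v for k, v in transitions.items()
              if k != ("idle", "crash_sensed")}
print(find_counterexample("idle", safe_model, lambda s: s == "fired"))
# -> None
```

    A real SAT/SMT encoding replaces the explicit search with a symbolic unrolling of the same transition relation, but the verdicts (a trace, or a proof of absence) play the same role.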

    Stable Big Bang formation for Einstein's equations: The complete sub-critical regime

    For $(t,x) \in (0,\infty)\times\mathbb{T}^D$, the generalized Kasner solutions are a family of explicit solutions to various Einstein-matter systems that start out smooth but then develop a Big Bang singularity as $t \downarrow 0$, i.e., curvature blowup along a spacelike hypersurface. The family is parameterized by the Kasner exponents $\widetilde{q}_1,\cdots,\widetilde{q}_D \in \mathbb{R}$, which satisfy two algebraic constraints. There are heuristics in the mathematical physics literature, going back more than 50 years, suggesting that the Big Bang formation should be stable under perturbations of the Kasner initial data, given say at $\{t = 1\}$, as long as the exponents are "sub-critical" in the following sense: $\max_{\substack{I,J,B=1,\cdots,D \\ I < J}} \{\widetilde{q}_I+\widetilde{q}_J-\widetilde{q}_B\} < 1$. Previous works have shown the stability of the singularity under stronger assumptions: 1) the Einstein-scalar field system with $D = 3$ and $\widetilde{q}_1 \approx \widetilde{q}_2 \approx \widetilde{q}_3 \approx 1/3$, or 2) the Einstein-vacuum equations for $D \geq 39$ with $\max_{I=1,\cdots,D} |\widetilde{q}_I| < 1/6$. We prove that the Kasner singularity is dynamically stable for \emph{all} sub-critical Kasner exponents, thereby justifying the heuristics in the full regime where stable monotonic-type curvature blowup is expected. We treat the $1+D$-dimensional Einstein-scalar field system for $D \geq 3$ and the $1+D$-dimensional Einstein-vacuum equations for $D \geq 10$. Moreover, for the Einstein-vacuum equations in $1+3$ dimensions, where instabilities are in general expected, we prove that all singular Kasner solutions have stable Big Bangs under polarized $U(1)$-symmetric perturbations of their initial data. Our results hold for open sets of initial data in Sobolev spaces without symmetry, apart from our work on polarized $U(1)$-symmetric solutions. Comment: 61 pages
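    The sub-criticality condition quoted in the abstract is easy to evaluate numerically. The sketch below checks it for two sets of exponents: the isotropic scalar-field exponents $(1/3,1/3,1/3)$ mentioned in the abstract, and one illustrative vacuum Kasner triple satisfying the standard vacuum constraints $\sum_i q_i = \sum_i q_i^2 = 1$ (a well-known fact, not stated in the abstract).

```python
# Numerical illustration of the sub-criticality condition:
#     max over I < J and all B of (q_I + q_J - q_B) < 1.

from itertools import combinations

def subcriticality(q):
    """Largest value of q_I + q_J - q_B over all pairs I < J and all B."""
    return max(q[i] + q[j] - q[b]
               for i, j in combinations(range(len(q)), 2)
               for b in range(len(q)))

# Isotropic scalar-field exponents: comfortably sub-critical.
print(subcriticality([1/3, 1/3, 1/3]))   # ~ 1/3 < 1

# A D = 3 vacuum Kasner triple (sums to 1, squares sum to 1): the
# condition fails, consistent with instabilities being expected in
# 1+3 vacuum.
print(subcriticality([2/3, 2/3, -1/3]))  # ~ 5/3 >= 1
```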

    Mechanizing type environments in weak HOAS

    We provide a paradigmatic case study of the formalization of System F<:'s type language in the proof assistant Coq. Our approach relies on weak HOAS, for the sake of producing a readable and concise representation of the object language. We present and discuss two encoding strategies for typing environments, which have a remarkable influence on the whole formalization. On the one hand we develop System F<:'s metatheory; on the other, we address the equivalence of the two approaches internally to Coq.

    AI Hilbert: A New Paradigm for Scientific Discovery by Unifying Data and Background Knowledge

    The discovery of scientific formulae that parsimoniously explain natural phenomena and align with existing background theory is a key goal in science. Historically, scientists have derived natural laws by manipulating equations based on existing knowledge, forming new equations, and verifying them experimentally. In recent years, data-driven scientific discovery has emerged as a viable competitor in settings with large amounts of experimental data. Unfortunately, data-driven methods often fail to discover valid laws when data is noisy or scarce. Accordingly, recent works combine regression and reasoning to eliminate formulae inconsistent with background theory. However, the problem of searching over the space of formulae consistent with background theory to find one that best fits the data is not well solved. We propose a solution to this problem when all axioms and scientific laws are expressible via polynomial equalities and inequalities, and argue that our approach is widely applicable. We further model notions of minimal complexity using binary variables and logical constraints, solve polynomial optimization problems via mixed-integer linear or semidefinite optimization, and prove the validity of our scientific discoveries in a principled manner using Positivstellensatz certificates. Remarkably, the optimization techniques leveraged in this paper allow our approach to run in polynomial time with fully correct background theory, or in non-deterministic polynomial (NP) time with partially correct background theory. We demonstrate that some famous scientific laws, including Kepler's Third Law of Planetary Motion, the Hagen-Poiseuille Equation, and the Radiated Gravitational Wave Power equation, can be derived in a principled manner from background axioms and experimental data. Comment: Slightly revised from version 1, in particular polished the figure
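    One of the showcase results, Kepler's Third Law, can be sanity-checked directly against standard published orbital data: the polynomial relation $T^2 - k\,a^3 = 0$ holds with a single constant $k$ across the planets. The sketch below is only a data-side illustration; the simple ratio check stands in for the paper's polynomial optimization and certificate machinery.

```python
# Check that T^2 / a^3 is (nearly) constant across planets, using standard
# published values: semi-major axis a in AU, orbital period T in years.

planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
for name, r in ratios.items():
    print(f"{name:8s} T^2/a^3 = {r:.4f}")

# All ratios agree to within a fraction of a percent, so the data are
# consistent with T^2 = k * a^3 for k = 1 in these units.
spread = max(ratios.values()) - min(ratios.values())
assert spread < 0.01
```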

    Abella: A System for Reasoning about Relational Specifications

    The Abella interactive theorem prover is based on an intuitionistic logic that allows for inductive and co-inductive reasoning over relations. Abella supports the λ-tree approach to treating syntax containing binders: it allows simply typed λ-terms to be used to represent such syntax, and it provides higher-order (pattern) unification, the ∇ quantifier, and nominal constants for reasoning about these representations. As such, it is a suitable vehicle for formalizing the meta-theory of formal systems such as logics and programming languages. This tutorial exposes Abella incrementally, starting with its capabilities at a first-order logic level and gradually presenting more sophisticated features, ending with the support it offers to the two-level logic approach to meta-theoretic reasoning. Along the way, we show how Abella can be used to prove theorems involving natural numbers, lists, and automata, as well as typed and untyped λ-calculi and the π-calculus.