
    Balancing lists: a proof pearl

    Starting from an algorithm that turns lists into full trees and relies on non-obvious invariants and partial functions, we progressively encode the invariants in the types of the data, removing most of the burden of a correctness proof. The invariants are encoded using non-uniform inductive types that parallel numerical representations in a style advertised by Okasaki, together with a small amount of dependent typing. (Comment: to appear in the proceedings of Interactive Theorem Proving, 2014.)
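
    To give the flavour of the technique (a toy Haskell sketch of mine, not the paper's development): a non-uniform inductive type can parallel binary numbers in Okasaki's style, so that the data stored at digit position i necessarily consists of perfect pairs of depth i, and fullness needs no separate invariant.

        -- Sketch of an Okasaki-style numerical representation: the type
        -- nests, so each digit level stores perfectly paired data.
        data Seq a = Nil                    -- the number 0
                   | Zero (Seq (a, a))      -- binary digit 0
                   | One a (Seq (a, a))     -- binary digit 1

        -- Insertion mirrors binary increment; the "carry" pairs two
        -- elements and recurses one level deeper.
        cons :: a -> Seq a -> Seq a
        cons x Nil        = One x Nil
        cons x (Zero ps)  = One x ps
        cons x (One y ps) = Zero (cons (x, y) ps)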

    Certified Symbolic Manipulation: Bivariate Simplicial Polynomials

    Certified symbolic manipulation is an emerging field where programs are accompanied by certificates that, suitably interpreted, ensure the correctness of the algorithms. In this paper, we focus on algebraic algorithms implemented in the proof assistant ACL2, which allows us to verify correctness in the same programming environment. The case study is that of bivariate simplicial polynomials, a data structure used to aid proofs of properties in simplicial topology. Simplicial polynomials can be computationally interpreted in two ways. As symbolic expressions, they can be handled algorithmically, increasing the automation in ACL2 proofs. As representations of functional operators, they help prove properties of categorical morphisms. As an application of this second view, we present the definition in ACL2 of some morphisms involved in the Eilenberg-Zilber reduction, a central part of the Kenzo computer algebra system. We have proved the ACL2 implementations correct and tested that they produce the same results as Kenzo does. (Ministerio de Ciencia e Innovación MTM2009-13842; Unión Europea nr. 243847, ForMath.)
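
    The symbolic-expression view rests on a canonical form: once polynomials are normalized, deciding equality is plain structural comparison, which is what makes algorithmic handling feasible. A hypothetical Haskell miniature (not the ACL2 encoding) of normalized bivariate polynomials:

        import qualified Data.Map.Strict as M

        -- The pair (i, j) stands for the monomial x^i * y^j.
        type Poly = M.Map (Int, Int) Integer

        -- Keeping only non-zero coefficients gives a canonical form:
        -- two polynomials are equal iff their maps are equal.
        norm :: Poly -> Poly
        norm = M.filter (/= 0)

        padd :: Poly -> Poly -> Poly
        padd p q = norm (M.unionWith (+) p q)

        pmul :: Poly -> Poly -> Poly
        pmul p q = norm (M.fromListWith (+)
          [ ((i + i', j + j'), c * c')
          | ((i, j), c)   <- M.toList p
          , ((i', j'), c') <- M.toList q ])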

    Verifying correctness of persistent concurrent data structures: a sound and complete method

    Non-volatile memory (NVM), aka persistent memory, is a new memory paradigm that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of persistent concurrent data structures, together with associated notions of correctness. In this paper, we present a formal proof technique for durable linearizability, a correctness criterion that extends linearizability to handle crashes and recovery in the context of NVM. Our proofs are based on refinement of Input/Output automata (IOA) representations of concurrent data structures. To this end, we develop a generic procedure for transforming any standard sequential data structure into a durable specification and prove that this transformation is both sound and complete. Since the durable specification only exhibits durably linearizable behaviours, it serves as the abstract specification in our refinement proof. We exemplify our technique on a recently proposed persistent-memory queue that builds on Michael and Scott's lock-free queue. To support the proofs, we describe an automated translation procedure from code to IOA and a thread-local proof technique for verifying correctness of invariants.
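
    The intuition behind the durable specification can be caricatured in a few lines (a toy model of mine, not the paper's IOA construction): state survives a crash because memory is non-volatile, but an operation in flight at the crash may or may not have taken effect, so both outcomes are legal.

        -- A sequential queue as a list; operations are state transformers.
        enq :: a -> [a] -> [a]
        enq x q = q ++ [x]

        deq :: [a] -> [a]
        deq = drop 1

        -- If 'op' was in flight at the crash, the post-crash state is
        -- either the old state (the effect never persisted) or the new
        -- one (it did); a durably linearizable implementation must
        -- justify its post-crash state as one of the two.
        crashDuring :: ([a] -> [a]) -> [a] -> [[a]]
        crashDuring op q = [q, op q]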

    A Graph Rewriting Approach for Transformational Design of Digital Systems

    Transformational design integrates design and verification. It combines “correctness by construction” and design creativity through the use of pre-proven, behaviour-preserving transformations as design steps. The formal aspects of this methodology are hidden in the transformations. A constraint is the availability of a design representation with a compositional formal semantics. Graph representations are useful design representations because they visualise design information. In this paper, graph rewriting theory, as developed in mathematics over the last twenty years, is shown to be a useful basis for a formal framework for transformational design. The semantic aspects of graphs, which are not part of graph rewriting theory, are incorporated through attributed graphs. The attribute algebra used, table algebra, is a relation algebra derived from database theory. The combination of graph rewriting, table algebra and transformational design is new.
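
    As a flavour of a behaviour-preserving design step (a deliberately tiny Haskell sketch with invented node labels, far simpler than rules proven against a compositional semantics): a design graph of labelled nodes can be rewritten locally, here replacing a multiply-by-two node by an equivalent shift.

        type NodeId = Int
        data Node = Node { label :: String, inputs :: [NodeId] }
          deriving (Show, Eq)
        type Design = [(NodeId, Node)]

        -- Local rule justified by the identity x * 2 == x << 1; the
        -- 'inputs' edges are untouched, so connectivity is preserved
        -- and only the node's own semantics needs an argument.
        mulToShift :: Design -> Design
        mulToShift = map step
          where
            step (n, Node "mul2" ins) = (n, Node "shl1" ins)
            step other                = other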

    Coinductive Formal Reasoning in Exact Real Arithmetic

    In this article we present a method for formally proving the correctness of the lazy algorithms for computing homographic and quadratic transformations (of which field operations are special cases) on a representation of real numbers by coinductive streams. The algorithms work on coinductive streams of Möbius maps and form the basis of the Edalat-Potts exact real arithmetic. We use the machinery of the Coq proof assistant for coinductive types to present the formalisation. The formalised algorithms are only partially productive, i.e., they do not output provably infinite streams for all possible inputs. We show how to deal with this partiality in the presence of the syntactic restrictions posed by the constructive type theory of Coq. Furthermore, we show that the type-theoretic techniques we develop are compatible with the semantics of the algorithms as continuous maps on real numbers. The resulting Coq formalisation is available for public download. (Comment: 40 pages.)
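
    To give the flavour of these lazy algorithms (a simplified Haskell sketch of mine, not the Coq development): take a real in [-1, 1] as an infinite stream of signed binary digits with x = (d + x') / 2, and a homographic transformation as an integer Möbius map (a*x + b) / (c*x + d). Absorbing an input digit refines the map; an output digit can be emitted only once the map's image interval is narrow enough, which is exactly where productivity can fail.

        data Digit = Mi | Ze | Pl     -- the signed digits -1, 0, +1

        -- (a*x + b) / (c*x + d) with integer coefficients.
        data Mobius = Mobius Integer Integer Integer Integer

        digitVal :: Digit -> Integer
        digitVal Mi = -1
        digitVal Ze = 0
        digitVal Pl = 1

        -- Substituting x = (k + y) / 2 into (a*x + b) / (c*x + d) yields
        -- (a*y + a*k + 2*b) / (c*y + c*k + 2*d): absorption refines
        -- the map without producing any output digit.
        absorb :: Mobius -> Digit -> Mobius
        absorb (Mobius a b c d) dig =
          let k = digitVal dig
          in Mobius a (a * k + 2 * b) c (c * k + 2 * d)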

    On the Use of Underspecified Data-Type Semantics for Type Safety in Low-Level Code

    In recent projects on operating-system verification, C and C++ data types are often formalized using a semantics that does not fully specify the precise byte encoding of objects. It is well known that such an underspecified data-type semantics can be used to detect certain kinds of type errors. In general, however, underspecified data-type semantics are unsound: they assign well-defined meaning to programs that have undefined behavior according to the C and C++ language standards. A precise characterization of the type-correctness properties that can be enforced with underspecified data-type semantics is still missing. In this paper, we identify strengths and weaknesses of underspecified data-type semantics for ensuring type safety of low-level systems code. We prove sufficient conditions to detect certain classes of type errors and, finally, identify a trade-off between the complexity of underspecified data-type semantics and their type-checking capabilities. (Comment: In Proceedings SSV 2012, arXiv:1211.587)
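
    One way to picture an underspecified data-type semantics (my Haskell illustration; the paper's setting is formal C/C++ semantics): make a program's meaning a function of an admissible but unknown byte encoding. A program whose result differs under two admissible encodings depends on unspecified representation details, which is the kind of error such a semantics can flag.

        import Data.Word (Word8)
        import Data.Bits (shiftR)

        type Encoding = Int -> [Word8]   -- an admissible byte layout

        littleEndian, bigEndian :: Encoding
        littleEndian n = [fromIntegral (n `shiftR` (8 * i)) | i <- [0 .. 3]]
        bigEndian      = reverse . littleEndian

        -- A "program" that inspects the raw bytes of an object.
        peekFirstByte :: Encoding -> Int -> Word8
        peekFirstByte enc = head . enc

        -- Differing results under two admissible encodings witness
        -- that the program leaks the unspecified representation.
        representationDependent :: Bool
        representationDependent =
          peekFirstByte littleEndian 1 /= peekFirstByte bigEndian 1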

    Affine functions and series with co-inductive real numbers

    We extend the work of A. Ciaffaglione and P. Di Gianantonio on mechanical verification of algorithms for exact computation on real numbers, using infinite streams of digits implemented as co-inductive types. Four aspects are studied. The first concerns the proof that digit streams can be related to the real numbers already axiomatized in the proof system (axiomatized, but with no fixed representation). The second revisits the definition of an addition function, looking at techniques to let the proof-search mechanism perform the effective construction of an algorithm that is correct by construction. The third concerns the definition of a function to compute affine formulas with positive rational coefficients; this should be understood as a testbed for a technique that combines co-recursion and recursion to model an algorithm that appears, at first sight, to be outside the expressive power of the proof system. The fourth concerns the definition of a function to compute series, with an application to the series used to compute Euler's number e. All these experiments should be reproducible in any proof system that supports co-inductive types, co-recursion and general forms of terminating recursion, but we performed them with the Coq system [12, 3, 14].
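
    The first aspect, relating digit streams to the axiomatic reals, boils down to a limit of rational approximants. A toy Haskell rendering of mine (with the usual signed binary digits; not the paper's Coq code):

        data Digit = Mi | Ze | Pl      -- -1, 0, +1; streams denote [-1, 1]

        -- Value of the first n digits of a lazy, infinite stream, reading
        -- x = (d + rest) / 2; the denoted real is the limit as n grows.
        approx :: Int -> [Digit] -> Rational
        approx 0 _        = 0
        approx n (d : ds) = (digitVal d + approx (n - 1) ds) / 2
          where
            digitVal Mi = -1
            digitVal Ze = 0
            digitVal Pl = 1
        approx _ []       = 0          -- unreachable for genuine streams

        -- Example: the constant stream of 1s denotes the real number 1,
        -- and approx n ones == 1 - 2^(-n).
        ones :: [Digit]
        ones = Pl : ones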

    Efficient algorithms for computing the Euler-Poincaré characteristic of symmetric semi-algebraic sets

    Let R be a real closed field and D ⊂ R an ordered domain. We consider the algorithmic problem of computing the generalized Euler-Poincaré characteristic of real algebraic as well as semi-algebraic subsets of R^k that are defined by symmetric polynomials with coefficients in D. We give algorithms for computing the generalized Euler-Poincaré characteristic of such sets whose complexities, measured by the number of arithmetic operations in D, are polynomially bounded in terms of k and the number of polynomials in the input, assuming that the degrees of the input polynomials are bounded by a constant. This is in contrast to the best complexity of the known algorithms for the same problems in the non-symmetric situation, which is singly exponential. This singly exponential complexity for the latter problem is unlikely to be improved, because of a hardness result (#P-hardness) coming from discrete complexity theory. (Comment: 29 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:1312.658)
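
    For orientation, in the simplest purely combinatorial setting the Euler-Poincaré characteristic is just an alternating sum of face counts; the following side sketch (my illustration, nothing like the paper's semi-algebraic algorithms) computes it for a finite simplicial complex given by its top faces.

        import Data.List (nub, subsequences)

        type Simplex = [Int]   -- a face as a sorted list of vertex ids

        -- Close the top faces under taking non-empty subsets.
        closure :: [Simplex] -> [Simplex]
        closure tops = nub [ f | s <- tops, f <- subsequences s, not (null f) ]

        -- chi = sum over faces f of (-1)^(dim f), where dim f = |f| - 1.
        euler :: [Simplex] -> Int
        euler tops = sum [ (-1) ^ (length f - 1) | f <- closure tops ]

        -- euler [[1,2],[2,3],[1,3]] == 0   (hollow triangle: a circle)
        -- euler [[1,2,3]]           == 1   (filled triangle: a disc)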