
    Refinement by interpretation in π-institutions

    The paper discusses the role of interpretations, understood as multifunctions that preserve and reflect logical consequence, as refinement witnesses in the general setting of π-institutions. This leads to a smooth generalization of the refinement-by-interpretation approach, recently introduced by the authors in more specific contexts. As a second, yet related, contribution, a basis is provided for building a refinement calculus of structured specifications in and across arbitrary π-institutions.
    Comment: In Proceedings Refine 2011, arXiv:1106.348
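
    A hedged illustration (not the paper's exact definitions): the preserve-and-reflect condition on an interpretation, written as a multifunction from sentences of one π-institution to sets of sentences of another.

```latex
% Sketch only: \vdash is consequence in the source pi-institution,
% \vdash' in the target; \alpha is the interpreting multifunction.
\[
  \alpha : \mathrm{Sen} \longrightarrow \mathcal{P}(\mathrm{Sen}'),
  \qquad
  \alpha[\Gamma] \;=\; \bigcup_{\gamma \in \Gamma} \alpha(\gamma)
\]
\[
  \Gamma \vdash \varphi
  \;\Longleftrightarrow\;
  \alpha[\Gamma] \vdash' \alpha(\varphi)
  \qquad \text{($\alpha$ preserves and reflects consequence)}
\]
```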

    On Role Logic

    We present role logic, a notation for describing properties of relational structures in shape analysis, databases, and knowledge bases. We construct role logic using the ideas of de Bruijn's notation for lambda calculus, an encoding of first-order logic in lambda calculus, and a simple rule for implicit arguments of unary and binary predicates. The unrestricted version of role logic has the expressive power of first-order logic with transitive closure. Using a syntactic restriction on role logic formulas, we identify a natural fragment RL^2 of role logic. We show that the RL^2 fragment has the same expressive power as two-variable logic with counting C^2 and is therefore decidable. We present a translation of an imperative language into the decidable fragment RL^2, which allows compositional verification of programs that manipulate relational structures. In addition, we show how RL^2 encodes Boolean shape analysis constraints and an expressive description logic.
    Comment: 20 pages. Our later SAS 2004 result builds on this work.
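
    As a hedged illustration (not an example from the paper), the following is the kind of shape-analysis constraint that lies in two-variable logic with counting C^2 and hence, by the expressiveness result above, in the decidable fragment RL^2.

```latex
% Illustrative only: functionality of a successor relation, expressible in C^2.
\[
  \forall x.\; \exists^{\le 1} y.\; \mathit{next}(x, y)
  \qquad \text{(every node has at most one \textit{next}-successor)}
\]
```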

    A Denotational Semantics for Communicating Unstructured Code

    An important property of a programming language semantics is that it should be compositional. However, unstructured low-level code contains goto-like commands, which makes it hard to define a compositional semantics. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This makes it possible to define a compositional denotational semantics based on least fixed points, allowing the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication.
    Comment: In Proceedings FESCA 2015, arXiv:1503.0437
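
    A minimal sketch of the flavour of such a semantics, not the construction in the paper: a toy unstructured language with jumps and a send primitive, where the meaning of a program point is the (possibly infinite) trace of communicated values. All names are hypothetical.

```haskell
import qualified Data.Map as M

type Var   = String
type Label = Int
type State = M.Map Var Int

data Instr
  = Assign Var Int        -- x := n
  | Send Int              -- emit a constant value on a channel
  | JmpIfZero Var Label   -- jump to the label if the variable is zero
  | Halt

type Program = M.Map Label Instr

-- Denotation of a program point: the trace of values sent from there.
-- Recursion through the control-flow graph stands in for the least
-- fixed point of a proper denotational definition.
run :: Program -> Label -> State -> [Int]
run prog pc st = case M.lookup pc prog of
  Nothing           -> []
  Just Halt         -> []
  Just (Assign x n) -> run prog (pc + 1) (M.insert x n st)
  Just (Send v)     -> v : run prog (pc + 1) st
  Just (JmpIfZero x l)
    | M.findWithDefault 0 x st == 0 -> run prog l st
    | otherwise                     -> run prog (pc + 1) st

-- e.g. run (M.fromList [(0, Send 1), (1, JmpIfZero "x" 0)]) 0 M.empty
-- yields the infinite trace 1, 1, 1, ...
```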

    Formal Verification of Security Protocol Implementations: A Survey

    Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements the protocol logic, rather than the libraries that implement cryptography. In these approaches, the libraries are assumed to correctly implement some model; the aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.

    A correspondence between rooted planar maps and normal planar lambda terms

    A rooted planar map is a connected graph embedded in the 2-sphere, with one edge marked and assigned an orientation. A term of the pure lambda calculus is said to be linear if every variable is used exactly once, normal if it contains no beta-redexes, and planar if it is linear and the use of variables moreover follows a deterministic stack discipline. We begin by showing that the sequence counting normal planar lambda terms by a natural notion of size coincides with the sequence (originally computed by Tutte) counting rooted planar maps by number of edges. Next, we explain how to apply the machinery of string diagrams to derive a graphical language for normal planar lambda terms, extracted from the semantics of linear lambda calculus in symmetric monoidal closed categories equipped with a linear reflexive object or a linear reflexive pair. Finally, our main result is a size-preserving bijection between rooted planar maps and normal planar lambda terms, which we establish by explaining how the Tutte decomposition of rooted planar maps (into vertex maps, maps with an isthmic root, and maps with a non-isthmic root) may be naturally replayed in linear lambda calculus, as certain surgeries on the string diagrams of normal planar lambda terms.
    Comment: Corrected title field in metadata.
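
    The counting sequence in question can be reproduced from Tutte's classical closed form for rooted planar maps with n edges, 2·3^n·(2n)!/(n!·(n+2)!); the formula is standard and not specific to this paper.

```haskell
-- Tutte's closed form for the number of rooted planar maps with n edges.
factorial :: Integer -> Integer
factorial n = product [1 .. n]

rootedPlanarMaps :: Integer -> Integer
rootedPlanarMaps n =
  2 * 3 ^ n * factorial (2 * n) `div` (factorial n * factorial (n + 2))

-- First terms: 1, 2, 9, 54, 378, 2916, ...
main :: IO ()
main = print (map rootedPlanarMaps [0 .. 5])
```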

    Transforming data by calculation

    This paper addresses the foundations of data-model transformation. A catalog of data mappings is presented which includes abstraction and representation relations and associated constraints. These are justified in an algebraic style via the pointfree transform, a technique whereby predicates are lifted to binary relation terms (of the algebra of programming) in a two-level style encompassing both data and operations. This approach to data calculation, which also includes transformation of recursive data models into “flat” database schemes, is offered as an alternative to standard database design from abstract models. The calculus is also used to establish a link between the proposed transformational style and bidirectional lenses developed in the context of the classical view-update problem.
    Fundação para a Ciência e a Tecnologia (FCT)
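
    A hedged sketch of the abstraction/representation idea and its link to lenses (illustrative names, not the paper's pointfree notation): a very-well-behaved lens packages an abstraction (get) with a representation update (put), subject to the usual round-trip laws.

```haskell
-- Illustrative sketch only: a get/put pair as an abstraction/representation pair.
data Lens s v = Lens
  { get :: s -> v       -- abstraction: extract the view from the source
  , put :: s -> v -> s  -- representation update: write the view back
  }

-- Example: a flattened pair viewed through its first component.
fstLens :: Lens (a, b) a
fstLens = Lens { get = fst, put = \(_, b) a -> (a, b) }

-- Round-trip laws (GetPut and PutGet):
--   put s (get s) == s
--   get (put s v) == v
```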