    Building Abstractions

    The use of abstraction has been largely informal. As a consequence, it has often been difficult to see how or why a particular abstraction works. This paper attempts to help correct this trend by presenting a formal theory of abstraction. We use this theory to characterise the different types of abstraction that can be built; the different classes of abstractions we identify capture the majority of abstractions of which we are aware. We end by proposing a method for automatically building one very common type of abstraction, that used in ABSTRIPS; our proposal is motivated by consideration of the various formal properties that such a method should possess.
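
    As a rough illustration of the ABSTRIPS-style abstraction mentioned above (not the paper's formal theory or its method), the Python sketch below drops preconditions whose criticality falls below a threshold, yielding a coarser planning operator. The operator representation, names and criticality values are illustrative assumptions, not taken from the paper.

        # Minimal sketch of an ABSTRIPS-style abstraction: preconditions whose
        # criticality falls below a threshold are dropped from each operator,
        # yielding a coarser planning problem. All names here are illustrative.
        from dataclasses import dataclass, field


        @dataclass
        class Operator:
            name: str
            preconditions: dict          # literal -> criticality (higher = harder to achieve)
            effects: set = field(default_factory=set)


        def abstract_operator(op: Operator, threshold: int) -> Operator:
            """Keep only preconditions at or above the criticality threshold."""
            kept = {p: c for p, c in op.preconditions.items() if c >= threshold}
            return Operator(op.name, kept, op.effects)


        # Example: at threshold 2 the cheap-to-achieve precondition disappears.
        go = Operator("go(a,b)",
                      {"at(a)": 3, "path(a,b)": 2, "battery_charged": 1},
                      {"at(b)"})
        print(abstract_operator(go, threshold=2).preconditions)
        # {'at(a)': 3, 'path(a,b)': 2}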

    Symbolic Abstractions for Quantum Protocol Verification

    Quantum protocols such as the BB84 Quantum Key Distribution protocol exchange qubits to achieve information-theoretic security guarantees. Many variants thereof have been proposed, some of them already deployed. Existing security proofs in that field are mostly tedious, error-prone pen-and-paper proofs of the core protocol only, and they rarely account for other crucial components such as authentication. This calls for formal and automated verification techniques that exhaustively explore all possible intruder behaviors and that scale well. The symbolic approach offers rigorous, mathematical frameworks and automated tools to analyze security protocols. Based on well-designed abstractions, it has allowed for large-scale formal analyses of real-life protocols such as TLS 1.3 and mobile telephony protocols. Hence a natural question is: Can we use this successful line of work to analyze quantum protocols? This paper proposes a first positive answer and motivates further research on this unexplored path.
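
    To make concrete what such a protocol exchanges, here is a toy Python simulation of BB84 basis sifting with no eavesdropper. It only shows the kind of message exchange a symbolic model would have to abstract; it is not the paper's symbolic formalization, and all names are illustrative.

        # Toy BB84 basis sifting (no real qubits, no intruder): Alice and Bob
        # keep only the positions where they happened to pick the same basis.
        import random

        n = 16
        alice_bits = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice("XZ") for _ in range(n)]
        bob_bases = [random.choice("XZ") for _ in range(n)]

        # Without an eavesdropper, matching bases give Bob exactly Alice's bit;
        # mismatched bases give a uniformly random outcome.
        bob_bits = [b if ab == bb else random.randint(0, 1)
                    for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

        # Sifting: the bases are compared publicly, agreeing positions are kept.
        sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                      if ab == bb]
        print("sifted key:", sifted_key)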

    Computational Soundness for Dalvik Bytecode

    Automatically analyzing information flow within Android applications that rely on cryptographic operations, with their computational security guarantees, imposes formidable challenges that existing approaches for understanding an app's behavior struggle to meet. These approaches do not distinguish cryptographic from non-cryptographic operations, and hence do not account for cryptographic protections: f(m) is considered sensitive for a sensitive message m irrespective of potential secrecy properties offered by a cryptographic operation f. These approaches consequently provide a safe approximation of the app's behavior, but they mistakenly classify a large fraction of apps as potentially insecure, yielding overly pessimistic results. In this paper, we show how cryptographic operations can be faithfully included into existing approaches for automated app analysis. To this end, we first show how cryptographic operations can be expressed as symbolic abstractions within the comprehensive Dalvik bytecode language. These abstractions are accessible to automated analysis, and they can be conveniently added to existing app analysis tools using minor changes in their semantics. Second, we show that our abstractions are faithful by providing the first computational soundness result for Dalvik bytecode, i.e., the absence of attacks against our symbolically abstracted program entails the absence of any attacks against a suitable cryptographic program realization. We cast our computational soundness result in the CoSP framework, which makes the result modular and composable. Comment: Technical report for the ACM CCS 2016 conference paper.
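
    The key observation, that f(m) need not be treated as sensitive when f is an encryption under a key the adversary cannot derive, can be pictured with a toy Python check. This is an assumption-laden sketch, not the paper's Dalvik semantics, its symbolic abstractions, or the CoSP framework; all names are made up for the illustration.

        # Toy sketch: a crypto-aware flow analysis can treat Enc(m, k) as
        # non-sensitive when the key k is not derivable by the adversary,
        # whereas a crypto-agnostic analysis would flag any f(m) on sensitive m.
        from dataclasses import dataclass


        @dataclass(frozen=True)
        class Enc:
            plaintext: object
            key: str


        SENSITIVE = {"location", "contacts"}      # illustrative sensitive sources
        ADVERSARY_KNOWS = {"k_public"}            # keys assumed derivable by the attacker


        def is_sensitive(term) -> bool:
            if isinstance(term, Enc):
                # A ciphertext only leaks if the adversary can derive its key.
                return term.key in ADVERSARY_KNOWS and is_sensitive(term.plaintext)
            return term in SENSITIVE


        print(is_sensitive("location"))                   # True  (raw secret)
        print(is_sensitive(Enc("location", "k_secret")))  # False (protected)
        print(is_sensitive(Enc("location", "k_public")))  # True  (key known)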

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved then the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results could be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
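
    The property being certified can be illustrated directly in Python: a correct sort relates two lists carrying the same multiset of items, while a duplicate-removing sort only preserves the underlying set. The check below is dynamic; the paper's contribution is inferring such properties statically from Horn clause programs, which this sketch does not attempt.

        # Dynamic check of the collection property discussed above.
        from collections import Counter


        def same_multiset(t, s) -> bool:
            """True when s is a permutation of t (same items, same multiplicities)."""
            return Counter(t) == Counter(s)


        t = [3, 1, 2, 1]
        print(same_multiset(t, sorted(t)))        # True: sorting preserves the multiset
        print(same_multiset(t, sorted(set(t))))   # False once duplicates are removed
        print(set(t) == set(sorted(set(t))))      # but the underlying sets still agree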

    Generating Bijections between HOAS and the Natural Numbers

    A provably correct bijection between higher-order abstract syntax (HOAS) and the natural numbers enables one to define a "not equals" relationship between terms and also to have an adequate encoding of sets of terms, and maps from one term family to another. Sets and maps are useful in many situations and are preferably provided in a library of some sort. I have released a map and set library for use with Twelf which can be used with any type for which a bijection to the natural numbers exists. Since creating such bijections is tedious and error-prone, I have created a "bijection generator" that generates such bijections automatically together with proofs of correctness, all in the context of Twelf. Comment: In Proceedings LFMTP 2010, arXiv:1009.218
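
    The released library targets Twelf and HOAS; as a language-neutral illustration of the kind of bijection being generated (assuming a much simpler, first-order term family), the Python sketch below encodes terms into the natural numbers via Cantor pairing and decodes them back. The term constructors and names are illustrative, not the paper's.

        # A bijection between a tiny term family (variables and application
        # nodes) and the natural numbers: even codes are variables, odd codes
        # are applications whose children are recovered by Cantor unpairing.
        from dataclasses import dataclass


        @dataclass(frozen=True)
        class Var:
            index: int                   # Var(n) for a natural number n


        @dataclass(frozen=True)
        class App:
            left: object
            right: object


        def pair(x: int, y: int) -> int:
            """Cantor pairing: a bijection from N x N to N."""
            return (x + y) * (x + y + 1) // 2 + y


        def unpair(z: int) -> tuple:
            w = 0
            while (w + 1) * (w + 2) // 2 <= z:
                w += 1
            y = z - w * (w + 1) // 2
            return w - y, y


        def encode(t) -> int:
            if isinstance(t, Var):
                return 2 * t.index                                   # even: variable
            return 2 * pair(encode(t.left), encode(t.right)) + 1     # odd: application


        def decode(n):
            if n % 2 == 0:
                return Var(n // 2)
            x, y = unpair(n // 2)
            return App(decode(x), decode(y))


        t = App(Var(2), App(Var(0), Var(1)))
        assert decode(encode(t)) == t
        print(encode(t))                 # 263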

    Nominal Logic Programming

    Nominal logic is an extension of first-order logic which provides a simple foundation for formalizing and reasoning about abstract syntax modulo consistent renaming of bound names (that is, alpha-equivalence). This article investigates logic programming based on nominal logic. We describe some typical nominal logic programs, and develop the model-theoretic, proof-theoretic, and operational semantics of such programs. Besides being of interest for ensuring the correct behavior of implementations, these results provide a rigorous foundation for techniques for analysis and reasoning about nominal logic programs, as we illustrate via examples. Comment: 46 pages; 19-page appendix; 13 figures. Revised journal submission as of July 23, 200
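
    As a small, self-contained illustration of "abstract syntax modulo consistent renaming of bound names", the Python sketch below checks alpha-equivalence of lambda terms by comparing nameless (de Bruijn-style) views. Nominal logic instead handles names and freshness directly in the logic; this sketch illustrates the notion, not the paper's semantics, and its names are made up.

        # Alpha-equivalence: two terms are identified up to consistent renaming
        # of bound names. Checked here by translating to a nameless view.
        from dataclasses import dataclass


        @dataclass(frozen=True)
        class Var:
            name: str


        @dataclass(frozen=True)
        class Lam:
            name: str
            body: object


        @dataclass(frozen=True)
        class App:
            fun: object
            arg: object


        def nameless(term, bound=()):
            """Bound variables become indices; free variables keep their names."""
            if isinstance(term, Var):
                return bound.index(term.name) if term.name in bound else term.name
            if isinstance(term, Lam):
                return ("lam", nameless(term.body, (term.name,) + bound))
            return ("app", nameless(term.fun, bound), nameless(term.arg, bound))


        def alpha_eq(t, u) -> bool:
            return nameless(t) == nameless(u)


        print(alpha_eq(Lam("x", Var("x")), Lam("y", Var("y"))))   # True
        print(alpha_eq(Lam("x", Var("z")), Lam("y", Var("y"))))   # False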

    When are Stochastic Transition Systems Tameable?

    A decade ago, Abdulla, Ben Henda and Mayr introduced the elegant concept of decisiveness for denumerable Markov chains [1]. Roughly speaking, decisiveness allows one to lift most good properties from finite Markov chains to denumerable ones, and therefore to adapt existing verification algorithms to infinite-state models. Decisive Markov chains, however, do not encompass stochastic real-time systems, and general stochastic transition systems (STSs for short) are needed. In this article, we provide a framework to perform both the qualitative and the quantitative analysis of STSs. First, we define various notions of decisiveness (inherited from [1]), notions of fairness and of attractors for STSs, and make explicit the relationships between them. Then, we define a notion of abstraction, together with natural concepts of soundness and completeness, and we give general transfer properties, which will be central to several verification algorithms on STSs. We further design a generic construction which will be useful for the analysis of ω-regular properties, when a finite attractor exists, either in the system (if it is denumerable), or in a sound denumerable abstraction of the system. We next provide algorithms for qualitative model-checking, and generic approximation procedures for quantitative model-checking. Finally, we instantiate our framework with stochastic timed automata (STA), generalized semi-Markov processes (GSMPs) and stochastic time Petri nets (STPNs), three models combining dense time and probabilities. This allows us to derive decidability and approximability results for the verification of these models. Some of these results were known from the literature, but our generic approach permits us to view them in a unified framework and to obtain them with less effort. We also derive interesting new approximability results for STA, GSMPs and STPNs. Comment: 77 pages
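
    As a toy illustration of the kind of approximation scheme that decisiveness enables (unfold the chain step by step and stop once the undecided probability mass is small), the Python sketch below bounds a reachability probability in a simple random walk. The chain, function names and parameters are illustrative; the paper's procedures apply to general stochastic transition systems, which this sketch does not model.

        # Approximate quantitative reachability by unfolding: mass that has
        # reached the target is counted, mass stuck in "dead" states (which can
        # never reach the target) is discarded, and the loop stops once the
        # remaining undecided mass drops below epsilon.
        def approx_reach_probability(transitions, start, target, dead, eps=1e-6):
            """Return lower and upper bounds on P(reach target) from start."""
            dist = {start: 1.0}
            reached = 0.0
            while sum(dist.values()) > eps:
                nxt = {}
                for state, p in dist.items():
                    for succ, q in transitions[state]:
                        if succ in target:
                            reached += p * q
                        elif succ in dead:
                            pass                  # mass that can never reach the target
                        else:
                            nxt[succ] = nxt.get(succ, 0.0) + p * q
                dist = nxt
            return reached, reached + sum(dist.values())


        # Symmetric random walk on 0..4: state 0 is the target, state 4 is dead.
        walk = {i: [(i - 1, 0.5), (i + 1, 0.5)] for i in range(1, 4)}
        low, high = approx_reach_probability(walk, start=2, target={0}, dead={4})
        print(low, high)   # both close to 0.5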

    Multi-level Contextual Type Theory

    Contextual type theory distinguishes between bound variables and meta-variables to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables, and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable bi-directional type system which characterizes beta-eta-normal forms together with a generalized substitution operation. Comment: In Proceedings LFMTP 2011, arXiv:1110.668
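
    The unifying move described above, collapsing bound variables, meta-variables, meta^2-variables and so on into a single notion of variable indexed by a level k, can be pictured with a very rough Python sketch: terms carry level-indexed variables, and substitution instantiates variables of one chosen level only. This is purely illustrative (it ignores contexts, dependency and capture) and is not the paper's type theory or Beluga.

        # One variable constructor indexed by a level k: level 0 plays the role
        # of bound variables, level 1 of meta-variables, level 2 of
        # meta^2-variables, and so on. Substitution targets a single level.
        from dataclasses import dataclass


        @dataclass(frozen=True)
        class Var:
            level: int
            name: str


        @dataclass(frozen=True)
        class App:
            fun: object
            arg: object


        def subst(term, level, mapping):
            """Replace level-`level` variables according to mapping (name -> term)."""
            if isinstance(term, Var):
                if term.level == level and term.name in mapping:
                    return mapping[term.name]
                return term
            return App(subst(term.fun, level, mapping), subst(term.arg, level, mapping))


        # A meta-variable X (level 1) applied to a bound variable x (level 0):
        t = App(Var(1, "X"), Var(0, "x"))
        # Instantiating the meta-variable leaves the bound variable untouched.
        print(subst(t, 1, {"X": Var(0, "y")}))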