
    Reusing constraint proofs in symbolic analysis

    Symbolic analysis is an important element of program verification and automatic testing. Symbolic analysis techniques abstract program properties as expressions over symbolic input values to characterise the program's logical constraints, and rely on Satisfiability Modulo Theories (SMT) solvers to check the satisfiability of the constraint expressions and verify the corresponding program properties. Despite the impressive improvements of constraint solving and the availability of mature solvers, constraint solving still represents a major bottleneck for efficient and scalable symbolic program analysis. The work on the SMT bottleneck proceeds along two main research lines: (i) optimisation approaches that assist and complement the solvers in the context of the program analysis in various ways, and (ii) reuse approaches that reduce the invocations of constraint solvers by reusing proofs of constraints already solved during symbolic analysis.

    This thesis contributes to the research on reuse approaches with REusing-Constraint-proofs-in-symbolic-AnaLysis (ReCal), a new approach for reusing proofs across constraints that recur during analysis. ReCal advances over state-of-the-art approaches for reusing constraints by (i) proposing a novel canonical form to efficiently store and retrieve equivalent and related-by-implication constraints, and (ii) defining a parallel framework for GPU-based platforms to optimise the storage and retrieval of constraints and reusable proofs.

    Equivalent constraints vary widely due to program-specific details. This thesis defines a canonical form of constraints in the context of symbolic analysis, and develops an original canonicalisation algorithm to generate the canonical form. The canonical form turns the complex problem of deciding the equivalence of two constraints into the simple problem of comparing their canonical forms for equality, thus enabling efficient detection of recurring constraints during symbolic analysis.

    Constraints can become extremely large when analysing complex systems, and handling large constraints may introduce heavy overhead, thus harming the scalability of proof-reusing approaches. The ReCal parallel framework largely improves both the performance and the scalability of proof reuse by exploiting Graphics Processing Unit (GPU) platforms, which provide thousands of computing units working in parallel. The parallel framework ReCal-gpu achieves a 10-times speed-up in constraint solving during symbolic execution of various programs.
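
    To make the canonical-form idea concrete, here is a minimal, hedged sketch of how canonicalisation can turn equivalence checking into a dictionary lookup. The clause representation, the renaming scheme, and the proof cache are illustrative assumptions for this example; the thesis's actual canonicalisation algorithm is considerably more involved (e.g., it must resolve ties between structurally identical clauses).

```python
# Sketch only: constraints as conjunctions of clauses, each clause a
# tuple like ("<=", "x", 5). Not the thesis's actual algorithm.

def canonicalise(clauses):
    """Map a constraint to a canonical key so that constraints that
    differ only in variable naming and clause order collide."""
    # Order clauses by shape (operator, constant), ignoring variable
    # names, so the renaming below is independent of input order.
    ordered = sorted(clauses, key=lambda cl: (cl[0], cl[2]))
    renaming, fresh = {}, 0
    canonical = []
    for op, var, const in ordered:
        if var not in renaming:          # rename by first occurrence
            renaming[var] = "v%d" % fresh
            fresh += 1
        canonical.append((op, renaming[var], const))
    return tuple(sorted(canonical))

# Two syntactically different but equivalent constraints...
c1 = [("<=", "x", 5), (">", "y", 0)]
c2 = [(">", "b", 0), ("<=", "a", 5)]
# ...get the same canonical form, so a cached proof for c1 can be
# reused for c2 by a plain dictionary lookup.
assert canonicalise(c1) == canonicalise(c2)

proof_cache = {canonicalise(c1): "sat, model {x: 0, y: 1}"}
print(proof_cache[canonicalise(c2)])     # reused proof, no solver call
```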

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    Automated incremental software verification

    Software continuously evolves to meet rapidly changing human needs. Each evolved transformation of a program is expected to preserve important correctness and security properties. Aiming to assure program correctness after a change, formal verification techniques, such as Software Model Checking, have recently benefited from fully automated solutions based on symbolic reasoning and abstraction. However, most state-of-the-art model checkers are designed so that each new software version has to be verified from scratch. In this dissertation, we investigate new Formal Incremental Verification (FIV) techniques that aim at making software analysis more efficient by reusing invested efforts between verification runs. In order to show that FIV can be built on top of different verification techniques, we focus on three complementary approaches to automated formal verification.

    First, we contribute a FIV technique for SAT-based Bounded Model Checking, developed to verify programs with (possibly recursive) functions with respect to a set of pre-defined assertions. We present a function-summarization framework based on Craig interpolation that allows extracting and reusing over-approximations of function behaviors. We introduce an algorithm that revalidates the summaries of one program locally in order to prevent re-verification of another program from scratch.

    Second, we contribute a technique for simulation-relation synthesis for loop-free programs that do not necessarily contain assertions. We introduce an SMT-based abstraction-refinement algorithm that proceeds by guessing a relation and checking whether it is a simulation relation. We present a novel algorithm for discovering simulations symbolically, by means of solving ∀∃-formulas and extracting witnessing Skolem relations.

    Third, we contribute a FIV technique for SMT-based Unbounded Model Checking, developed to verify programs with (possibly nested) loops. We present an algorithm that automatically derives simulations between programs with different loop structures. The automatically synthesized simulation relation is then used to migrate the safe inductive invariants across the evolution boundaries.

    Finally, we contribute the implementation and evaluation of all our algorithmic contributions, and confirm that state-of-the-art model checking tools can successfully be extended with FIV capabilities.
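
    As a rough illustration of the summary-revalidation idea, the sketch below uses an SMT solver to check whether a previously computed function summary still over-approximates a changed function body: if the implication "new body => old summary" is valid, the summary can be reused and re-verification skipped. The hand-written summary, the body encoding, and the use of Z3's Python API (pip install z3-solver) are assumptions made for this example; the dissertation derives summaries via Craig interpolation rather than writing them by hand.

```python
from z3 import Int, Implies, Not, Solver, unsat

x, ret = Int('x'), Int('ret')

# Summary extracted for the old version of f: an over-approximation
# of its input/output behavior, e.g. "the result is at least x".
summary = ret >= x

# Exact encoding of the *new* version's body: f(x) = x + 2.
new_body = ret == x + 2

# The summary remains valid for the new body iff
#   new_body => summary
# holds for all inputs, i.e. its negation is unsatisfiable.
s = Solver()
s.add(Not(Implies(new_body, summary)))
if s.check() == unsat:
    print("summary still valid: reuse it, skip re-verification")
else:
    print("summary no longer holds; counter-example:", s.model())
```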

    Programming with Specifications

    This thesis explores the use of specifications for the construction of correct programs. We go beyond their standard use as run-time assertions, and present algorithms, techniques and implementations for the tasks of 1) program verification, 2) declarative programming and 3) software synthesis. These results are made possible by our advances in the domains of decision procedure design and implementation.

    In the first part of this thesis, we present a decidability result for a class of logics that support user-defined recursive function definitions. Constraints in this class can encode expressive properties of recursive data structures, such as sortedness of a list, or balancing of a search tree. As a result, complex verification conditions can be stated concisely and solved entirely automatically. We also present a new decision procedure for a logic to reason about sets and constraints over their cardinalities. The key insight lies in a technique to decompose constraints according to mutual dependencies. Compared to previous techniques, our algorithm brings significant improvements in running times, and for the first time integrates reasoning about cardinalities within the popular DPLL(T) setting. We integrated our algorithmic advances into Leon, a static analyzer for functional programs. Leon can reason about constraints involving arbitrary recursive function definitions, and has the desirable theoretical property that it will always find counter-examples to assertions that do not hold. We illustrate the flexibility and efficiency of Leon through experimental evaluation, where we used it to prove detailed correctness properties of data structure implementations.

    We then illustrate how program specifications can be used as a high-level programming construct; we present Kaplan, an extension of Scala with first-class logical constraints. Kaplan allows programmers to create, manipulate and combine constraints as they would any other data structure. Our implementation of Kaplan illustrates how declarative programming can be incorporated into an existing mainstream programming language. Moreover, we examine techniques to transform, at compile-time, program specifications into efficient executable code. This approach of software synthesis combines the correctness benefits of declarative programming with the efficiency of imperative or functional programming.
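
    The sketch below is not Leon itself, but it illustrates the workflow the abstract describes: a structural property (sortedness) is encoded as an SMT constraint, a valid assertion is discharged automatically, and an invalid one yields a concrete counter-example. The bounded three-element encoding and the use of Z3's Python API are assumptions for this example; Leon handles unbounded recursive definitions directly.

```python
from z3 import Ints, And, Implies, Not, Solver, sat

a, b, c = Ints('a b c')
sorted3 = And(a <= b, b <= c)           # sortedness of the list [a, b, c]

# Valid claim: in a sorted list, the first element is minimal.
claim = Implies(sorted3, And(a <= b, a <= c))
s = Solver()
s.add(Not(claim))                       # search for a counter-example
assert s.check() != sat                 # none exists: the claim holds

# An invalid claim yields a concrete counter-example, mirroring the
# guarantee that Leon finds counter-examples to assertions that fail.
s2 = Solver()
s2.add(Not(Implies(sorted3, b < c)))    # "middle strictly below last"?
if s2.check() == sat:
    print("counter-example:", s2.model())   # e.g. a = b = c
```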

    High-Performance and Power-Aware Graph Processing on GPUs

    Graphs are a common representation in many problem domains, including engineering, finance, medicine, and scientific applications. Different problems map to very large graphs, often involving millions of vertices. Even though very efficient sequential implementations of graph algorithms exist, they become impractical when applied to such very large graphs. On the other hand, graphics processing units (GPUs) have become widespread architectures as they provide massive parallelism at low cost. Parallel execution on GPUs may achieve speedups of up to three orders of magnitude with respect to the sequential counterparts. Nevertheless, accelerating efficient and optimized sequential algorithms and porting (i.e., parallelizing) their implementations to such many-core architectures is a very challenging task. The task is made even harder since energy and power consumption are becoming constraints in addition, or in some cases as an alternative, to performance. This work aims at developing a platform that provides (i) a library of parallel, efficient, and tunable implementations of the most important graph algorithms for GPUs, and (ii) an advanced profiling model to analyze both the performance and the power consumption of the algorithm implementations. The platform goal is twofold. Through the library, it aims at saving development effort in the parallelization task through a primitive-based approach. Through the profiling framework, it aims at customizing such primitives by considering both the architectural details and the target efficiency metrics (i.e., performance or power).
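
    To give a flavour of the primitive-based approach, the sketch below expresses BFS around a single frontier-advance primitive, the pattern many GPU graph libraries build on: on a GPU, each frontier vertex would be processed by its own thread, with an atomic check on the visited set. The sequential Python host code and the function names are illustrative assumptions, not the platform's actual API.

```python
from collections import defaultdict

def bfs_levels(adj, source):
    """Level-synchronous BFS: repeatedly apply an 'advance' step that
    expands the current frontier by one hop."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # on a GPU: one thread per vertex
            for v in adj[u]:
                if v not in level:      # on a GPU: atomic visited check
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: []})
print(bfs_levels(adj, 0))               # {0: 0, 1: 1, 2: 1, 3: 2}
```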

    Grained integers and applications to cryptography

    To meet the requirements of the modern communication society, cryptographic techniques are of central importance. In modern cryptography, we try to build cryptographic primitives whose security can be reduced to solving a particular number-theoretic problem for which no fast algorithmic method is currently known. Thus, any advance in the understanding of the nature of such problems indirectly gives insight into the analysis of some of the most practical cryptographic techniques. In this work we analyze exactly this aspect much more deeply: how can we use some of the purely theoretical results in number theory to answer very practical questions on the security of widely used cryptographic algorithms, and how can we use such results in concrete implementations? While trying to answer these kinds of security-related questions, we always think two-fold: from a cryptographic, security-ensuring perspective and from a cryptanalytic one.

    After outlining the necessary analytic and algorithmic foundations of number theory, with a special focus on the historical development of these results, we first delve into the question of how point addition on certain elliptic curves can be done efficiently. The resulting formulas have their application in the cryptanalysis of cryptosystems that are insecure if factoring integers can be done efficiently. The rest of the thesis is devoted to the study of integers all of whose prime factors are neither too small nor too large. We show with the help of two applications how one can use the properties of such integers to answer very practical questions in the design and the analysis of cryptographic primitives: the optimization of a hardware realization of the cofactorization step of the General Number Field Sieve, and the analysis of different standardized key-generation algorithms.
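
    For reference, here is a sketch of the textbook affine point-addition formulas on a short Weierstrass curve y^2 = x^3 + a*x + b over GF(p). This is only the baseline; the thesis studies faster addition formulas on special curve shapes, which this sketch does not reproduce. The curve parameters and the example point are made up for illustration.

```python
def ec_add(P, Q, a, p):
    """Add affine points P and Q (None encodes the point at infinity).
    Requires Python 3.8+ for the modular inverse via pow(x, -1, p)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = point at infinity
    if P == Q:                          # doubling: slope of the tangent
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                               # addition: slope of the chord
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Tiny example on y^2 = x^3 + 2x + 3 over GF(97).
p, a = 97, 2
P = (3, 6)                              # 6^2 = 36 = 27 + 6 + 3 (mod 97)
print(ec_add(P, P, a, p))               # 2P = (80, 10)
```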