
    Checking Observational Purity of Procedures

    Verifying whether a procedure is observationally pure is useful in many software engineering scenarios. An observationally pure procedure always returns the same value for the same argument, and thus mimics a mathematical function. The problem is challenging when procedures use private mutable global variables, e.g., for memoization of frequently returned answers, and when they involve recursion. We present a novel verification approach for this problem. Our approach encodes the procedure's code as a formula that is a disjunction of path constraints, with the recursive calls replaced in the formula by references to a mathematical function symbol. A theorem prover is then invoked to check whether the constructed formula agrees with that function symbol in terms of input-output behavior for all arguments. We evaluate our approach on a set of realistic examples, using the Boogie intermediate language and theorem prover. Our evaluation shows that the invariants are easy to construct manually, and that our approach is effective at verifying observationally pure procedures. Comment: FASE 201

    Analysis of Software Patches Using Numerical Abstract Interpretation

    We present a static analysis for software patches. Given two syntactically close versions of a program, our analysis can infer a semantic difference, and prove that both programs compute the same outputs when run on the same inputs. Our method is based on abstract interpretation, and is parametric in the choice of an abstract domain. We focus on numeric properties only. Our method is able to deal with unbounded executions of infinite-state programs reading from infinite input streams; it is, however, limited to comparing terminating executions, ignoring non-terminating ones. We first present a novel concrete collecting semantics, expressing the behaviors of both programs at the same time. Then, we propose an abstraction of infinite input streams able to prove that programs that read from the same stream compute equal output values. We then show how to leverage classic numeric abstract domains, such as polyhedra or octagons, to build an effective static analysis. We also introduce a novel numeric domain to bound differences between the values of the variables in the two programs, which has linear cost and just the right amount of relationality to express useful properties of software patches. We implemented a prototype and experimented on a few small examples from the literature. Our prototype operates on a toy language, and assumes that a joint syntactic representation of the two program versions is given, distinguishing their common and distinctive parts.
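
To make the property being proved concrete, here is a small hypothetical patch in Python (ours, not from the paper, which works statically over abstract domains rather than by testing): a loop is replaced by a closed-form expression, and the analysis's goal is to prove the two versions compute equal outputs on equal inputs:

```python
def sum_to_v1(x: int) -> int:
    # original version: sum 0..x with a loop
    s = 0
    i = 0
    while i <= x:
        s += i
        i += 1
    return s

def sum_to_v2(x: int) -> int:
    # patched version: closed-form formula
    return x * (x + 1) // 2

# The static analysis would prove equality for ALL inputs; here we can
# only spot-check a finite sample, which is precisely why an abstract
# domain bounding the difference between the versions' variables is needed.
for x in range(0, 50):
    assert sum_to_v1(x) == sum_to_v2(x)
```

The paper's difference-bounding domain would, roughly, track an invariant such as "the two versions' return values differ by 0", at linear cost in the number of variables.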

    Tracelet-based code search in executables


    Modular demand-driven analysis of semantic difference for program versions

    In this work we present a modular and demand-driven analysis of the semantic difference between program versions. Our analysis characterizes initial states for which final states in the program versions are different. It also characterizes states for which the final states are identical. Such characterizations are useful for regression verification, for revealing security vulnerabilities, and for identifying changes in the program's functionality. Syntactic changes in program versions are often small and local, and may apply to procedures that are deep in the procedure call graph. Our approach analyses only those parts of the programs that are affected by the changes. Moreover, the analysis is modular, applied to a single pair of procedures at a time. Called procedures are not inlined; rather, their previously computed summaries and difference summary are used. For efficiency, procedure summaries and difference summaries can be abstracted and may be refined on demand. We implemented our method and applied it to finding the semantic difference between program versions. We compared it to well-established tools and observed speedups of one order of magnitude and more. Further, in many cases our tool could prove equivalence or find differences while the others failed to do so.
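
As a hypothetical illustration of a difference summary (the procedures and the characterization below are ours, chosen for brevity, not examples from the paper): two versions of an integer-halving procedure, together with a predicate characterizing exactly the initial states on which their final states differ:

```python
def half_v1(x: int) -> int:
    return x // 2            # original: round half down

def half_v2(x: int) -> int:
    return (x + 1) // 2      # patched: round half up

def difference_summary(x: int) -> bool:
    # The kind of characterization such an analysis computes:
    # final states differ exactly when x is odd,
    # and are identical exactly when x is even.
    return x % 2 == 1
```

A caller's summary can then be built from this one without re-analyzing (or inlining) the callee: for example, a procedure that only ever passes even arguments to `half_v1`/`half_v2` is immediately proved equivalent across the two versions.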