6 research outputs found

    Program analysis is harder than verification: A computability perspective

    We study from a computability perspective static program analysis, namely detecting sound program assertions, and verification, namely sound checking of program assertions. We first design a general computability model for domains of program assertions and corresponding program analysers and verifiers. Next, we formalize and prove an instantiation of Rice's theorem for static program analysis and verification. Then, within this general model, we give a precise statement of the popular belief that program analysis is a harder problem than program verification: we prove that for finite domains of program assertions, program analysis and verification are equivalent problems, while for infinite domains, program analysis is strictly harder than verification.
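    A rough sketch of why the finite-domain direction holds, in our own notation rather than the paper's (an analyser maps a program to a set of sound assertions, a verifier soundly checks a single assertion):

    \[
      \mathcal{V} : \mathrm{Prog} \times \mathbb{A} \to \{\mathsf{true}, \mathsf{?}\},
      \qquad
      \mathcal{A} : \mathrm{Prog} \to \wp(\mathbb{A})
    \]
    \[
      \text{if } \mathbb{A} = \{a_1, \dots, a_n\} \text{ is finite, define } \quad
      \mathcal{A}(P) \;=\; \{\, a \in \mathbb{A} \mid \mathcal{V}(P, a) = \mathsf{true} \,\}
    \]

    Roughly, a verifier can in turn be obtained from an analyser by checking whether the queried assertion appears in the analyser's output; the enumeration in the displayed direction is what is no longer effective when the assertion domain is infinite, which is where the strict separation claimed in the abstract arises.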

    Testing concolic execution through consistency checks

    Symbolic execution is a well-known software testing technique that evaluates how a program runs when considering a symbolic input, i.e., an input that can initially assume any concrete value admissible for its data type. The dynamic twist of this technique is dubbed concolic execution and has been demonstrated to be a practical technique for testing even complex real-world programs. Unfortunately, developing concolic engines is hard. Indeed, an engine has to correctly instrument the program to build accurate symbolic expressions, which represent the program computation. Furthermore, to reason over such expressions, it has to interact with an SMT solver. Hence, several implementation bugs may emerge within the different layers of an engine. In this article, we consider the problem of testing concolic engines. In particular, we propose several testing strategies whose main intuition is to exploit the concrete state kept by the executor to identify inconsistencies within the symbolic state. We integrated our strategies into three state-of-the-art concolic executors (SymCC, SymQEMU, and Fuzzolic) and then performed several experiments to show that our ideas can find bugs in these frameworks. Overall, our approach was able to discover more than 12 bugs across these engines.
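    A minimal sketch of the kind of consistency check described above, assuming the z3 Python bindings and modelling the symbolic state as z3 bitvector expressions; the function and names below are ours for illustration, not the actual checks implemented in SymCC, SymQEMU, or Fuzzolic:

import z3  # SMT solver bindings; real concolic engines build comparable bitvector expressions

def check_consistency(sym_expr: z3.BitVecRef,
                      concrete_inputs: dict[str, int],
                      observed_value: int) -> bool:
    """Evaluate a symbolic expression under the concrete input assignment and
    compare the result with the value the concrete executor actually observed.
    A mismatch signals a bug in the engine's instrumentation or expression
    building. For simplicity, every symbolic input is assumed to share the
    expression's bit-width."""
    substitutions = [
        (z3.BitVec(name, sym_expr.size()), z3.BitVecVal(value, sym_expr.size()))
        for name, value in concrete_inputs.items()
    ]
    evaluated = z3.simplify(z3.substitute(sym_expr, *substitutions))
    return evaluated.as_long() == observed_value

# Hypothetical usage: the engine claims the expression x + 1 for an 8-bit input,
# the program ran with x = 41, and the concrete executor observed 42.
expr = z3.BitVec("x", 8) + 1
assert check_consistency(expr, {"x": 41}, 42)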

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT, and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    Analysing the Program Analyser

    It is the year 2025. FutureCorp, the private sector defence contractor to whom the US government has outsourced its management of nuclear weapons, has just had its missile control software hijacked by terrorists. It is only a matter of hours before Armageddon. The CEO of FutureCorp, Dr F. Methods, is incredulous: “This is impossible”, he told one of our reporters. “We used program analysis to formally prove that the software was secure!” “Ah, Dr Methods,” responded open-source developer Mr B. Door, “but did you check the analyser?”