
    Advances in Usability of Formal Methods for Code Verification with Frama-C

    Industrial usage of code analysis tools based on semantic analysis, such as the Frama-C platform, poses several challenges, from the setup of analyses to the exploitation of their results. In this paper, we discuss two of these challenges. First, such analyses require detailed information about the code structure and the build process, which are often not documented, being part of the implicit build chain used by the developers. Unlike heuristics-based tools, which can deal with incomplete information, semantics-based tools require stubs or specifications for external library functions, compiler builtins, non-standard extensions, etc. Setting up a new analysis has a high cost, which deters industrial users from trying such tools, since the return on investment is not clear in advance: the analysis may turn out to be of little use relative to the invested time. Improving the usability of this first step is essential for the widespread adoption of formal methods in software development. A second aspect that is essential for successful analyses is understanding the data and navigating it. Visualizing data and rendering it in an interactive manner allows users to considerably speed up the process of refining the analysis results. We present approaches to both of these issues, derived from experience with code bases provided by industrial partners.
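    To make the setup cost concrete, here is a minimal sketch of the kind of stub the abstract alludes to, assuming a hypothetical external library function read_sensor whose source and documentation are unavailable. The ACSL contract supplies Frama-C with the facts the missing implementation would otherwise provide; the function name and bounds are illustrative, not taken from the paper.

        /* Hypothetical stub: an external library function whose
           source is unavailable. The ACSL contract gives the
           semantic analysis enough information to proceed. */
        /*@ requires len > 0;
          @ requires \valid(buf + (0 .. len - 1));
          @ assigns buf[0 .. len - 1];
          @ ensures 0 <= \result <= len;
          @*/
        int read_sensor(char *buf, int len);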

    SPEEDY: An Eclipse-based IDE for invariant inference

    SPEEDY is an Eclipse-based IDE for exploring techniques that assist users in generating correct specifications, particularly including invariant inference algorithms and tools. It integrates with several back-end tools that propose invariants and will incorporate published algorithms for inferring object and loop invariants. Though the architecture is language-neutral, SPEEDY currently targets C programs. Building and using SPEEDY has confirmed earlier experience demonstrating the importance of showing and editing specifications in the IDEs that developers customarily use, automating as much of the production and checking of specifications as possible, and showing counterexample information directly in the source code editing environment. As in previous work, automation of specification checking is provided by back-end SMT solvers. However, reducing the effort demanded of software developers using formal methods also requires a GUI design that guides users in writing, reviewing, and correcting specifications and automates specification inference.
    Comment: In Proceedings F-IDE 2014, arXiv:1404.578
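    As an illustration of what such inference produces, below is a loop invariant of the kind a back-end tool might propose for a simple C summation loop. The annotations are written in ACSL-style syntax purely for concreteness; the abstract does not state which specification language SPEEDY displays.

        /* Illustrative only: an inferred loop invariant for a
           summation loop, written in ACSL-style syntax. */
        int sum(int *a, int n) {
            int s = 0;
            /*@ loop invariant 0 <= i <= n;
              @ loop invariant s == \sum(0, i - 1, \lambda integer k; a[k]);
              @ loop assigns i, s;
              @ loop variant n - i;
              @*/
            for (int i = 0; i < n; i++)
                s += a[i];
            return s;
        }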

    OpenJML: Software verification for Java 7 using JML, OpenJDK, and Eclipse

    OpenJML is a tool for checking code and specifications of Java programs. We describe our experience building the tool on the foundation of JML, OpenJDK and Eclipse, as well as on many advances in specification-based software verification. The implementation demonstrates the value of integrating specification tools directly in the software development IDE and in automating as many tasks as possible. The tool, though still in progress, has now been used for several college-level courses on software specification and verification and for small-scale studies on existing Java programs.
    Comment: In Proceedings F-IDE 2014, arXiv:1404.578

    Exact Gap Computation for Code Coverage Metrics in ISO-C

    Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be merged into a test suite. Much research has been devoted to quantifying and improving the quality of a test suite. Code coverage metrics estimate this quality: it is considered good if the code coverage value is high, ideally 100%. Unfortunately, it might be impossible to achieve 100% code coverage, for example because of dead code. There is thus a gap between the feasible and the theoretical maximum code coverage value. Our review of the literature indicates that none of the current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. Thus, a tester can decide whether more tests are possible or necessary to achieve better coverage.
    Comment: In Proceedings MBT 2012, arXiv:1202.582
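    A minimal example of the gap in question: in the C function below, the else branch is dead code under ISO-C semantics, so 100% branch coverage is infeasible and the feasible maximum is strictly lower. The function is illustrative, not taken from the paper.

        #include <limits.h>

        /* The else branch is unreachable: x <= UINT_MAX holds for
           every unsigned int, so no test can cover it. An exact
           gap computation would report a feasible maximum of one
           out of two branches. */
        int classify(unsigned int x) {
            if (x <= UINT_MAX)
                return 1;
            else
                return 0;   /* dead code: the coverage gap */
        }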

    Leveraging Static Analysis Tools for Improving Usability of Memory Error Sanitization Compilers

    Memory errors such as buffer overruns are notorious security vulnerabilities. There has been considerable interest in having a compiler ensure the safety of compiled code, either through static verification or through instrumented runtime checks. While certifying compilation has shown much promise, it has not been practical, leaving code instrumentation as the next best strategy for compilation. We term such compilers Memory Error Sanitization Compilers (MESCs). MESCs are available as part of the GCC, LLVM, and MSVC suites. Due to practical limitations, MESCs typically apply instrumentation indiscriminately to every memory access, and are consequently prohibitively expensive and practical only for small code bases. This work proposes a methodology that applies state-of-the-art static analysis techniques to eliminate unnecessary runtime checks, resulting in more efficient and scalable defenses. The methodology was implemented on LLVM's SAFECode, Integer Overflow, and Address Sanitizer passes, using static analysis from Frama-C and CodeSurfer. The benchmarks demonstrate an improvement in runtime performance that makes incorporation of runtime checks a viable option for defenses.
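    A sketch of the idea, not the paper's implementation: when static analysis proves an access in bounds, the sanitizer's guard for it is redundant and can be removed, while unprovable accesses keep their checks. The function below contrasts the two cases; the names are illustrative.

        #include <stdlib.h>

        /* Illustration of selective check elimination. */
        void fill(char *buf, size_t n) {
            /* Provably in bounds (0 <= i < n), assuming the caller
               passes at least n bytes: the instrumented bounds
               check on buf[i] can be discharged statically. */
            for (size_t i = 0; i < n; i++)
                buf[i] = 0;

            /* Not provable statically: j may equal n, so the
               runtime check on this access must remain. */
            size_t j = (size_t)rand() % (n + 1);
            buf[j] = 1;   /* potential overrun caught at runtime */
        }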

    VerifyThis 2019: A Program Verification Competition (Extended Report)

    VerifyThis is a series of program verification competitions that emphasize the human aspect: participants tackle the verification of detailed behavioral properties -- something that lies beyond the capabilities of fully automatic verification, and requires instead human expertise to suitably encode programs, specifications, and invariants. This paper describes the 8th edition of VerifyThis, which took place at ETAPS 2019 in Prague. Thirteen teams entered the competition, which consisted of three verification challenges and spanned two days of work. The report analyzes how the participating teams fared on these challenges, reflects on what makes a verification challenge more or less suitable for the typical VerifyThis participants, and outlines the difficulties of comparing the work of teams using wildly different verification approaches in a competition focused on the human aspect.

    The Requirements Editor RED


    Exploring annotations for deductive verification
