    The RERS challenge: towards controllable and scalable benchmark synthesis

    This paper (1) summarizes the history of the RERS challenge for the analysis and verification of reactive systems, its profile and intentions, its relation to other competitions, and, in particular, its evolution in response to participant feedback, and (2) presents the most recent development concerning the synthesis of hard benchmark problems. In particular, the second part proposes a way to tailor benchmarks according to the depths to which programs have to be investigated in order to find all errors. This gives benchmark designers a method to challenge contributors who try to perform well by excessive guessing.
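
    A minimal sketch of that depth idea, with all names and numbers invented for illustration (this is not RERS-generated code): the program's only error is buried behind a chain of DEPTH correct inputs, so any analysis, and any guesser, has to reason about at least that many steps.

        # Hypothetical depth-tailored benchmark, not an actual RERS task:
        # the single error is reachable only after DEPTH correct inputs,
        # so shallow exploration or blind guessing rarely finds it.
        DEPTH = 50

        def reactive_system(inputs):
            state = 0
            for symbol in inputs:
                if state < DEPTH and symbol == "a":
                    state += 1   # one step deeper toward the error
                else:
                    state = 0    # any wrong input resets all progress
                if state == DEPTH:
                    raise AssertionError("error_1 reached")

        try:
            reactive_system(["a"] * DEPTH)  # the unique error trace
        except AssertionError as err:
            print(err)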

    Synthesizing realistic verification tasks

    This thesis by publications focuses on realistic benchmarks for software verification approaches. Such benchmarks are crucial to the evaluation of verification tools, which helps to assess their capabilities and inform potential users. This work provides an overview of the current landscape of verification tool evaluation and compares manual and automatic approaches to benchmark generation. The main contribution of this thesis is a new framework for synthesizing realistic verification tasks. This framework makes it possible to generate verification tasks that target sequential or parallel programs. Starting from a realistic formal specification, a Büchi automaton is synthesized while ensuring realistic hardness characteristics such as the number of computation steps after which errors occur. The resulting automaton is then transformed into a Mealy machine, to produce a sequential program in C or Java, or into a parallel composition of modal transition systems. A refinement of the latter is encoded in Promela or as a Petri net. A task that targets such a parallel system requires checking whether a given interruptible temporal property is satisfied or whether two parallel systems are weakly bisimilar. Temporal properties may include branching-time and linear-time formulas. For the latter, it can be ensured that every parallel component matters during verification. This thesis contains additional contributions that build on top of the attached publications: (i) a generalization of interruptibility that covers branching-time properties, (ii) an improved generation of parallel contexts, and (iii) a definition of alphabet extension on a semantic level. Alphabet extensions are a key ingredient in ensuring the hardness of generated tasks that target parallel systems. Benchmarks synthesized using the presented framework have been employed in the international Rigorous Examination of Reactive Systems (RERS) Challenge during the last five years. Several international teams attempted to solve the corresponding verification tasks and used ten different tools to verify the newly added parallel programs. Apart from the evaluation of these tools, this endeavor motivated participants of RERS to conceive new formal techniques for verifying parallel systems. The results of this thesis thus help to improve the state of the art in software verification.
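
    One step of that pipeline, turning a Mealy machine into a sequential program, can be sketched as follows; the machine below is invented and the actual framework emits C or Java rather than Python, so treat this purely as an illustration of the encoding.

        # Hypothetical Mealy machine: (state, input) -> (output, next state).
        TRANSITIONS = {
            (0, "a"): ("x", 1), (0, "b"): ("y", 0),
            (1, "a"): ("y", 2), (1, "b"): ("x", 0),
            (2, "a"): ("x", 2), (2, "b"): ("y", 1),
        }

        def run(inputs):
            # The sequential encoding: one dispatch on (state, input)
            # per step, emitting an output symbol each time.
            state, outputs = 0, []
            for symbol in inputs:
                output, state = TRANSITIONS[(state, symbol)]
                outputs.append(output)
            return outputs

        assert run(["a", "a", "b"]) == ["x", "y", "y"]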

    Domain-Specific Symbolic Compilation

    A symbolic compiler translates a program to symbolic constraints, automatically reducing model checking and synthesis to constraint solving. We show that new applications of constraint solving require domain-specific encodings that yield the required orders-of-magnitude improvements in solver efficiency. Unfortunately, these encodings cannot be obtained with today's symbolic compilation. We introduce symbolic languages that encapsulate domain-specific encodings under abstractions that behave as their non-symbolic counterparts: client code using the abstractions can be tested and debugged on concrete inputs. When client code is symbolically compiled, the resulting constraints use domain-specific encodings. We demonstrate the idea on the first fully symbolic checker of type systems; a program partitioner; and a parallelizer of tree computations. In each of these case studies, symbolic languages improved on classical symbolic compilers by orders of magnitude.
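
    The core reduction can be illustrated with a small sketch using the Z3 Python bindings; the program under analysis is invented, and the paper's domain-specific encodings are precisely what such a naive translation lacks. A symbolic compiler replaces the concrete input with a solver variable, so a question about the program becomes a satisfiability query.

        # Minimal symbolic-compilation sketch (pip install z3-solver).
        from z3 import Int, Solver, sat

        x = Int("x")      # symbolic stand-in for the concrete input
        y = x * 2 + 3     # the straight-line program, compiled to a term

        s = Solver()
        s.add(y == 41)    # "is there an input for which the output is 41?"
        if s.check() == sat:
            print(s.model()[x])  # prints 19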

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers, presented together with 18 tool papers and 4 case studies, were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    WEIZZ: Automatic grey-box fuzzing for structured binary formats

    Fuzzing technologies have evolved at a fast pace in recent years, revealing bugs in programs with ever-increasing depth and speed. Applications working with complex formats are, however, more difficult to take on, as inputs need to meet certain format-specific characteristics to get through the initial parsing stage and reach deeper behaviors of the program. Unlike prior proposals based on manually written format specifications, we propose a technique to automatically generate and mutate inputs for unknown chunk-based binary formats. We identify dependencies between input bytes and comparison instructions, and use them to assign tags that characterize the processing logic of the program. Tags become the building blocks for structure-aware mutations involving chunks and fields of the input. Our technique performs comparably to structure-aware fuzzing proposals that require human assistance. Our prototype implementation WEIZZ revealed 16 unknown bugs in widely used programs.
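
    A toy sketch of the chunk-level mutation that such tags enable; the format and the hard-coded tags below are invented, since WEIZZ infers them at runtime from the dependencies between input bytes and comparison instructions.

        import random

        # Hypothetical chunk tags: (start, end) byte ranges of the input.
        def mutate_chunkwise(data, chunks, rng=random.Random(0)):
            start, end = rng.choice(chunks)
            if rng.random() < 0.5:
                # structure-aware mutation: splice in a copy of a whole chunk
                return data[:end] + data[start:end] + data[end:]
            # field-level mutation: overwrite the chunk body with fresh bytes
            body = bytearray(rng.randrange(256) for _ in range(end - start))
            return data[:start] + body + data[end:]

        sample = bytearray(b"HDRAAAABBBB")
        print(bytes(mutate_chunkwise(sample, [(3, 7), (7, 11)])))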
