
    Evaluating software verification systems: benchmarks and competitions

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practitioners and researchers interested in the topic. The seminar was conducted as a highly interactive event, with a wide spectrum of contributions from participants, including talks, tutorials, posters, tool demonstrations, hands-on sessions, and a live competition.

    The RERS challenge: towards controllable and scalable benchmark synthesis

    This paper (1) summarizes the history of the RERS challenge for the analysis and verification of reactive systems, its profile and intentions, its relation to other competitions, and, in particular, its evolution due to the feedback of participants, and (2) presents the most recent development concerning the synthesis of hard benchmark problems. In particular, the second part proposes a way to tailor benchmarks according to the depths to which programs have to be investigated in order to find all errors. This gives benchmark designers a method to challenge contributors who try to perform well by excessive guessing.

    A path-precise analysis for property synthesis

    Recent systems such as SLAM, Metal, and ESP help programmers by automating reasoning about the correctness of temporal program properties. This paper presents a technique called property synthesis, which can be viewed as the inverse of property checking. We show that the code for some program properties, such as proper lock acquisition, can be automatically inserted rather than automatically verified. Whereas property checking analyzes a program to verify that property code was inserted correctly, property synthesis analyzes a program to identify where property code should be inserted. This paper describes a path-sensitive analysis that is precise enough to synthesize property code effectively. Unlike other path-sensitive analyses, our intra-procedural path-precise analysis can describe behavior that occurs in loops without approximations. This precision is achieved by computing analysis results as a set of path machines. Each path machine describes the assignment behavior of a boolean variable along all paths precisely. This paper explains how path machines work, how they are computed, and how they are used to synthesize code.
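
    The following minimal sketch illustrates the intuition behind a path machine as described above: a small automaton that records the value of a single boolean variable along a sequence of program statements, so that behavior inside loops is replayed exactly rather than approximated. All names (lockHeld, acquireLock(), releaseLock()) are invented for illustration and are not the paper's formalism, which tracks all paths rather than a single one.

        import java.util.List;
        import java.util.Map;

        /**
         * Hypothetical sketch of a "path machine": an automaton tracking the
         * value of one boolean variable (here "lockHeld") along program paths.
         */
        public class PathMachineSketch {

            /** Effect of a statement on the tracked boolean variable. */
            enum Effect { SET_TRUE, SET_FALSE, NO_CHANGE }

            /** Maps statement labels to their effect on the variable. */
            static final Map<String, Effect> EFFECTS = Map.of(
                    "acquireLock()", Effect.SET_TRUE,
                    "releaseLock()", Effect.SET_FALSE,
                    "work()",        Effect.NO_CHANGE);

            /** Replays one path and returns the final value of the variable. */
            static boolean run(boolean initial, List<String> path) {
                boolean value = initial;
                for (String stmt : path) {
                    switch (EFFECTS.getOrDefault(stmt, Effect.NO_CHANGE)) {
                        case SET_TRUE  -> value = true;
                        case SET_FALSE -> value = false;
                        case NO_CHANGE -> { /* value unchanged */ }
                    }
                }
                return value;
            }

            public static void main(String[] args) {
                // A path that unrolls a loop body twice: acquire, work, release.
                List<String> path = List.of(
                        "acquireLock()", "work()", "releaseLock()",
                        "acquireLock()", "work()", "releaseLock()");
                System.out.println("lockHeld at exit: " + run(false, path)); // false
            }
        }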

    Towards language-to-language transformation

    This paper proposes a simplicity-oriented approach and framework for language-to-language transformation of, in particular, graphical languages. Key to simplicity is the decomposition of the transformation specification into sub-rule systems that separately specify purpose-specific aspects. We illustrate this approach by employing a variation of Plotkin’s Structural Operational Semantics (SOS) for pattern-based transformations of typed graphs in order to address the aspect ‘computation’ in a graph-rewriting fashion. Key to our approach are two generalizations of Plotkin’s structural rules: the use of graph patterns as the matching concept in the rules, and the introduction of node and edge types. Types not only allow one to easily distinguish between different kinds of dependencies, such as control, data, and priority, but may also be used to define a hierarchical layering structure. The resulting Type-based Structural Operational Semantics (TSOS) supports a well-structured and intuitive specification and realization of semantically involved language-to-language transformations, adequate for the generation of purpose-specific views or input formats for certain tools, such as model checkers. A comparison with the general-purpose transformation frameworks ATL and Groove, carried out in the educational setting of our graphical WebStory language, illustrates that TSOS provides quite a flexible format for the definition of a family of purpose-specific transformation languages that are easy to use and come with clear guarantees.
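
    As a rough, hypothetical illustration of the key ingredient named above, the sketch below represents a graph with typed nodes and edges and applies one pattern-based rule: a node of the invented type "Skip" with one incoming and one outgoing "control" edge is removed and the two edges are fused. The node and edge types and the rule itself are assumptions for illustration, not part of TSOS.

        import java.util.ArrayList;
        import java.util.List;

        /**
         * Hypothetical sketch of rewriting over typed graphs: nodes and edges
         * carry types, and a rule matches a typed pattern before rewriting.
         */
        public class TypedGraphRewriteSketch {

            record Node(String id, String type) {}
            record Edge(Node src, Node dst, String type) {}

            /**
             * Rule: a "Skip" node with an incoming and an outgoing "control"
             * edge is removed; the two edges are fused into one "control" edge.
             */
            static List<Edge> removeSkipNodes(List<Edge> edges) {
                List<Edge> result = new ArrayList<>(edges);
                for (Edge in : edges) {
                    if (!"control".equals(in.type()) || !"Skip".equals(in.dst().type())) continue;
                    for (Edge out : edges) {
                        if (out.src().equals(in.dst()) && "control".equals(out.type())) {
                            result.remove(in);
                            result.remove(out);
                            result.add(new Edge(in.src(), out.dst(), "control"));
                        }
                    }
                }
                return result;
            }

            public static void main(String[] args) {
                Node start = new Node("start", "Action");
                Node skip  = new Node("s1", "Skip");
                Node end   = new Node("end", "Action");
                List<Edge> graph = List.of(
                        new Edge(start, skip, "control"),
                        new Edge(skip, end, "control"));
                // Prints a single fused control edge from "start" to "end".
                System.out.println(removeSkipNodes(graph));
            }
        }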

    Language-Driven Engineering: An Interdisciplinary Software Development Paradigm

    We illustrate how purpose-specific, graphical modeling enables application experts with different levels of expertise to collaboratively design and then produce complex applications using their individual, purpose-specific modeling language. Our illustration includes seven graphical Integrated Modeling Environments (IMEs) that support full code generation, as well as four browser-based applications that were modeled and then fully automatically generated and produced using DIME, our most complex graphical IME. While the seven IMEs were chosen to illustrate the types of languages we support with our Language-Driven Engineering (LDE) approach, the four DIME products were chosen to give an impression of the power of our LDE-generated IMEs. In fact, Equinocs, Springer Nature's future editorial system for proceedings, is also being fully automatically generated and then deployed at their Dordrecht site using a deployment pipeline generated with Rig, one of the IMEs presented. Our technology is open source and the products presented are currently in use.

    Aggressive aggregation

    Among the first steps in a compilation pipeline is the construction of an Intermediate Representation (IR), an in-memory representation of the input program. Any attempt at program optimisation, both in terms of size and running time, has to operate on this structure. There may be one or multiple such IRs; however, most compilers internally use some form of Control Flow Graph (CFG). This representation clearly aims at general-purpose programming languages, for which it is well suited and allows for many classical program optimisations. On the other hand, a growing structural difference between the input program and the chosen IR can lose or obfuscate information that can be crucial for effective optimisation. With today’s rise of a multitude of different programming languages, Domain-Specific Languages (DSLs), and computing platforms, the classical machine-oriented IR is reaching its limits and a broader variety of IRs is needed. This realisation yielded, for example, the Multi-Level Intermediate Representation (MLIR), a compiler framework that facilitates the creation of a wide range of IRs and encourages their reuse among different programming languages and the corresponding compilers. In this modern spirit, this dissertation explores the potential of Algebraic Decision Diagrams (ADDs) as an IR for (domain-specific) program optimisation. The data structure has remained the state of the art for Boolean function representation for more than thirty years and is well known for its optimality in size and depth, i.e., running time. As such, it is ideally suited to represent the corresponding classes of programs in the role of an IR. We will discuss its application in a variety of different program domains, ranging from DSLs to machine-learned programs and even to general-purpose programming languages. Two representatives for DSLs, a graphical and a textual one, prove the adequacy of ADDs for the program optimisation of modelled decision services. The resulting DSLs facilitate experimentation with ADDs and provide valuable insight into their potential and limitations: input programs can be aggregated in a radical fashion, at the risk of occasional exponential growth. With the aggregation of large Random Forests into a single aggregated ADD, we bring this potential to a program domain of practical relevance. The results are impressive: both the running time and the size of the Random Forest program are reduced by multiple orders of magnitude. It turns out that this ADD-based aggregation can be generalised, even to general-purpose programming languages. The resulting method achieves impressive speedups for a seemingly optimal program: the iterative Fibonacci implementation. Altogether, ADDs facilitate effective program optimisation where the input programs allow for a natural transformation to the data structure. In these cases, they have proven to be an extremely powerful tool for the optimisation of a program’s running time and, in some cases, of its size. The exploration of their potential as an IR has only started and deserves attention in future research.
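
    The following minimal sketch, with invented names and without the canonical node sharing and memoisation that make real ADD packages efficient, illustrates the idea of aggregating several pieces of decision logic into one decision diagram: two tiny decision trees over shared boolean inputs are combined point-wise by summing their terminal values, roughly as one might sum the votes of trees in a Random Forest.

        /**
         * Minimal, hypothetical sketch of an Algebraic Decision Diagram (ADD)
         * and a point-wise "apply" step that sums two diagrams, e.g. to
         * aggregate the votes of two decision trees into a single diagram.
         * Canonical node sharing and memoisation are omitted for brevity.
         */
        public class AddSketch {

            /** A terminal carrying a value, or a decision on variable `var`. */
            static final class Node {
                final int var;        // -1 marks a terminal
                final double value;   // only meaningful for terminals
                final Node low, high; // children of internal nodes

                Node(double value) { this.var = -1; this.value = value; this.low = this.high = null; }
                Node(int var, Node low, Node high) { this.var = var; this.value = 0; this.low = low; this.high = high; }
                boolean isTerminal() { return var < 0; }
            }

            /** Combines two diagrams point-wise by addition, respecting the variable order. */
            static Node sum(Node f, Node g) {
                if (f.isTerminal() && g.isTerminal()) return new Node(f.value + g.value);
                int v = Math.min(f.isTerminal() ? Integer.MAX_VALUE : f.var,
                                 g.isTerminal() ? Integer.MAX_VALUE : g.var);
                Node fLow  = (!f.isTerminal() && f.var == v) ? f.low  : f;
                Node fHigh = (!f.isTerminal() && f.var == v) ? f.high : f;
                Node gLow  = (!g.isTerminal() && g.var == v) ? g.low  : g;
                Node gHigh = (!g.isTerminal() && g.var == v) ? g.high : g;
                return new Node(v, sum(fLow, gLow), sum(fHigh, gHigh));
            }

            /** Evaluates a diagram on a boolean assignment (index = variable number). */
            static double eval(Node n, boolean[] assignment) {
                while (!n.isTerminal()) n = assignment[n.var] ? n.high : n.low;
                return n.value;
            }

            public static void main(String[] args) {
                // Two tiny "trees": one votes on variable x0, the other on x1.
                Node tree1 = new Node(0, new Node(0.0), new Node(1.0));
                Node tree2 = new Node(1, new Node(0.0), new Node(1.0));
                Node aggregated = sum(tree1, tree2); // one diagram holding the vote sum
                System.out.println(eval(aggregated, new boolean[]{true, false})); // 1.0
            }
        }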

    Declassification: transforming java programs to remove intermediate classes

    Computer applications are increasingly being written in object-oriented languages such as Java and C++. Object-oriented programming encourages the use of small methods and classes. However, this style of programming introduces much overhead, as each method call results in a dynamic dispatch and each field access becomes a pointer dereference to the heap-allocated object. Many of the classes in these programs are included to provide structure rather than to act as reusable code, and can therefore be regarded as intermediate. We have therefore developed an optimisation technique, called declassification, which transforms Java programs into equivalent programs from which these intermediate classes have been removed. The optimisation technique involves two phases, analysis and transformation. The analysis involves the identification of intermediate classes for removal. A suitable class is defined to be a class which is used exactly once within a program. Such classes are identified by this analysis. The subsequent transformation involves eliminating these intermediate classes from the program by inlining the fields and methods of each intermediate class within the enclosing class which uses it. In theory, declassification reduces the number of classes which are instantiated and used in a program during its execution. This should reduce the overhead of object creation and maintenance, as child objects are no longer created, and it should also reduce the number of field accesses and dynamic dispatches required by a program to execute. An important feature of the declassification technique, as opposed to other similar techniques, is that it guarantees there will be no increase in code size. An empirical study was conducted on a number of reasonably sized Java programs, and it was found that very few suitable classes were identified for inlining. The results showed that the declassification technique had a small influence on the memory consumption and a negligible influence on the run-time performance of these programs. It is therefore concluded that the declassification technique was not successful in optimising the test programs, but further extensions to this technique combined with an intrinsically object-oriented set of test programs could greatly improve its success.
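
    A hypothetical before/after illustration of the transformation described above: an intermediate class that is used exactly once is removed by inlining its field and method bodies into the enclosing class, eliminating one object allocation, one field dereference, and one dynamic dispatch per call. All class and method names are invented for illustration.

        /**
         * Hypothetical illustration of declassification. Before the transformation,
         * Account delegates to an intermediate Balance object that is used exactly
         * once in the program; afterwards, Balance's field and method bodies are
         * inlined into Account. All names are invented for illustration.
         */
        public class DeclassificationSketch {

            // --- Before: Account holds a heap-allocated Balance used only here. ---
            static class Balance {
                private int cents;
                void add(int amount) { cents += amount; }
                int get() { return cents; }
            }

            static class AccountBefore {
                private final Balance balance = new Balance(); // intermediate object
                void deposit(int amount) { balance.add(amount); } // dispatch + dereference
                int total() { return balance.get(); }
            }

            // --- After: the intermediate class is inlined into the enclosing class. ---
            static class AccountAfter {
                private int balanceCents; // inlined field of Balance
                void deposit(int amount) { balanceCents += amount; } // inlined method body
                int total() { return balanceCents; }
            }

            public static void main(String[] args) {
                AccountBefore before = new AccountBefore();
                AccountAfter after = new AccountAfter();
                before.deposit(150);
                after.deposit(150);
                System.out.println(before.total() + " == " + after.total()); // 150 == 150
            }
        }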