2,072 research outputs found

    Modular Construction of Shape-Numeric Analyzers

    The aim of static analysis is to infer invariants about programs that are precise enough to establish semantic properties, such as the absence of run-time errors. Broadly speaking, there are two major branches of static analysis for imperative programs. Pointer and shape analyses focus on inferring properties of pointers, dynamically-allocated memory, and recursive data structures, while numeric analyses seek to derive invariants on numeric values. Although simultaneous inference of shape-numeric invariants is often needed, this case is especially challenging and not particularly well explored. Notably, simultaneous shape-numeric inference raises complex issues in the design of the static analyzer itself. In this paper, we study the construction of such shape-numeric static analyzers. We set up an abstract interpretation framework that allows us to reason about simultaneous shape-numeric properties by combining shape and numeric abstractions into a modular, expressive abstract domain. Such a modular structure is highly desirable because it makes the formalization and implementation easier to carry out and to get correct. To achieve this, we choose a concrete semantics that can be abstracted step by step, while preserving a high level of expressiveness. The structure of the abstract operations (i.e., transfer, join, and comparison) follows the structure of this semantics. The advantage of this construction is that it divides the analyzer into modules and functors that implement abstractions of distinct features.
    Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
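    To make the modular structure concrete, the OCaml sketch below shows what a functor-based combination of a shape domain and a numeric domain might look like. The signatures, operation names, and the simple product combination are illustrative assumptions, not the paper's actual interfaces, which let the two components cooperate more tightly.

```ocaml
(* A minimal sketch (assumed interfaces, not the paper's): a shape domain and
   a numeric domain are combined by a functor into a product domain whose
   lattice operations are defined component-wise. A real combination would
   also let the shape component query the numeric component, e.g. about
   symbolic sizes or field contents. *)

module type NUMERIC_DOMAIN = sig
  type t                                    (* abstract numeric state *)
  val top    : t
  val join   : t -> t -> t                  (* abstract union *)
  val leq    : t -> t -> bool               (* abstract inclusion *)
  val assign : string -> int -> t -> t      (* transfer for x := constant *)
end

module type SHAPE_DOMAIN = sig
  type t                                    (* abstract memory shape *)
  val top   : t
  val join  : t -> t -> t
  val leq   : t -> t -> bool
  val alloc : string -> t -> t              (* transfer for x := new cell *)
end

module Combine (S : SHAPE_DOMAIN) (N : NUMERIC_DOMAIN) = struct
  type t = S.t * N.t                        (* combined abstract state *)
  let top = (S.top, N.top)
  let join (s1, n1) (s2, n2) = (S.join s1 s2, N.join n1 n2)
  let leq (s1, n1) (s2, n2) = S.leq s1 s2 && N.leq n1 n2
  let assign x c (s, n) = (s, N.assign x c n)   (* numeric transfer *)
  let alloc x (s, n) = (S.alloc x s, n)         (* shape transfer *)
end
```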

    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
    Comment: 37 pages, extended version of ESOP 2011 paper
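    The OCaml sketch below illustrates the precision argument on a toy interval domain; it is not the paper's synthesis method, and the variables, block, and bounds are invented for the example. The block y := x; z := x - y always sets z to 0, but composing per-instruction interval transfer functions cannot see that x and y are equal when z is computed.

```ocaml
(* Toy interval domain over three variables; everything here is invented for
   the illustration and is not the paper's synthesis method. *)

type itv = { lo : int; hi : int }
type state = { x : itv; y : itv; z : itv }

let sub a b = { lo = a.lo - b.hi; hi = a.hi - b.lo }   (* interval subtraction *)

(* Instruction-by-instruction analysis of the block "y := x; z := x - y":
   after y := x the relation x = y is not representable with intervals,
   so z := x - y is analyzed with independent operands. *)
let per_instruction (s : state) : state =
  let s1 = { s with y = s.x } in             (* y := x *)
  { s1 with z = sub s1.x s1.y }              (* z := x - y *)

(* A transfer function for the whole block records its net effect directly:
   y equals x and z is exactly 0. *)
let whole_block (s : state) : state =
  { x = s.x; y = s.x; z = { lo = 0; hi = 0 } }

let () =
  let x0 = { lo = 0; hi = 10 } in
  let s0 = { x = x0; y = x0; z = x0 } in
  let a = per_instruction s0 and b = whole_block s0 in
  Printf.printf "per-instruction z: [%d,%d], whole-block z: [%d,%d]\n"
    a.z.lo a.z.hi b.z.lo b.z.hi              (* prints [-10,10] vs [0,0] *)
```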

    Panel discussion: Proposals for improving OCL

    During the panel session at the OCL workshop, stimulated by short presentations from OCL experts, the OCL community discussed potential future extensions and improvements to OCL. As such, this panel discussion continued the discussion that started at the OCL meeting in Aachen in 2013 and on which we reported in the proceedings of last year's OCL workshop. This collaborative paper, to which each OCL expert contributed one section, summarises the panel discussion and describes the suggestions for further improvements in more detail.

    The Hardness of Finding Linear Ranking Functions for Lasso Programs

    Determining whether a linear-constraint loop has a linear ranking function is key to understanding the loop's behavior, proving its termination, and establishing iteration bounds. If no preconditions are provided, the decision problem is known to be in coNP when variables range over the integers and in PTIME when they range over the rationals or the reals. Here we show that deciding whether a linear-constraint loop with a precondition, specifically with partially-specified input, has a linear ranking function is EXPSPACE-hard over the integers and PSPACE-hard over the rationals. The precise complexity of these decision problems is still unknown. The EXPSPACE lower bound is derived from the reachability problem for Petri nets (equivalently, Vector Addition Systems), and possibly indicates an even stronger lower bound (subject to open problems in VAS theory). The lower bound for the rationals follows from a novel simulation of Boolean programs. Lower bounds are also given for the problem of deciding whether a linear ranking function supported by a particular form of inductive invariant exists. For loops over integers, the problem is PSPACE-hard for convex polyhedral invariants and EXPSPACE-hard for downward-closed sets of natural numbers as invariants.
    Comment: In Proceedings GandALF 2014, arXiv:1408.5560. I thank the organizers of the Dagstuhl Seminar 14141, "Reachability Problems for Infinite-State Systems", for the opportunity to present an early draft of this work.
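    For orientation, the conditions being decided are the standard ones for a linear ranking function; the statement below is a hedged reminder of that textbook definition, not a result of the paper, with rho a candidate linear function and T the loop's transition constraints over pre-state x and post-state x'.

```latex
% Standard requirements for a linear function \rho(x) = c^{\mathsf{T}} x + c_0
% to be a ranking function of a loop whose transition relation \mathcal{T} is
% given by linear constraints on (x, x'). The integer case is shown; over the
% rationals the decrease need only be by some fixed \delta > 0.
\forall (x, x') \models \mathcal{T} :\quad
  \rho(x) \ge 0 \;\wedge\; \rho(x) - \rho(x') \ge 1
```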

    Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models

    Matlab/Simulink is a development and simulation language that is widely used by the Cyber-Physical System (CPS) industry to model dynamical systems. There are two mainstream approaches to verifying CPS Simulink models: model testing, which attempts to identify failures in models by executing them for a number of sampled test inputs, and model checking, which attempts to exhaustively check the correctness of models against given formal properties. In this paper, we present an industrial Simulink model benchmark, provide a categorization of the different model types in the benchmark, describe the recurring logical patterns in the model requirements, and discuss the results of applying model checking and model testing to identify requirements violations in the benchmarked models. Based on the results, we discuss the strengths and weaknesses of model testing and model checking. Our results further suggest that model checking and model testing are complementary, and that by combining them we can significantly enhance the capabilities of each approach individually. We conclude by providing guidelines as to how the two approaches can best be applied together.
    Comment: 10 pages + 2 pages of references
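    As a rough, hedged illustration of the two approaches (not one of the benchmark's Simulink models), the OCaml sketch below contrasts sampling-based testing with bounded exhaustive checking on a toy discrete-time model. The model, requirement, depth, and sample count are all invented for the example; testing may miss the single violating input sequence that exhaustive enumeration is guaranteed to find at the same depth.

```ocaml
(* Toy discrete-time model: a saturating integrator over inputs in {-1, 0, 1}.
   Requirement (invented): the state never reaches 5 within [depth] steps. *)
let step state input = max (-5) (min 5 (state + input))

let violates inputs =
  let rec go s = function
    | [] -> false
    | u :: rest -> let s' = step s u in s' = 5 || go s' rest
  in
  go 0 inputs

(* Model testing: sample [n] random input sequences of length [depth];
   it can miss the only violating sequence (all inputs equal to 1). *)
let model_test n depth =
  let random_seq () = List.init depth (fun _ -> Random.int 3 - 1) in
  let rec loop i = i < n && (violates (random_seq ()) || loop (i + 1)) in
  loop 0

(* Bounded exhaustive checking: enumerate every input sequence of length
   [depth]; it finds the violation whenever one exists at this depth. *)
let model_check depth =
  let rec go prefix k =
    if k = 0 then violates (List.rev prefix)
    else List.exists (fun u -> go (u :: prefix) (k - 1)) [-1; 0; 1]
  in
  go [] depth

let () =
  Random.self_init ();
  Printf.printf "testing found a violation: %b\nchecking found a violation: %b\n"
    (model_test 20 5) (model_check 5)
```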

    Modeling geometric rules in object-based models: an XML/GML approach

    Most object-based approaches to Geographical Information Systems (GIS) have concentrated on representing the geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement not only to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain and extends them using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules change often and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses the feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We present the parametric rule model we have developed and discuss the advantages of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1.
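    As a rough illustration of what a parameterized geometric rule amounts to, the OCaml sketch below checks a measured offset against the UK give way bounds quoted above. The record fields, rule representation, and check are invented for this example; the paper encodes such rules as feature properties in a GML 3.1 feature collection rather than in program code.

```ocaml
(* An offset rule parameterized by jurisdiction-specific bounds; the field
   names and the check below are invented for this illustration. *)
type offset_rule = {
  description : string;
  min_mm : float;     (* smallest permitted offset, in millimetres *)
  max_mm : float;     (* largest permitted offset, in millimetres *)
}

(* Parameterization matching the UK give way example quoted in the abstract. *)
let uk_give_way_rule =
  { description = "give way line to edge of crossing pattern";
    min_mm = 1100.; max_mm = 3000. }

(* Check a measured offset against a rule; in the paper this kind of
   constraint is attached to features and evaluated against their geometry
   rather than hard-coded like this. *)
let satisfies rule measured_mm =
  measured_mm >= rule.min_mm && measured_mm <= rule.max_mm

let () =
  Printf.printf "%s: 1500 mm permitted? %b\n"
    uk_give_way_rule.description (satisfies uk_give_way_rule 1500.)
```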