
    Automatic Abstraction in SMT-Based Unbounded Software Model Checking

    Software model checkers based on under-approximations and SMT solvers are very successful at verifying safety (i.e. reachability) properties. They combine two key ideas -- (a) "concreteness": a counterexample in an under-approximation is a counterexample in the original program as well, and (b) "generalization": a proof of safety of an under-approximation, produced by an SMT solver, is generalizable to a proof of safety of the original program. In this paper, we present a combination of "automatic abstraction" with the under-approximation-driven framework. We explore two iterative approaches for obtaining and refining abstractions -- "proof based" and "counterexample based" -- and show how they can be combined into a unified algorithm. To the best of our knowledge, this is the first application of Proof-Based Abstraction, primarily used to verify hardware, to Software Verification. We have implemented a prototype of the framework using Z3 and evaluated it on many benchmarks from the Software Verification Competition. We show experimentally that our combination is quite effective on hard instances. Comment: Extended version of a paper in the proceedings of CAV 201
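    As a rough illustration of the proof-based side of such a framework, the sketch below unrolls a toy loop and uses Z3's Python API; the toy program, the bound, and the use of unsat cores as a stand-in for the solver's proof are assumptions made for the sketch, not the paper's algorithm or benchmarks.

```python
# A minimal sketch, assuming the Z3 Python API (pip install z3-solver).
# The toy program, the bound, and the use of unsat cores as a stand-in
# for the solver's proof are illustrative assumptions only.
from z3 import Int, Solver, Bool, And, Or, sat

def check_bound(k):
    """Under-approximate the loop by unrolling it k times and ask Z3
    whether a 'bad' state with x != y is reachable within the bound."""
    s = Solver()
    x = [Int(f"x_{i}") for i in range(k + 1)]
    y = [Int(f"y_{i}") for i in range(k + 1)]

    # Toy program:  x := 0; y := 0; while (*) { x := x + 1; y := y + 1 }
    # Each program fact is tracked so it can appear in the unsat core.
    s.assert_and_track(And(x[0] == 0, y[0] == 0), Bool("init"))
    for i in range(k):
        step = And(x[i + 1] == x[i] + 1, y[i + 1] == y[i] + 1)
        s.assert_and_track(step, Bool(f"step_{i}"))

    # Bad states: some unrolled state violates the invariant x == y.
    s.add(Or([x[i] != y[i] for i in range(k + 1)]))

    if s.check() == sat:
        # "Concreteness": a bounded counterexample is a real one.
        return "counterexample", s.model()
    # "Generalization": the unsat core plays the role of the proof and
    # names the facts that were actually needed; everything else is a
    # candidate for abstraction in the next iteration.
    return "proof", s.unsat_core()

print(check_bound(3))
```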

    Lagrangian analysis led design of a shock recovery plate impact experiment

    Shock recovery techniques, such as the flyer-plate impact test, are used to examine a material that has been subjected to a single well-defined shock followed by a single release wave. A key requirement of this type of technique is that any change found in the sample after recovery can be attributed to the shock process alone. The principal design problem for the test specimen-fixture assembly is therefore to ensure that the loading history of the recovered specimen is known. This motivated the present research: the analysis-led design of a shock recovery experiment. Lagrangian Finite Element Analysis was chosen for this design work because of its ability to accurately track history variables (for plastic deformation) and to treat the contact interactions that are crucial in this problem. Starting from an initial configuration, LS-Dyna was used to analyse the resulting wave propagation in detail, using Lagrangian distance-time diagrams to verify the generation of a uniaxial strain state in the specimen. These iso-maps enabled the identification of potential shortcomings of the initial design, in terms of the transmission of contact and the influence of radial release waves at the boundaries between the specimen and the supporting fixture rings. Based on these findings, a new configuration was developed, consisting of an array of concentric rings that support the specimen. During shock formation in the specimen, these rings progressively transfer the loading in the impact direction and radially away from the specimen, acting as momentum traps and preventing unwanted release waves from affecting the strain state experienced by the specimen. A design sensitivity analysis comparing the distance-time diagrams of the original and proposed configurations showed that the new geometry reduces both the residual velocity (-38%) and the radial displacement (-27%) of the target relative to the original setup.
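    As a rough illustration of what a Lagrangian distance-time diagram encodes, the sketch below estimates elastic wave arrival times for a generic flyer-specimen pair; the thicknesses, wave speeds, and the one-dimensional elastic treatment are placeholder assumptions, not the LS-Dyna model analysed in the study.

```python
# A minimal sketch of the timings behind an x-t (distance-time) diagram
# for a flyer-plate impact.  All values are placeholder assumptions; a
# real analysis would extract these histories from the FE model.
flyer_thickness = 2.0e-3      # m
specimen_thickness = 4.0e-3   # m
c_flyer = 6320.0              # longitudinal wave speed in the flyer, m/s
c_specimen = 6320.0           # longitudinal wave speed in the specimen, m/s

# The shock enters the specimen at the impact face (x = 0) at t = 0 and
# reaches the rear face after one transit.
t_shock_at_rear = specimen_thickness / c_specimen

# The release wave starts when the shock reflects from the flyer's free
# back surface and then travels back through the flyer into the specimen.
t_release_enters_specimen = 2.0 * flyer_thickness / c_flyer
t_release_at_rear = t_release_enters_specimen + specimen_thickness / c_specimen

# The window between the two arrivals is the time the specimen spends in
# the intended uniaxial-strain shock state before release overtakes it.
print(f"shock reaches rear face   : {t_shock_at_rear * 1e6:.2f} us")
print(f"release reaches rear face : {t_release_at_rear * 1e6:.2f} us")
print(f"single-shock window       : {(t_release_at_rear - t_shock_at_rear) * 1e6:.2f} us")
```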

    Widening Polyhedra with Landmarks (4th Asian Symposium, APLAS 2006, Sydney, Australia, November 8-10, 2006, Proceedings)

    The abstract domain of polyhedra is sufficiently expressive to be deployed in verification. One consequence of the richness of this domain is that long, possibly infinite, sequences of polyhedra can arise in the analysis of loops. Widening and narrowing have been proposed to infer a single polyhedron that summarises such a sequence of polyhedra. Motivated by precision losses encountered in verification, we explain how the classic widening/narrowing approach can be refined by an improved extrapolation strategy. The insight is to record inequalities that are thus far found to be unsatisfiable in the analysis of a loop. These so-called landmarks hint at the amount of widening necessary to reach stability. This extrapolation strategy, which refines widening with thresholds, can infer post-fixpoints that are precise enough not to require narrowing. Unlike previous techniques, our approach interacts well with other domains, is fully automatic, conceptually simple, and precise on complex loops.
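    To illustrate the landmark idea in isolation, the sketch below applies threshold widening on the much simpler interval domain; the toy loop, the recorded landmark, and the iteration strategy are illustrative assumptions, not the polyhedral implementation described in the paper.

```python
# A minimal sketch of "landmark" (threshold) widening on the interval
# domain; the polyhedral version generalises the same idea.
import math

def widen_with_landmarks(old_hi, new_hi, landmarks):
    """Classic widening would jump straight to +infinity when the upper
    bound grows; with landmarks we first try the smallest recorded
    unsatisfiable bound, and only give up if none of them is stable."""
    if new_hi <= old_hi:
        return old_hi
    candidates = sorted(t for t in landmarks if t >= new_hi)
    return candidates[0] if candidates else math.inf

# Loop:  i := 0; while (i < 100) { i := i + 1 }
# Analysing the guard records that "i < 100" has been unsatisfiable so
# far, i.e. the landmark 100 hints how far widening has to extrapolate.
landmarks = [100]
hi = 0                        # upper bound of the interval [0, hi] for i
for _ in range(5):            # a few abstract iterations of the loop
    new_hi = min(hi, 99) + 1  # effect of one loop iteration on the bound
    hi = widen_with_landmarks(hi, new_hi, landmarks)
    print(f"interval for i: [0, {hi}]")   # stabilises at the precise [0, 100]
```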

    Shape Refinement through Explicit Heap Analysis

    Shape analysis is a promising technique to prove program properties about recursive data structures. The challenge is to automatically determine the data-structure type, and to supply the shape analysis with the necessary information about the data structure. We present a stepwise approach to the selection of instrumentation predicates for a TVLA-based shape analysis, which takes us a step closer towards the fully automatic verification of data structures. The approach uses two techniques to guide the refinement of shape abstractions: (1) during program exploration, an explicit heap analysis collects sample instances of the heap structures, which are used to identify the data structures that are manipulated by the program; and (2) during abstraction refinement along an infeasible error path, we consider different possible heap abstractions and choose the coarsest one that eliminates the infeasible path. We have implemented this combined approach for automatic shape refinement as an extension of the software model checker BLAST. Example programs from a data-structure library that manipulate doubly-linked lists and trees were successfully verified by our tool.
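    The sketch below illustrates the first ingredient: inspecting a concretely sampled heap instance to guess the data-structure type, which would then suggest instrumentation predicates. The node encoding and the back-pointer heuristic are assumptions made for the example, not BLAST's or TVLA's actual representation.

```python
# A minimal sketch of classifying a sampled heap instance.  The field
# names and the classification heuristic are illustrative assumptions.
def classify_heap(nodes):
    """nodes maps a node id to its pointer fields,
    e.g. {1: {'next': 2, 'prev': None}}.  Returns a coarse guess of the
    data-structure type manipulated by the program."""
    has_prev = any(f.get("prev") is not None for f in nodes.values())
    doubly = has_prev and all(
        f.get("next") is None or nodes[f["next"]].get("prev") == nid
        for nid, f in nodes.items()
    )
    if doubly:
        return "doubly-linked list"
    if has_prev:
        return "list with inconsistent back-pointers"
    return "singly-linked list"

# A sampled instance of a three-node doubly-linked list.
sample = {
    1: {"next": 2, "prev": None},
    2: {"next": 3, "prev": 1},
    3: {"next": None, "prev": 2},
}
print(classify_heap(sample))   # -> doubly-linked list
```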

    Comparing Cost Functions in Resource Analysis

    Cost functions provide information about the amount of resources required to execute a program in terms of the sizes of its input arguments. They can provide an upper bound, a lower bound, or the average-case cost. Motivated by the existence of a number of automatic cost analyzers that produce cost functions, we propose an approach for automatically proving that one cost function is smaller than another. In all applications of resource analysis, such as resource-usage verification, program synthesis and optimization, it is essential to compare cost functions: this allows choosing an implementation with smaller cost or guaranteeing that given resource-usage bounds are preserved. Unfortunately, automatically generated cost functions for realistic programs tend to be rather intricate: they are defined by multiple cases, involve non-linear subexpressions (e.g., exponential, polynomial and logarithmic), and can contain multiple variables, possibly related by means of constraints. Thus, comparing cost functions is far from trivial. Our approach first syntactically transforms functions into simpler forms and then applies a number of sufficient conditions which guarantee that a set of expressions is smaller than another expression. Our preliminary implementation in the COSTA system indicates that the approach can be useful in practice.
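    The sketch below shows one simple sufficient condition of the kind described above, restricted to the polynomial case: to prove f <= g for all n >= 1, substitute n = m + 1 and check that the expanded difference has only non-negative coefficients. The use of sympy and the example cost functions are assumptions for the sketch, not the COSTA implementation.

```python
# A minimal sketch of a sound-but-incomplete comparison of polynomial
# cost functions, using sympy (an assumption; COSTA has its own machinery).
import sympy as sp

n, m = sp.symbols("n m")

def leq_for_n_ge_1(f, g):
    """Sufficient condition for f(n) <= g(n) for all n >= 1: substitute
    n = m + 1 (so m >= 0) and require every coefficient of the expanded
    difference g - f to be non-negative."""
    diff = sp.expand((g - f).subs(n, m + 1))
    return all(c >= 0 for c in sp.Poly(diff, m).all_coeffs())

f = 3 * n + 5            # e.g. an upper bound produced by one analyzer
g = n**2 + 2 * n + 7     # a candidate resource-usage specification

print(leq_for_n_ge_1(f, g))   # True: 3n + 5 <= n^2 + 2n + 7 for n >= 1
```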