208 research outputs found

    Fifty years of Hoare's Logic

    We present a history of Hoare's logic. Comment: 79 pages. To appear in Formal Aspects of Computing.

    Another incompleteness result for Hoare's logic

    It is known (Bergstra and Tucker (1982) J. Comput. System Sci. 25, 217) that if the Hoare rules are complete for a first-order structure 𝒜, then the set of partial correctness assertions true over 𝒜 is recursive in the first-order theory of 𝒜. We show that the converse is not true. Namely, there is a first-order structure 𝒞 such that the set of partial correctness assertions true over 𝒞 is recursive in the theory of 𝒞, but the Hoare rules are not complete for 𝒞.
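
    To fix terminology (this is the standard definition, not something specific to this paper): a partial correctness assertion {p} S {q} is true over a structure 𝒜 when every terminating run of the program S from a state satisfying p ends in a state satisfying q; runs of S that fail to terminate are not penalized. In symbols:

        \[
        \mathcal{A} \models \{p\}\, S\, \{q\}
        \quad\iff\quad
        \forall \sigma, \sigma':\;
        \bigl(\mathcal{A},\sigma \models p \;\wedge\; S \text{ started in } \sigma \text{ terminates in } \sigma'\bigr)
        \;\Rightarrow\; \mathcal{A},\sigma' \models q.
        \]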

    Some general incompleteness results for partial correctness logics

    It is known that incompleteness of Hoare's logic relative to certain data type specifications can occur due to the ability of partial correctness assertions to code unsolvable problems; cf. Andréka, Németi, and Sain (1979, Lecture Notes in Computer Science Vol. 74, pp. 208–218, Springer-Verlag, New York/Berlin) and Bergstra and Tucker (1982, Theoret. Comput. Sci. 17, 303–315). We improve what we think are the main known theorems of this kind, showing that they depend only on very weak assumptions on the data type specification (ensuring the ability to simulate arbitrarily long finite initial segments of the natural numbers with successor), and pointing out that the recursion-theoretic strength of the obtained results can be increased.
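
    As a rough illustration of the kind of weak assumption involved (our paraphrase, not the paper's exact hypothesis): it is enough that the specification provides a zero-like constant 0 and a successor-like operation s that can simulate arbitrarily long finite initial segments of the natural numbers, i.e. for every k there is a model in which the elements

        \[
        a_0 = 0, \qquad a_{i+1} = s(a_i) \quad (0 \le i < k)
        \]

    are pairwise distinct, so that \{a_0, \dots, a_k\} behaves like \{0, 1, \dots, k\}. Segments of this kind suffice to encode machine computations, which is what lets partial correctness assertions code unsolvable problems.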

    Program Analysis in A Combined Abstract Domain

    Automated verification of heap-manipulating programs is a challenging task due to the aliasing and mutability of the data structures these programs use. The properties of many important data structures, such as sorted lists, priority queues, and height-balanced trees, relate not to one domain but to a combination of several, and the safety, and sometimes the efficiency, of a program relies on those properties. This thesis focuses on developing a verification system for both the functional correctness and the memory safety of such heap-based programs. Two automated inference mechanisms for heap-manipulating programs are presented. First, an approach based on abstract interpretation is proposed to synthesise program invariants in a combined pure and shape domain, with newly designed abstraction, join, and widening operators for the combined domain. Second, a compositional analysis is described that discovers both pre- and post-conditions of programs with a bi-abduction technique in the combined domain. Both inference approaches have been implemented, and the results obtained validate their feasibility and precision. The outcome of the thesis confirms that it is possible and practical to analyse heap-manipulating programs automatically and precisely by abstract interpretation in a sophisticated combined domain.
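
    As a much-simplified illustration of what a combined domain can look like (this sketch is ours, not the thesis's system; names such as sortedll and Interval are invented), the abstract state below pairs a set of shape predicates with interval constraints on pure values, and the pair is given join and widening operators:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float   # may be float("-inf")
            hi: float   # may be float("inf")

            def join(self, other):
                # least upper bound: the smallest interval containing both
                return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

            def widen(self, other):
                # classic interval widening: any unstable bound jumps to
                # infinity, forcing the fixed-point iteration to terminate
                return Interval(self.lo if other.lo >= self.lo else float("-inf"),
                                self.hi if other.hi <= self.hi else float("inf"))

        @dataclass(frozen=True)
        class AbsState:
            shape: frozenset   # shape predicates, e.g. {"sortedll(x, n)"}
            pure: tuple        # ((var, Interval), ...) pure constraints

            def join(self, other):
                # disjunction approximated by predicate union on the shape
                # part and pointwise interval join on the pure part
                mine, theirs = dict(self.pure), dict(other.pure)
                joined = tuple((v, mine[v].join(theirs[v]))
                               for v in sorted(mine.keys() & theirs.keys()))
                return AbsState(self.shape | other.shape, joined)

        # two branches of a loop body: a sorted list of length 1, or length 2
        s1 = AbsState(frozenset({"sortedll(x, n)"}), (("n", Interval(1, 1)),))
        s2 = AbsState(frozenset({"sortedll(x, n)"}), (("n", Interval(2, 2)),))
        print(s1.join(s2))                           # sortedll(x, n), n in [1, 2]
        print(Interval(0, 1).widen(Interval(0, 2)))  # Interval(lo=0, hi=inf)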

    Ten years of Hoare's logic: a survey, part I


    Verification of Pointer-Based Programs with Partial Information

    The proliferation of software across all aspects of people's lives means that software failure can bring catastrophic results. It is therefore highly desirable to develop software that is verified to meet its expected specification. This has been identified as a key objective in one of the UK Grand Challenges (GC6) (Jones et al., 2006; Woodcock, 2006). However, many difficult problems remain, partially due to the wide use of (recursive) shared mutable data structures, which are hard to track statically in a precise and concise way. This thesis aims at building a verification system for both the memory safety and the functional correctness of programs manipulating pointer-based data structures that can deal with two scenarios in which only partial information about the program is available: the verifier may be supplied with only a partial program specification, or with a full specification but only part of the program code.

    For the first scenario, previous state-of-the-art works (Nguyen et al., 2007; Chin et al., 2007; Nguyen and Chin, 2008; Chin et al., 2010) generally require users to provide full specifications for each method of the program to be verified. This demands considerable intellectual effort from users, who are moreover liable to make mistakes when writing such specifications. This thesis proposes a new approach to program verification that allows users to provide only a partial specification for each method; the approach then refines the given annotation into a more complete specification by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties that can later be confirmed or revised by users. We further augment the approach by requiring partial specifications only for the primary methods of a program: specifications for loops and auxiliary methods are then systematically discovered by the augmented mechanism, with the help of information propagated from the primary methods. This work aims at verifying beyond shape properties, with the eventual goal of analysing both memory safety and functional properties of pointer-based data structures. Initial experiments confirm that we can automatically refine partial specifications with non-trivial constraints, making it easier for users to handle specifications with richer properties.

    For the second scenario, many programs contain invocations of unknown components, so only part of the program code is available to the verifier, whereas previous works generally require the whole of the program code to be present. We therefore target the verification of memory safety and functional correctness of programs manipulating pointer-based data structures whose code is only partially available due to invocations of unknown components. Provided with a Hoare-style specification {Pre} prog {Post}, where the program prog contains calls to some unknown procedure unknown, we infer a specification mspec_u for unknown from the calling contexts, such that the problem of verifying prog can be safely reduced to the problem of proving that unknown (once its code is available) meets the derived specification mspec_u. The expected specification mspec_u is automatically calculated using an abduction-based shape analysis specifically designed for a combined abstract domain. We have implemented a system to validate the viability of our approach, with encouraging experimental results.
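
    To make the reduction concrete, here is a toy sketch of one ingredient of such an inference, with plain backwards substitution standing in for the thesis's abduction-based shape analysis (which works over a far richer combined domain): the condition the unknown call must establish is obtained by pushing Post backwards through the statements that follow the call.

        import re

        def wp_assign(var: str, expr: str, post: str) -> str:
            """wp(var := expr, post) = post[expr / var], syntactic substitution."""
            return re.sub(rf"\b{var}\b", f"({expr})", post)

        def post_required_of_unknown(trailing, post):
            """Push Post backwards through the assignments after the call."""
            for var, expr in reversed(trailing):
                post = wp_assign(var, expr, post)
            return post

        # {Pre} unknown(); x := x + 1; r := x * 2 {r > 10}
        # the unknown call must establish ((x + 1) * 2) > 10, i.e. x > 4
        print(post_required_of_unknown([("x", "x + 1"), ("r", "x * 2")], "r > 10"))

    Dually, the precondition granted to the unknown procedure would be derived from the state computed forwards from Pre to the call site.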

    The use of proof plans in tactic synthesis

    We undertake a programme of tactic synthesis. We first formalize the notion of a tactic as a rewrite rule, then give a correctness criterion for this by means of a reflection mechanism in the constructive type theory OYSTER. We further formalize the notion of a tactic specification, given as a synthesis goal and a decidability goal. We use a proof planner, CLAM, to guide the search for inductive proofs of these, and are able to successfully synthesize several tactics in this fashion. This involves two extensions to existing methods: context-sensitive rewriting and higher-order wave rules. Further, we show that from a proof of the decidability goal one may compile to a Prolog program a pseudo-tactic which may be run to efficiently simulate the input/output behaviour of the synthetic tactic.
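
    As a toy illustration of "a tactic as a rewrite rule", in Python rather than inside the OYSTER type theory, and with an invented term encoding, the rule plus(0, X) -> X can be applied as a pass over a term tree:

        def rewrite_plus_zero(term):
            """One pass of the rule  plus(0, X) -> X  over a term tree.
            A redex at the root is contracted without descending into X,
            so repeated passes may be needed to normalize."""
            if isinstance(term, tuple):
                if term[0] == "plus" and term[1] == 0:
                    return term[2]                    # plus(0, X) -> X
                return tuple(rewrite_plus_zero(t) for t in term)
            return term

        t = ("plus", 0, ("plus", 0, ("s", 0)))          # plus(0, plus(0, s(0)))
        print(rewrite_plus_zero(t))                     # ('plus', 0, ('s', 0))
        print(rewrite_plus_zero(rewrite_plus_zero(t)))  # ('s', 0)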

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior, the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the proportion of code elements that are executed by a test suite. The code elements not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure-detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and to detect infeasible elements. ARC also includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites achieving high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We also evaluated ARC-B on industry-relevant software and exposed previously unknown failures in a safety-critical software component.
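
    To fix intuitions about coverage-driven test generation, the toy loop below pursues branch-coverage goals with random input search standing in for ARC's DSE and abstraction-refinement machinery; the program under test and its branch labels are invented. Its weakness on the narrow condition y == x - 42 is exactly what motivates a symbolic treatment.

        import random

        branches_hit = set()

        def program_under_test(x: int, y: int) -> None:
            # an invented example program; branch labels b1..b4 are ours
            if x > 100:
                branches_hit.add("b1")
                if y == x - 42:          # narrow condition: hard to hit randomly
                    branches_hit.add("b3")
                else:
                    branches_hit.add("b4")
            else:
                branches_hit.add("b2")

        goals = {"b1", "b2", "b3", "b4"}
        suite, trials = [], 0
        while branches_hit != goals and trials < 200_000:
            trials += 1
            x, y = random.randint(-1000, 1000), random.randint(-1000, 1000)
            before = set(branches_hit)
            program_under_test(x, y)
            if branches_hit != before:   # keep only tests that add coverage
                suite.append((x, y))

        print(f"covered {len(branches_hit)}/{len(goals)} branches "
              f"with {len(suite)} tests after {trials} trials")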

    Verifying temporal properties of systems with applications to petri nets

    This thesis provides a powerful general-purpose proof technique for the verification of systems, whether finite or infinite. It extends the idea of finite local model-checking, which was introduced by Stirling and Walker: rather than traversing the entire state space of a model, as is done for model-checking in the sense of Emerson, Clarke et al. (checking whether a (finite) model satisfies a formula), local model-checking asks whether a particular state satisfies a formula, and only explores the nearby states far enough to answer that question. The technique used was a tableau method, constructing a tableau according to the formula and the local structure of the model. This tableau technique is here generalized to the infinite case by considering sets of states, rather than single states; because the logic used, the propositional modal mu-calculus, separates simple modal and boolean connectives from powerful fixpoint operators (which make the logic more expressive than many other temporal logics), it is possible to give a relatively straightforward set of rules for constructing a tableau. Much of the subtlety is removed from the tableau itself and put into a relation on the state space defined by the tableau; the success of the tableau then depends on the well-foundedness of this relation. This development occupies the second and third chapters: the second considers the modal mu-calculus and explains its power, while the third develops the tableau technique itself. The generalized tableau technique is exhibited on Petri nets, and various standard notions from net theory are shown to play a part in the use of the technique on nets; in particular, the invariant calculus has a major role. The requirement for a finite presentation of tableaux for infinite systems raises the question of the expressive power of the mu-calculus. This is studied in some detail, and it is shown that on reasonably powerful models of computation, such as Petri nets, the mu-calculus can express properties that are not merely undecidable, but not even arithmetical. The concluding chapter discusses some of the many questions still to be answered, such as the incorporation of formal reasoning within the tableau system, and the power required of such reasoning.
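
    For orientation, the sketch below evaluates mu-calculus formulas by naive fixed-point iteration over sets of states of a finite transition system; this is the global semantics that the tableau technique localizes, and that the thesis extends to infinite-state systems such as Petri nets (which this toy cannot handle). The tuple encoding of formulas is invented for the sketch.

        def eval_mu(formula, states, trans, env=None):
            """Evaluate a mu-calculus formula to its set of satisfying states.
            formula is a nested tuple:
              ("prop", S)                    atomic: S = states where it holds
              ("and", f, g) | ("or", f, g)   boolean connectives
              ("dia", f)                     <>f : some successor satisfies f
              ("box", f)                     []f : every successor satisfies f
              ("var", Z) | ("mu", Z, f) | ("nu", Z, f)
            """
            env = env or {}
            tag = formula[0]
            if tag == "prop":
                return formula[1] & states
            if tag == "var":
                return env[formula[1]]
            if tag == "and":
                return (eval_mu(formula[1], states, trans, env)
                        & eval_mu(formula[2], states, trans, env))
            if tag == "or":
                return (eval_mu(formula[1], states, trans, env)
                        | eval_mu(formula[2], states, trans, env))
            if tag == "dia":
                f = eval_mu(formula[1], states, trans, env)
                return {s for s in states if trans.get(s, set()) & f}
            if tag == "box":
                f = eval_mu(formula[1], states, trans, env)
                return {s for s in states if trans.get(s, set()) <= f}
            # fixpoints: iterate from bottom (mu) or top (nu) until stable
            Z, body = formula[1], formula[2]
            current = set() if tag == "mu" else set(states)
            while True:
                nxt = eval_mu(body, states, trans, {**env, Z: current})
                if nxt == current:
                    return current
                current = nxt

        # "eventually p" = mu Z. p or <>Z, on the chain 0 -> 1 -> 2, p at 2
        states = {0, 1, 2}
        trans = {0: {1}, 1: {2}}
        phi = ("mu", "Z", ("or", ("prop", {2}), ("dia", ("var", "Z"))))
        print(eval_mu(phi, states, trans))   # -> {0, 1, 2}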

    Heap Abstractions for Static Analysis

    Heap data is potentially unbounded and seemingly arbitrary. As a consequence, unlike stack and static memory, heap memory cannot be abstracted directly in terms of a fixed set of source variable names appearing in the program being analysed. This makes it an interesting topic of study, and there is an abundance of literature employing heap abstractions. Although most studies address similar concerns, their formulations and formalisms often seem dissimilar and sometimes even unrelated; the insights gained in one description of heap abstraction may therefore not directly carry over to another. This survey is the result of our quest for a unifying theme in the existing descriptions of heap abstractions. In particular, our interest lies in the abstractions, not in the algorithms that construct them. In our search for a unifying theme, we view a heap abstraction as consisting of two features: a heap model to represent the heap memory, and a summarization technique for bounding the heap representation. We classify the models as storeless, store-based, and hybrid. We describe various summarization techniques based on k-limiting, allocation sites, patterns, variables, other generic instrumentation predicates, and higher-order logics. This approach allows us to compare the insights of a large number of seemingly dissimilar heap abstractions and also paves the way for creating new abstractions by mixing and matching models and summarization techniques. Comment: 49 pages, 20 figures.
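
    As a tiny example of one summarization technique from the survey's catalogue (our sketch; the node and field names are invented), allocation-site-based summarization merges all concrete heap nodes created at the same program point:

        def summarize_by_alloc_site(nodes, edges):
            """nodes: {concrete node id -> allocation site label};
            edges: set of (src, field, dst) points-to edges.
            Returns the quotient heap graph whose vertices are allocation sites."""
            abs_nodes = set(nodes.values())
            abs_edges = {(nodes[s], f, nodes[d]) for (s, f, d) in edges}
            return abs_nodes, abs_edges

        # a concrete 3-cell list whose cells were all allocated at site L1
        nodes = {"n1": "L1", "n2": "L1", "n3": "L1"}
        edges = {("n1", "next", "n2"), ("n2", "next", "n3")}
        print(summarize_by_alloc_site(nodes, edges))
        # ({'L1'}, {('L1', 'next', 'L1')}) -- lists of any length collapse to
        # one summary node with a self loop, bounding the heap representation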
