38 research outputs found

    On the organisation of program verification competitions

    In this paper, we discuss the challenges that have to be addressed when organising program verification competitions. Our focus is on competitions for verification systems where the participants both formalise an informally stated requirement and (typically) provide some guidance for the tool to show it. The paper draws its insights from our experiences with organising a program verification competition at FoVeOOS 2011. We discuss in particular the following aspects: challenge selection, on-site versus online organisation, team composition and judging. We conclude with a list of recommendations for future competition organisers.

    Synthesizing Certified Code

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. Given a high-level specification, our approach simultaneously generates code and all annotations required to certify the generated code. We describe a certification extension of AutoBayes, a synthesis tool for automatically generating data analysis programs. Based on built-in domain knowledge, proof annotations are added and used to generate proof obligations that are discharged by the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator- and memory-safety on a data-classification program. For this program, our approach was faster and more precise than PolySpace, a commercial static analysis tool.
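    As a rough, hypothetical sketch of what annotation-carrying synthesized code can look like (the function, its name and the run-time assertion are invented here; AutoBayes emits proof annotations to be discharged symbolically by a prover such as E-SETHEO, not executable checks):

```python
# Hypothetical sketch of annotation-carrying code: the loop invariant
# that certification needs is written down explicitly alongside the code.
# A certifier discharges it symbolically; here it is an executable
# assertion so the sketch can be run.

def classify(data, threshold):
    """Label each value as 1 (at or above threshold) or 0 (below)."""
    labels = [0] * len(data)
    i = 0
    while i < len(data):
        # Invariant backing the memory-safety obligation:
        # every access data[i] / labels[i] is in bounds.
        assert 0 <= i < len(data)
        labels[i] = 1 if data[i] >= threshold else 0
        i += 1
    return labels

print(classify([0.2, 0.9, 0.5], 0.5))  # [0, 1, 1]
```

    The point of the certification approach is that the invariant is generated together with the code, so the independent checker never has to reconstruct it.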

    Die Integration von Verifikation und Test in Übersetzungssysteme (Integrating Verification and Testing into Compilation Systems)

    In this thesis we present a compiler architecture that enables the compiler to check the correctness of the source code as part of the compilation process. Different methods, in particular formal testing and formal proof, can be used to establish correctness. A fully automated check is very expensive and often impossible, so it is not feasible to derive correctness automatically from the specification and the implementation alone. We therefore extend the programming language with (correctness) justifications that the user must insert into the source code ("literate justification"). Depending on the chosen method, the user must work out the justification in more or less detail. The introduction of justifications changes the compiler's task from deriving a correctness proof by itself to checking a correctness proof provided by the user. The check of a justification can be integrated into the compiler or delegated to an external tool. An external tool allows existing tools to be integrated, but the development of specialised tools is also possible. Using the example of a tactical theorem prover, we show that developing a specialised tool is not necessarily more expensive than adapting an existing one. For test execution during compilation, an interpreter must be available; executing untested code, however, poses security risks, and we discuss several ways of dealing with this problem. Semantically, the correctness of a compilation unit corresponds to the consistency of an algebraic specification. We study two proof methods: construction of a model, and establishing a correctness-preserving relation. The proof obligations arise initially from the chosen proof method; further obligations are introduced to ensure the correctness of composite (modular) programs. The architecture described in this thesis has been implemented as a prototype: the Opal system has been extended with language elements for specifications and (correctness) justifications. The thesis presents some short examples. The Opal/J prototype has been part of the Opal distribution since version 2.3e.

    An ontology for software component matching

    Matching is a central activity in the discovery and assembly of reusable software components. We investigate how ontology technologies can be utilised to support software component development. We use description logics, which underlie Semantic Web ontology languages such as OWL, to develop an ontology for matching requested and provided components. A link between modal logic and description logics will prove invaluable for the provision of reasoning support for component behaviour.
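    To make the matching idea concrete, here is a minimal sketch in description logic notation (the concepts and role names are invented for this example, not taken from the paper): a provided component matches a requested one when the reasoner can establish subsumption between their descriptions.

```latex
% Request: consumes a list, guarantees a sorted output.
\mathit{Req}  \equiv \forall \mathit{in}.\mathit{List} \sqcap \exists \mathit{out}.\mathit{Sorted}
% Offer: consumes a list, guarantees a sorted, duplicate-free output.
\mathit{Prov} \equiv \forall \mathit{in}.\mathit{List} \sqcap
                     \exists \mathit{out}.(\mathit{Sorted} \sqcap \mathit{DupFree})
% Since Sorted \sqcap DupFree \sqsubseteq Sorted, a DL reasoner derives
% Prov \sqsubseteq Req: the offer guarantees at least what was requested.
```

    Casting matching as subsumption is what lets off-the-shelf description logic reasoners decide it automatically.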

    VerifyThis 2012 - A program verification competition

    VerifyThis 2012 was a two-day verification competition that took place as part of the International Symposium on Formal Methods (FM 2012) on August 30-31, 2012 in Paris, France. It was the second installment in the VerifyThis series. After the competition, an open call solicited contributions related to the VerifyThis 2012 challenges and overall goals. As a result, seven papers were submitted and, after review and revision, included in this special issue. In this introduction to the special issue, we provide an overview of the VerifyThis competition series, an account of related activities in the area, and an overview of solutions submitted to the organizers both during and after the 2012 competition. We conclude with a summary of results and some remarks concerning future installments of VerifyThis.

    Automated Requirements Formalisation for Agile MDE

    Model-driven engineering (MDE) of software systems from precise specifications has become established as an important approach for rigorous software development. However, the use of MDE requires specialised skills and tools, which has limited its adoption. In this paper we describe techniques for automating the derivation of software specifications from requirements statements, in order to reduce the effort required in creating MDE specifications, and hence to improve the usability and agility of MDE. Natural language processing (NLP) and machine learning (ML) are used to recognise the required data and behaviour elements of systems from textual and graphical documents, and formal specification models of the systems are created. These specifications can then be used as the basis of manual software development, or as the starting point for automated software production using MDE.
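    As a deliberately tiny, hypothetical illustration of requirement recognition (the sentence pattern and all names are invented; the paper's pipeline uses NLP and ML models, not a single regular expression), one rule might map a requirements sentence to a candidate class with attributes:

```python
import re

# Toy rule (invented for illustration): map a sentence of the form
# "A <Entity> has <attr>, <attr> and <attr>." to a candidate data element.

def extract_entity(sentence):
    m = re.match(r"An? (\w+) has ([\w ,]+) and (\w+)\.", sentence)
    if not m:
        return None
    entity = m.group(1).capitalize()
    attrs = [a.strip() for a in m.group(2).split(",")] + [m.group(3)]
    return entity, attrs

print(extract_entity("A Customer has name, address and balance."))
# ('Customer', ['name', 'address', 'balance'])
```

    A real recogniser generalises far beyond one fixed pattern, which is exactly why the paper turns to statistical NLP and ML.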

    LEX : a case study in development and validation of formal specifications

    The paper describes an experiment in the combined use of various tools for the development and validation of formal specifications. The first tool consists of a very abstract (non-executable) axiomatic specification language. The second tool consists of an (executable) constructive specification language. Finally, the third tool is a verification system. The first two tools were used to develop two specifications for the same case study, viz. a generic scanner similar to the tool LEX present in UNIX. Reflecting the nature of the tools, the first specification is abstract and non-executable, whereas the second specification is less abstract but executable. Thereupon the verification system was used to formally prove that the second specification is consistent with the first in that it describes the same problem. During this proof it appeared that both specifications contained conceptual errors (adequacy errors). It is argued that the combined use of tools similar to those employed in the experiment may substantially increase the quality of the software developed.
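    The interplay of the two specification styles can be miniaturised. In the sketch below (our own toy example, not the paper's LEX case study or its actual tools), the "axiomatic" side is a property that every lexeme sequence concatenates back to the input, and the "constructive" side is an executable tokenizer; consistency is then weakly checked by testing, where a verification system would prove it for all inputs:

```python
import re

# Toy analogue of the paper's setting (invented example, not the LEX study):
# a constructive, executable scanner and an axiomatic property it must obey.

def tokenize(text):
    """Constructive specification: split into words, numbers, whitespace."""
    return re.findall(r"[A-Za-z]+|[0-9]+|\s+", text)

def lossless(text):
    """Axiomatic specification: concatenating all lexemes restores the input."""
    return "".join(tokenize(text)) == text

# A formal proof would establish `lossless` for all inputs; testing it on
# samples can already expose adequacy errors of the kind the paper reports.
for sample in ["lex 42", "abc  007", ""]:
    assert lossless(sample)
print("property holds on all samples")
```

    Note that an input containing, say, punctuation would falsify `lossless` as written, which is precisely the kind of adequacy gap a consistency proof forces into the open.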

    Integrating processes in temporal logic

    In this paper we propose a technique to integrate process models in classical structures for quantified temporal (modal) logic. The idea is that in a temporal logic processes are ordinary syntactical objects with a specific semantical representation. We thus aim at a 'temporal logic of processes' that adequately describes, in a single frame, aspects of systems dealing with data structures, reactive and time-critical behaviour, environmental influences, and their interaction. The structural information of processes can then be captured and exploited to guide proofs. As an instance of this scheme we present a quantified, metric, linear temporal logic containing processes and conjunctions of processes explicitly. Like a predicate, a process can be regarded as a special kind of atomic formula with its own intension, a family of sets collecting the observable behaviour as 'runs'. A run is comparable to a Hoare trace or a timed observational sequence: it is a sequence of sequences of values taken from a set of objects. Each single value can be regarded as a snapshot of an observable feature at a moment in time, e.g. a value transmitted through a channel. Such a set has to respect the structure of the underlying temporal logic, but not one-to-one: we do not require that for a path in the time structure there is exactly one possible run. Since each run has a certain length, the view of a run is in particular associated with a time interval. The difference between moments and intervals of time is expressed by several kinds of modal operators, each of them with restrictions in the shape of annotated equations and predicates to determine the relevant time slices. We describe the syntax and semantics of this logic, with a focus on the process part. Finally we sketch a calculus and give some examples.
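    The notion of a run as a "sequence of sequences" can be pictured concretely. As a toy example (our notation, not necessarily the paper's):

```latex
% A run r observed over an interval of three moments on one channel:
% value 5 at the first moment, nothing at the second, values 3 then 7
% at the third. Its length, 3, fixes the time interval it belongs to.
r \;=\; \bigl\langle\, \langle 5 \rangle,\ \langle\,\rangle,\ \langle 3, 7 \rangle \,\bigr\rangle
```

    Each inner sequence is the snapshot of the observable feature at one moment, matching the abstract's channel-transmission reading.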