
    Explaining Counterexamples with Giant-Step Assertion Checking

    Identifying the cause of a proof failure during deductive verification of programs is hard: it may be due to an incorrectness in the program, an incompleteness in the program annotations, or an incompleteness of the prover. The changes needed to resolve a proof failure depend on its category, but the prover cannot provide any help with the categorisation. When an SMT solver is used to discharge a proof obligation, it can propose a model from a failed attempt, from which a possible counterexample can be derived. But the counterexample may be invalid, in which case it may add more confusion than help. To check the validity of a counterexample and to categorise the proof failure, we propose comparing two runtime assertion-checking (RAC) executions under different semantics, using the counterexample as an oracle. The first RAC execution follows the normal program semantics, and a violation of a program annotation indicates an incorrectness in the program. The second RAC execution follows a novel "giant-step" semantics that does not execute loops or function calls but instead retrieves return values and values of modified variables from the oracle. A violation of the program annotations observed only under giant-step execution characterises an incompleteness of the program annotations. We implemented this approach in the Why3 platform for deductive program verification and evaluated it using examples from prior literature.
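
    The categorisation idea can be pictured with a small, hedged sketch: run the annotated program once under normal semantics and once under a "giant-step" semantics that takes the effect of the loop from the prover's candidate model, then compare the two outcomes. The toy program, the oracle field r_after_loop and the categorise helper below are hypothetical illustrations in Python, not the Why3 implementation.

        # Minimal sketch (not the Why3 implementation): categorising a proof failure
        # by comparing normal RAC with a "giant-step" RAC that reads the loop's
        # effect from the prover's candidate counterexample (the oracle).

        def isqrt_program(n, oracle=None):
            """Toy program with a loop; checks the annotation r*r <= n < (r+1)*(r+1)."""
            r = 0
            if oracle is None:                      # normal semantics: execute the loop
                while (r + 1) * (r + 1) <= n:
                    r += 1
            else:                                   # giant-step semantics: take the loop's
                r = oracle["r_after_loop"]          # final state from the oracle instead
            # the program annotation (postcondition) under check
            return r * r <= n < (r + 1) * (r + 1)

        def categorise(n, oracle):
            normal_ok = isqrt_program(n)
            giant_ok = isqrt_program(n, oracle)
            if not normal_ok:
                return "annotation violated under normal execution: incorrectness in the program"
            if not giant_ok:
                return "violation only under giant-step execution: incomplete annotations (e.g. a weak loop invariant)"
            return "counterexample not reproduced: possibly a spurious model (prover incompleteness)"

        # A model proposing r_after_loop = 0 for n = 10 violates the postcondition only in the
        # giant-step run, pointing to an incomplete loop annotation rather than wrong code.
        print(categorise(10, {"r_after_loop": 0}))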

    Static Verification of Cloud Applications with Why3

    Nowadays, large-scale distributed applications rely on replication in order to improve their services. Having data replicated in multiple datacenters increases availability, but it might lead to concurrent updates that violate data integrity. A possible approach to solve this issue is to use strong consistency in the application, because this way there is a total order of operations in every replica. However, that would make the application give up its availability. An alternative is to use weak consistency to make the application more available, but that could break data integrity. To resolve this issue, many of these applications use a combination of weak and strong consistency models, such that synchronization is only introduced in the execution of operations that can break data integrity. To build applications that use multiple consistency models, developers have the difficult task of finding the right balance between two conflicting goals: minimizing synchronization while preserving data integrity. To achieve this balance, developers have to reason about the concurrent effects of each operation, which is a non-trivial task when it comes to large and complex applications. In this document we propose an approach consisting of a static analysis tool that helps developers find a balance between strong and weak consistency in applications that operate over weakly consistent databases. The verification process is based on a recently defined proof rule that was proven to be sound. The proposed tool uses Why3 as an intermediate framework that communicates with external provers to analyse the correctness of the application specification. Our contributions also include a predicate transformer and a library of verified data types that can be used to resolve commutativity issues in applications. The predicate transformer can be used to lighten the specification effort.
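
    The kind of reasoning such a tool automates can be sketched informally: operations whose effects commute on the replicated state are safe to run under weak consistency, while non-commuting pairs are candidates for synchronization or for a verified data type. The account operations below are hypothetical Python examples, and testing commutativity on a single state is only an illustration; the actual analysis proves the property for all states through Why3 and external provers.

        # Illustrative sketch (not the thesis tool): deciding which pairs of operations
        # need synchronisation by testing whether their effects commute on a replica state.

        import itertools

        def deposit(state, amount):
            return {**state, "balance": state["balance"] + amount}

        def withdraw(state, amount):
            # integrity invariant of interest: balance must stay non-negative
            return {**state, "balance": state["balance"] - amount}

        def set_limit(state, limit):
            return {**state, "limit": limit}

        OPS = {
            "deposit(10)":   lambda s: deposit(s, 10),
            "withdraw(5)":   lambda s: withdraw(s, 5),
            "set_limit(50)": lambda s: set_limit(s, 50),
            "set_limit(80)": lambda s: set_limit(s, 80),
        }

        def commute(f, g, state):
            return f(g(dict(state))) == g(f(dict(state)))

        initial = {"balance": 20, "limit": 100}
        for (na, fa), (nb, fb) in itertools.combinations(OPS.items(), 2):
            verdict = "commute (weak consistency may suffice)" if commute(fa, fb, initial) \
                      else "conflict (synchronise or use a verified data type)"
            print(f"{na} vs {nb}: {verdict}")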

    A Logic for Reasoning about Errors in Loops over Data Sequences (IFIL)

    Classic deductive verification is not focused on reasoning about program incorrectness. Reasoning about program incorrectness using formal methods is an important problem nowadays. Special logics such as Incorrectness Logic, Adversarial Logic, Local Completeness Logic, Exact Separation Logic and Outcome Logic have recently been proposed to address it. However, these logics have two disadvantages. The first is that they are based on under-approximation approaches, while classic deductive verification is based on the over-approximation approach; on the other hand, the use of the classic approach requires defining loop invariants in the general case. The second disadvantage is that the use of generalized inference rules from these logics results in having to prove overly complex formulas in simple cases. Our contribution is a new logic for solving these problems in the case of loops over data sequences. These loops are referred to as finite iterations. We call the proposed logic the Incorrectness Finite Iteration Logic (IFIL). We avoid defining invariants of finite iterations by symbolically replacing these loops with recursive functions. Our logic is based on special inference rules for finite iterations. These rules allow generating formulas with recursive functions corresponding to finite iterations. The validity of these formulas may indicate the presence of bugs in the finite iterations. This logic has been implemented in a new version of the C-lightVer system for deductive verification of C programs.
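
    The core idea, replacing a finite iteration by a recursive function and then asking whether a formula witnessing a bug holds, can be sketched as follows. The summation loop, its recursive counterpart and the witness search are hypothetical toy examples in Python, not the C-lightVer/IFIL machinery itself.

        # Toy illustration (not C-lightVer/IFIL): a finite iteration over a sequence is
        # replaced by a recursive function, which is then used to state an
        # incorrectness-style query whose witness indicates a bug.

        def sum_loop(xs):
            # the finite iteration as implemented (buggy: skips the first element)
            total = 0
            for i in range(1, len(xs)):
                total += xs[i]
            return total

        def sum_rec(xs):
            # recursive function that symbolically replaces the loop in the correctness conditions
            if not xs:
                return 0
            return xs[0] + sum_rec(xs[1:])   # intended meaning: sum of the whole sequence

        # Incorrectness-style query: is there an input on which the implemented iteration
        # differs from its intended recursive meaning?
        def witness_bug(candidates):
            for xs in candidates:
                if sum_loop(xs) != sum_rec(xs):
                    return xs
            return None

        print(witness_bug([[], [0], [1, 2, 3]]))   # [1, 2, 3] witnesses the off-by-one bug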

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior; the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the amount of code elements that are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure-detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach there is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites that achieve high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software, and exposed previously unknown failures in a safety-critical software component.
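
    A much-reduced sketch of the goal-oriented loop: track which branch outcomes remain uncovered, search for inputs that reach them, and hand the remaining goals over for refinement or infeasibility analysis. The program_under_test, the goal labels and the random input search below are hypothetical Python stand-ins; ARC-B itself directs the search with DSE and an abstract program model rather than random sampling.

        # Simplified sketch of goal-oriented test generation (not ARC-B): enumerate coverage
        # goals (branch outcomes), search for inputs covering each, and report the goals
        # left over for abstraction refinement or infeasibility detection.

        import random

        def program_under_test(x, y, trace):
            if x > 10:                      # goals b1 (true branch) / b2 (false branch)
                trace.add("b1")
                if y == x * 2:              # goals b3 / b4
                    trace.add("b3")
                else:
                    trace.add("b4")
            else:
                trace.add("b2")
            if x < 0 and x > 5:             # infeasible condition, goal b5
                trace.add("b5")

        GOALS = {"b1", "b2", "b3", "b4", "b5"}

        def generate_tests(budget=2000, seed=0):
            rng = random.Random(seed)
            suite, covered = [], set()
            for _ in range(budget):
                if covered >= GOALS:
                    break
                x, y = rng.randint(-50, 50), rng.randint(-100, 100)
                trace = set()
                program_under_test(x, y, trace)
                if trace - covered:          # keep only tests that reach a new goal
                    suite.append((x, y))
                    covered |= trace
            return suite, GOALS - covered

        suite, uncovered = generate_tests()
        print("test suite:", suite)
        print("uncovered goals (candidates for refinement or infeasibility proof):", uncovered)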

    An executable Theory of Multi-Agent Systems Refinement

    Complex applications such as incident management, social simulations, manufacturing applications, electronic auctions, e-institutions, and business-to-business applications are pervasive and important nowadays. Agent-oriented methodology is an advance in abstraction which can be used by software developers to naturally model and develop systems for such applications. In general, with respect to design methodologies, what is important to stress is that control structures should be added at later stages of design, in a natural top-down manner going from specifications to implementations, by refinement. Too much detail (be it for the sake of efficiency) in specifications often turns out to be harmful. To paraphrase D. E. Knuth, "Premature optimization is the root of all evil" (quoted in 'The Unix Programming Environment' by Kernighan and Pike, p. 91). The aim of this thesis is to adapt formal techniques to the agent-oriented methodology into an executable theory of refinement. The justification for doing so is to provide correct agent-based software by design. The underlying logical framework of the theory we propose is based on rewriting logic, thus the theory is executable in the same sense as rewriting logic is. The storyline is as follows. We first motivate and explain the constituent elements of the agent languages chosen to represent both abstract and concrete levels of design. We then propose a definition of refinement between agents written in such languages. This notion of refinement ensures that concrete agents are correct with respect to the abstract ones. The advantage of the definition is that it easily leads to formulating a proof technique for refinement via the classical notion of simulation. This makes it possible to effectively verify refinement by model checking. Additionally, we propose a weakest-precondition calculus as a deductive method based on assertions which allows proving the correctness of infinite-state agents. We generalise the refinement relation from single agents to multi-agent systems in order to ensure that concrete multi-agent systems refine their abstractions. We see multi-agent systems as collections of coordinated agents, and we consider coordination artefacts as being based either on actions or on normative rules. We integrate these two orthogonal coordination mechanisms within the same refinement theory, extended to a timed framework. Finally, we discuss implementation aspects.
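
    The simulation-based proof technique for refinement can be pictured with a tiny, hedged example: a candidate relation maps each concrete state to an abstract state, and refinement holds if every labelled step of the concrete agent can be matched by the abstract agent. The request/reply agents, their state names and the relation below are hypothetical Python stand-ins for the agent languages used in the thesis.

        # Minimal sketch (not the thesis framework): checking refinement of a concrete agent
        # against an abstract one via a simulation relation over labelled transition systems.

        # abstract agent: after a request it may either reply or fail
        ABSTRACT = {("idle", "request"): {"busy"},
                    ("busy", "reply"):   {"idle"},
                    ("busy", "fail"):    {"idle"}}
        # concrete agent: always replies, never fails (it resolves the abstract nondeterminism)
        CONCRETE = {("i", "request"): {"b"},
                    ("b", "reply"):   {"i"}}

        # candidate simulation relation: concrete state -> abstract state
        REL = {"i": "idle", "b": "busy"}

        def simulates(concrete, abstract, rel):
            for (c, label), succs in concrete.items():
                a = rel[c]
                for c2 in succs:
                    # the abstract agent must be able to match the step with the same label
                    if rel[c2] not in abstract.get((a, label), set()):
                        return False
            return True

        # True: every concrete step is matched by an abstract one, so the concrete
        # agent refines its abstraction (it only removes nondeterminism).
        print(simulates(CONCRETE, ABSTRACT, REL))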

    Static versus Dynamic Verification in Why3, Frama-C and SPARK 2014

    Why3 is an environment for static verification, generic in the sense that it is used as an intermediate tool by different front-ends for the verification of Java, C or Ada programs. Yet, the choices made when designing the specification languages provided by those front-ends differ significantly, in particular with respect to the executability of specifications. We review these differences and the issues that result from these choices. We emphasize the specific feature of ghost code, which turns out to be extremely useful for both static and dynamic verification. We also present techniques, combining static and dynamic features, that help users understand why static verification fails.

    Automated Formal Analysis of Temporal Properties of Ladder Programs

    Programmable Logic Controllers are industrial digital computers used as automation controllers in manufacturing processes. The Ladder language is a programming language used to develop software for such controllers. In this work, we consider the description of the expected behaviour of a Ladder program in the form of a timing chart, describing a scenario of execution. Our aim is to prove that the given Ladder program conforms to the expected temporal behaviour given by such a timing chart. Our approach amounts to translating the Ladder code, together with the timing chart, into a program for the Why3 environment for deductive program verification. The verification proceeds with the generation of verification conditions: mathematical formulas to be checked valid using automated theorem provers. The ultimate goal is twofold. On the one hand, by obtaining a complete proof, one verifies the conformity of the Ladder code with respect to the timing chart with a high degree of confidence. On the other hand, when the proof cannot be completed, one obtains a counterexample illustrating a possible execution scenario of the Ladder code which does not conform to the timing chart.
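
    The conformance question can be sketched concretely: a timing chart fixes, per scan cycle, the inputs and the expected outputs, and the program conforms if its scan function reproduces the chart; the first deviating cycle is a counterexample. The scan function standing in for a Ladder rung, the start/stop/motor signals and the chart encoding below are hypothetical Python illustrations; the actual work translates the Ladder code and the chart into Why3 and proves conformance rather than testing it.

        # Illustrative sketch (not the Ladder-to-Why3 translation): checking that a
        # scan-cycle function conforms to a timing chart, reporting the first cycle
        # at which it deviates as a counterexample.

        def scan(inputs, state):
            """Hypothetical ladder rung: the output follows 'start', latched until 'stop'."""
            latched = (state["latched"] or inputs["start"]) and not inputs["stop"]
            return {"latched": latched}, {"motor": latched}

        # timing chart: for each cycle, the inputs and the expected outputs
        TIMING_CHART = [
            ({"start": False, "stop": False}, {"motor": False}),
            ({"start": True,  "stop": False}, {"motor": True}),
            ({"start": False, "stop": False}, {"motor": True}),   # stays latched
            ({"start": False, "stop": True},  {"motor": False}),
            ({"start": False, "stop": False}, {"motor": False}),
        ]

        def conforms(chart):
            state = {"latched": False}
            for cycle, (inputs, expected) in enumerate(chart):
                state, outputs = scan(inputs, state)
                if outputs != expected:
                    return f"counterexample at cycle {cycle}: expected {expected}, got {outputs}"
            return "program conforms to the timing chart"

        print(conforms(TIMING_CHART))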

    Learning Program Specifications from Sample Runs

    With the science fiction of yore recently becoming reality in the form of self-driving cars, wearable computers and autonomous robots, software reliability is growing increasingly important. A critical prerequisite to ensure that the software controlling such systems is correct is the availability of precise specifications that describe a program's intended behaviors. Generating these specifications manually is a challenging, often unsuccessful, exercise; unfortunately, existing static analysis techniques often produce poor-quality specifications that are ineffective in aiding program verification tasks. In this dissertation, we present a recent line of work on automated synthesis of specifications that overcomes many of the deficiencies that plague existing specification inference methods. Our main contribution is a formulation of the problem as a sample-driven one, in which specifications, represented as terms in a decidable refinement type representation, are discovered from observing a program's sample runs, in terms of either program execution paths or input-output values, and automatically verified through the use of expressive refinement type systems. Our approach is realized as a series of inductive synthesis frameworks, which use various logic-based or classification-based learning algorithms to provide sound and precise machine-checked specifications. Experimental results indicate that the learning algorithms are both efficient and effective, capable of automatically producing sophisticated specifications in nontrivial hypothesis domains over a range of complex real-world programs, going well beyond the capabilities of existing solutions.
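
    The sample-driven idea can be sketched in miniature: enumerate candidate predicates drawn from a small hypothesis domain and keep those consistent with observed input/output samples; a verifier is then asked to confirm the surviving candidates. The abs_val program and the hypothesis domain below are hypothetical Python examples; the dissertation's frameworks learn refinement types and discharge them with a refinement type checker rather than merely checking samples.

        # Small sketch of sample-driven specification inference (not the dissertation's
        # frameworks): enumerate candidate postconditions over a fixed hypothesis domain
        # and keep those consistent with observed input/output samples.

        def abs_val(x):            # program whose specification we want to learn
            return -x if x < 0 else x

        SAMPLES = [(x, abs_val(x)) for x in [-7, -1, 0, 3, 12]]

        # hypothesis domain: simple predicates over input x and output y
        HYPOTHESES = {
            "y >= 0":               lambda x, y: y >= 0,
            "y >= x":               lambda x, y: y >= x,
            "y == x":               lambda x, y: y == x,
            "(y == x or y == -x)":  lambda x, y: y == x or y == -x,
            "y < x":                lambda x, y: y < x,
        }

        def learn(samples, hypotheses):
            return [name for name, pred in hypotheses.items()
                    if all(pred(x, y) for x, y in samples)]

        # candidate specification: conjunction of all hypotheses the samples support;
        # a refinement type checker would then be asked to verify it against the code.
        print(" and ".join(learn(SAMPLES, HYPOTHESES)))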

    The Past, Present, and Future(s): Verifying Temporal Software Properties

    Software systems are increasingly present in every aspect of our society, as their deployment can be witnessed from seemingly trivial applications such as light switches to critical control systems of nuclear facilities. In the context of critical systems, software faults and errors could potentially lead to detrimental consequences, so more rigorous methodologies beyond the scope of testing need to be applied to software systems. Formal verification, the concept of being able to mathematically prove the correctness of an algorithm with respect to a mathematical formal specification, can indeed help us prevent these failures. A popular specification language for these formal specifications is temporal logic, due to its intuitive yet precise expressions that can be used both to specify and to verify fundamental properties pertaining to software systems. Temporal logic can express properties pertaining to safety, liveness, termination, non-termination, and more, with regard to various systems such as Windows device drivers, kernel APIs, database servers, etc. This dissertation thus presents automated scalable techniques for verifying expressive temporal logic properties of software systems, specifically those beyond the scope of existing techniques. Furthermore, this work considers the temporal sub-logics fair-CTL, CTL*, and CTL*lp, as verifying these more expressive sub-logics has been an outstanding research problem. We begin building our framework by introducing a novel scalable and high-performance CTL verification technique. Our CTL methodology is unique relative to existing techniques in that it facilitates reasoning about more expressive temporal logics. In particular, it allows us to further introduce various methodologies that allow us to verify fair-CTL, CTL*, and CTL*lp. We support the verification of fair-CTL through a reduction to our CTL model checking technique via the use of infinite non-deterministic branching to symbolically partition fair from unfair executions. For CTL*, we propose a method that uses an internal encoding which facilitates reasoning about the subtle interplay between the nesting of path and state temporal operators that occurs within CTL* proofs. A precondition synthesis strategy is then used over a program transformation which trades nondeterminism in the transition relation for nondeterminism explicit in variables predicting future outcomes when necessary. Finally, we propose a linear-past extension to CTL*, namely CTL*lp, in which the past is linear and each moment in time has a unique past. We support this extension through the use of history variables over our CTL* technique. We demonstrate the fully automated implementation of our techniques, and report our benchmarks carried out on code fragments from the PostgreSQL database server, the Apache web server, and the Windows OS kernel, as well as smaller programs demonstrating the expressiveness of fair-CTL, CTL*, and CTL*lp specifications. Together, these novel methodologies lead to a new class of fully automated tools capable of proving crucial properties that no tool could previously prove in the infinite-state setting.
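
    For readers less familiar with branching-time logic, the meaning of a property such as AG(req -> EF grant) can be illustrated on a finite structure; the dissertation targets infinite-state programs, so the tiny explicit-state Kripke structure, its labelling and the fixpoint computations below are only a hypothetical Python illustration of CTL semantics, not the techniques described above.

        # Finite-state illustration of CTL semantics (a toy fixpoint-based model checker
        # over a small Kripke structure; the dissertation handles infinite-state programs).

        STATES = {0, 1, 2, 3}
        TRANS = {0: {1}, 1: {2, 3}, 2: {0}, 3: {3}}          # successor relation
        LABELS = {"req": {1}, "grant": {2}}                  # atomic propositions

        def EX(phi):                       # states with some successor satisfying phi
            return {s for s in STATES if TRANS[s] & phi}

        def EF(phi):                       # least fixpoint: phi reachable along some path
            result = set(phi)
            while True:
                new = result | EX(result)
                if new == result:
                    return result
                result = new

        def AG(phi):                       # greatest fixpoint: phi holds along all paths, always
            result = set(phi)
            while True:
                new = {s for s in result if TRANS[s] <= result}
                if new == result:
                    return result
                result = new

        # check AG(req -> EF grant): whenever a request occurs, a grant remains possible
        req, grant = LABELS["req"], LABELS["grant"]
        implies = (STATES - req) | EF(grant)
        print("AG(req -> EF grant) holds in state 0:", 0 in AG(implies))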