
    Automatic generation of simplified weakest preconditions for integrity constraint verification

    Given a constraint c assumed to hold on a database B and an update u to be performed on B, we address the following question: will c still hold after u is performed? When B is a relational database, we define a confluent terminating rewriting system which, starting from c and u, automatically derives a simplified weakest precondition wp(c,u) such that, whenever B satisfies wp(c,u), the updated database u(B) will satisfy c. Moreover, wp(c,u) is simplified in the sense that its computation depends only upon the instances of c that may be modified by the update. We then extend the definition of a simplified wp(c,u) to the case of deductive databases, and prove its correctness using fixpoint induction.
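    As a worked illustration of the idea (our example, not the paper's), take a constraint over relations p and q and an insertion update:

        c = ∀x (p(x) → q(x)),    u = insert p(a)
        wp(c,u) = q(a)

    Only the instance x = a of c can be affected by u, so checking q(a) on B before the update suffices to guarantee that u(B) satisfies c; all other instances of c are untouched and need not be re-verified.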

    Synthesis of OCL Pre-conditions for Graph Transformation Rules

    Proceedings of the Third International Conference on Model Transformation (ICMT 2010): Theory and Practice of Model Transformation, Málaga, Spain, 28 June - 2 July 2010. Graph transformation (GT) is being increasingly used in Model Driven Engineering (MDE) to describe in-place transformations like animations and refactorings. For its practical use, rules are often complemented with OCL application conditions. The advancement of rule post-conditions into pre-conditions is a well-known problem in GT, but current techniques do not consider OCL. In this paper we provide an approach to advance post-conditions with arbitrary OCL expressions into pre-conditions. This presents benefits for the practical use of GT in MDE, as it allows: (i) automatically deriving pre-conditions from the meta-model integrity constraints, ensuring rule correctness; (ii) deriving pre-conditions from graph constraints with OCL expressions; and (iii) checking the applicability of rule sequences with OCL conditions. Work funded by the Spanish Ministry of Science and Innovation through projects “Design and construction of a Conceptual Modeling Assistant” (TIN2008-00444/TIN - Grupo Consolidado) and “METEORIC” (TIN2008-02081), mobility grants JC2009-00015 and PR2009-0019, and the R&D program of the Community of Madrid (S2009/TIC-1650, project “e-Madrid”).
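    The essence of advancing a post-condition into a pre-condition can be sketched executably: a condition that must hold after applying a rule becomes one checkable before it, by evaluating the post-condition on the rule's effect. A minimal Python sketch over our own toy model representation (the paper instead rewrites OCL expressions syntactically):

        from copy import deepcopy

        def advance(post, rule):
            """Pre-condition that holds on a model iff `post` holds after `rule`."""
            return lambda model: post(rule(deepcopy(model)))

        # Toy model: classes mapped to their attribute dicts.
        model = {"Order": {"name": "Order"}, "Item": {"name": None}}

        def delete_item(m):                    # GT rule: delete the Item class
            m.pop("Item", None)
            return m

        def all_named(m):                      # meta-model invariant as post-condition
            return all(c["name"] is not None for c in m.values())

        pre = advance(all_named, delete_item)
        print(pre(model))                      # True: deleting Item restores the invariant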

    An integrated approach to high integrity software verification.

    Computer software is developed through software engineering. At its most precise, software engineering involves mathematical rigour in the form of formal methods. High integrity software is associated with safety-critical and security-critical applications, where failure would bring significant costs. The development of high integrity software is subject to stringent standards, prescribing best practices to increase quality. Typically, these standards strongly encourage or enforce the application of formal methods. The application of formal methods can entail a significant amount of mathematical reasoning, so the development of automated techniques is an active area of research. The trend is to deliver increased automation through two complementary approaches. Firstly, lightweight formal methods are adopted, sacrificing expressive power, breadth of coverage, or both in favour of tractability. Secondly, integrated solutions are sought, exploiting the strengths of different technologies to increase automation. The objective of this thesis is to support the production of high integrity software by automating an aspect of formal methods. To develop tractable techniques we focus on the niche activity of verifying exception freedom. To increase effectiveness, we integrate the complementary technologies of proof planning and program analysis. Our approach is investigated by enhancing the SPARK Approach, as developed by Altran Praxis Limited, and is implemented and evaluated as the SPADEase system. The key contributions of the thesis are summarised below:
    • Configurable and Sound: a configurable and justifiably sound approach to software verification.
    • Cooperative Integration: a demonstration that more targeted and effective automation can be achieved through the cooperative integration of distinct technologies.
    • Proof Discovery: proof plans that support the verification of exception freedom.
    • Invariant Discovery: invariant discovery heuristics that support the verification of exception freedom.
    • Implementation as SPADEase: an implementation of our approach as the SPADEase system.
    • Industrial Evaluation: an evaluation of SPADEase against both textbook and industrial subprograms.
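    Verifying exception freedom amounts to discharging, for every potentially failing operation, a check that it cannot raise a run-time exception (range violation, overflow, and so on). A minimal Python sketch of the flavour of invariant involved, using interval arithmetic (our illustration only; SPADEase discovers such invariants for SPARK Ada via proof planning and program analysis):

        INT_MIN, INT_MAX = -2**31, 2**31 - 1

        def add(a, b):
            """Interval addition; the assertion is the exception-freedom obligation."""
            lo, hi = a[0] + b[0], a[1] + b[1]
            assert INT_MIN <= lo and hi <= INT_MAX, "overflow check not discharged"
            return (lo, hi)

        x = (0, 100)       # assumed precondition: 0 <= x <= 100
        y = (0, 50)        # assumed precondition: 0 <= y <= 50
        print(add(x, y))   # (0, 150): the addition provably cannot overflow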

    Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware

    Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely. This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192 and FA8750-12-2-0110).
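    The core quantity is easy to state: if each unreliable operation succeeds independently with some probability, the reliability of a result is bounded by the product of those probabilities along the computation. A minimal Python sketch with a hypothetical hardware specification (the per-operation failure rates below are invented for illustration):

        HW = {"read": 1 - 1e-7, "mul": 1 - 1e-7, "add": 1 - 1e-7}  # hypothetical spec

        def reliability(trace):
            """Lower bound on the probability that every operation in `trace` succeeds."""
            r = 1.0
            for op in trace:
                r *= HW[op]
            return r

        # One step of a dot product on unreliable hardware, repeated 1000 times:
        trace = ["read", "read", "mul", "add"] * 1000
        print(reliability(trace) >= 0.999)   # meets a 0.999 reliability requirement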

    Doctor of Philosophy

    The trusted computing base (TCB) of a computer system comprises the components that must be trusted in order to support its security policy. Research communities have identified the well-known minimal TCB principle, namely, that the TCB of a system should be as small as possible, so that it can be thoroughly examined and verified. This dissertation is an experiment showing how small the TCB can be for an isolation service based on software fault isolation (SFI) for small multitasking embedded systems. The TCB achieved by this dissertation includes just the formal definitions of isolation properties, instruction semantics, program logic, and a proof assistant, besides hardware. There is no compiler, assembler, verifier, rewriter, or operating system in the TCB. To the best of my knowledge, this is the smallest TCB that has ever been shown for guaranteeing nontrivial properties of real binary programs on real hardware. This is accomplished by combining SFI techniques and high-confidence formal verification. An SFI implementation inserts dynamic checks before dangerous operations, and these checks provide the invariants needed by the formal verification to prove theorems about the isolation properties of ARM binary programs. The high-confidence assurance of the formal verification comes from two facts. First, the verification is based on an existing realistic semantics of the ARM ISA that was independently developed by Cambridge researchers. Second, the verification is conducted in a higher-order proof assistant, the HOL theorem prover, which mechanically checks every verification step by rigorous logic. In addition, the entire verification process, including both specification generation and verification, is automatic. To support proof automation, a novel program logic has been designed, and an automatic reasoning framework for verifying shallow safety properties has been developed. The program logic integrates Hoare-style reasoning and Floyd's inductive assertion reasoning in a small set of definitions, which overcomes shortcomings of Hoare logic and facilitates proof automation. All inference rules of the logic are proven based on the instruction semantics and the logic definitions. The framework leverages abstract interpretation to automatically find the function specifications required by the program logic. The results of the abstract interpretation are used to construct the function specifications automatically, and the specifications are proven without human interaction by utilizing intermediate theorems generated during the abstract interpretation. All of these pieces work in concert to create the very small TCB.
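    The dynamic checks that SFI inserts are deliberately simple, which is what makes them verifiable at the binary level. A minimal sketch of the classic address-masking check before a store, in Python with a hypothetical 64 KiB sandbox (the dissertation proves the ARM binary form of such checks in HOL):

        SANDBOX_BASE = 0x20000            # hypothetical sandbox placement
        SANDBOX_MASK = 0xFFFF             # 64 KiB sandbox size

        memory = bytearray(1 << 20)

        def sfi_store(addr, byte):
            """Mask the address into the sandbox before the dangerous store."""
            safe = SANDBOX_BASE | (addr & SANDBOX_MASK)
            memory[safe] = byte           # invariant: BASE <= safe < BASE + 0x10000
            return safe

        print(hex(sfi_store(0xDEADBEEF, 0x41)))   # 0x2beef, forced into the sandbox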

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior, the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the amount of code elements that are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure-detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites that achieve high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software, and exposed previously unknown failures in a safety-critical software component.
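    The driving loop of goal-oriented test generation can be sketched in a few lines: track which branches remain uncovered, and generate inputs aimed at them. The naive random generator below is only a stand-in for illustration; ARC instead directs DSE at each uncovered goal and uses abstraction refinement to prove goals infeasible:

        import random

        def program(x, y):                # toy program under test
            hit = set()
            hit.add("b1" if x > 10 else "b2")
            if x > 10 and y == x - 7:     # hard-to-reach branch
                hit.add("b3")
            return hit

        goals, covered, suite = {"b1", "b2", "b3"}, set(), []
        for _ in range(100_000):
            x, y = random.randint(-50, 50), random.randint(-50, 50)
            new = program(x, y) - covered # keep only tests that cover something new
            if new:
                suite.append((x, y))
                covered |= new
            if covered == goals:
                break
        print(covered, suite)             # a small suite covering the reached goals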

    Backwards reasoning for model transformations: Method and applications

    This is the author’s version of a work that was accepted for publication in the Journal of Systems and Software. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Systems and Software, vol. 116 (2016), DOI 10.1016/j.jss.2015.08.017. Model transformations are key elements of Model Driven Engineering. Current challenges for transformation languages include improving usability (i.e., succinct means to express the transformation intent) and devising powerful analysis methods. In this paper, we show how backwards reasoning helps in both respects. The reasoning is based on a method that, given an OCL expression and a transformation rule, calculates a constraint that is satisfiable before the rule application if and only if the original OCL expression is satisfiable afterwards. With this method we can improve the usability of the rule execution process by automatically deriving suitable application conditions for a rule (or rule sequence) to guarantee that applying that rule does not break any integrity constraint (e.g. meta-model constraints). When combined with model finders, this method facilitates the validation, verification, testing and diagnosis of transformations, and we show several applications for both in-place and exogenous transformations. Work partially funded by the Spanish Ministry of Economy and Competitiveness (projects TIN2008-00444, TIN2011-24139 and TIN2014-52129-R), the Community of Madrid with project SICOMORO (S2013/ICE-3006), the EU Commission with project MONDO (FP7-ICT-2013-10, #611125), and a research grant from UOC-IN3 (Internet Interdisciplinary Institute). We would like to thank Hamza Ed-Douibi for his work on the tool implementation part, and the reviewers for their useful comments.
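    As a hedged illustration (ours, not the paper's running example): take a rule r that deletes a line item l, and the integrity constraint

        c = ∀o ∈ Order: o.items ≠ ∅

    Backwards reasoning yields the constraint ∀o ∈ Order: o.items \ {l} ≠ ∅, which is satisfiable before applying r if and only if c is satisfiable afterwards. Used as an application condition, it blocks r exactly when l is the last item of some order, so applying the rule can never break the constraint.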
    • 

    corecore