142 research outputs found

    Lost in Abstraction: Monotonicity in Multi-Threaded Programs (Extended Technical Report)

    Monotonicity in concurrent systems stipulates that, in any global state, extant system actions remain executable when new processes are added to the state. This concept is not only natural and common in multi-threaded software, but also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety property verification even when the number of running threads is unknown. In this paper, we show that the act of obtaining finite-data thread abstractions for model checking can be at odds with monotonicity: predicate-abstracting certain widely used monotone software results in non-monotone multi-threaded Boolean programs; the monotonicity is lost in the abstraction. As a result, well-established sound and complete safety checking algorithms become inapplicable; in fact, safety checking turns out to be undecidable for the obtained class of unbounded-thread Boolean programs. We demonstrate how the abstract programs can be modified into monotone ones without affecting the safety properties of the non-monotone abstraction. This significantly improves on earlier approaches that enforce monotonicity via overapproximation.
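    The monotonicity notion above has a concrete reading in the standard counter representation of unbounded-thread Boolean programs. Below is a minimal, hypothetical Python sketch of that reading (not taken from the paper): an asynchronous transition rule stays executable when extra threads are added, while a rule guarded by a universal condition over all threads does not, which is the kind of non-monotonicity the paper addresses.

        from collections import Counter

        def leq(t, s):
            """Covering order: equal shared state, pointwise fewer-or-equal threads."""
            return t[0] == s[0] and all(s[1][l] >= n for l, n in t[1].items())

        def step_async(state):
            """Asynchronous rule: one thread moves from local state 0 to 1."""
            shared, cnt = state
            if cnt[0] == 0:
                return []
            new = cnt.copy(); new[0] -= 1; new[1] += 1
            return [(shared, +new)]            # +new drops zero counts

        def step_universal(state):
            """Rule with a universal guard: fires only if NO thread is in
            local state 1 -- adding threads can disable it."""
            shared, cnt = state
            return [] if cnt[1] > 0 else [(1, +cnt)]

        def monotone_on(step, t, s):
            """Is every successor of t covered by some successor of s (t <= s)?"""
            assert leq(t, s)
            return all(any(leq(tp, sp) for sp in step(s)) for tp in step(t))

        t = (0, Counter({0: 1}))               # one thread in local state 0
        s = (0, Counter({0: 1, 1: 1}))         # same, plus a thread in state 1
        print(monotone_on(step_async, t, s))     # True: extra thread never blocks it
        print(monotone_on(step_universal, t, s)) # False: monotonicity is lost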

    Automated incremental software verification

    Software continuously evolves to meet rapidly changing human needs. Each evolved transformation of a program is expected to preserve important correctness and security properties. Aiming to assure program correctness after a change, formal verification techniques, such as Software Model Checking, have recently benefited from fully automated solutions based on symbolic reasoning and abstraction. However, the majority of state-of-the-art model checkers are designed such that each new software version has to be verified from scratch. In this dissertation, we investigate new Formal Incremental Verification (FIV) techniques that aim at making software analysis more efficient by reusing invested efforts between verification runs. In order to show that FIV can be built on top of different verification techniques, we focus on three complementary approaches to automated formal verification. First, we contribute an FIV technique for SAT-based Bounded Model Checking developed to verify programs with (possibly recursive) functions with respect to a set of pre-defined assertions. We present a function-summarization framework based on Craig interpolation that allows extracting and reusing over-approximations of function behaviors. We introduce an algorithm to revalidate the summaries of one program locally in order to prevent re-verification of another program from scratch. Second, we contribute a technique for simulation relation synthesis for loop-free programs that do not necessarily contain assertions. We introduce an SMT-based abstraction-refinement algorithm that proceeds by guessing a relation and checking whether it is a simulation relation. We present a novel algorithm for discovering simulations symbolically, by means of solving ∀∃-formulas and extracting witnessing Skolem relations. Third, we contribute an FIV technique for SMT-based Unbounded Model Checking developed to verify programs with (possibly nested) loops. We present an algorithm that automatically derives simulations between programs with different loop structures. The automatically synthesized simulation relation is then used to migrate safe inductive invariants across evolution boundaries. Finally, we contribute the implementation and evaluation of all our algorithmic contributions, and confirm that state-of-the-art model checking tools can successfully be extended with FIV capabilities.
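    As an illustration of the summary-reuse idea, here is a minimal sketch, assuming the z3-solver Python bindings; the function, its summary, and all names are hypothetical and this is not the dissertation's implementation. An interpolation-style over-approximating summary survives a program change exactly when the new body cannot violate it, which is a single local SMT query.

        from z3 import Ints, And, Not, Solver, unsat

        x, r = Ints("x r")                 # function input and return value

        # Over-approximating summary previously extracted for f (e.g., via
        # Craig interpolation): it only promises the output is at least the input.
        summary = r >= x

        # Exact input/output relation of the NEW version's body: f(x) = x + 2.
        new_body = r == x + 2

        def summary_still_valid(body, summ):
            """The summary survives the change iff body AND not(summary) is unsat."""
            s = Solver()
            s.add(And(body, Not(summ)))
            return s.check() == unsat

        if summary_still_valid(new_body, summary):
            print("summary revalidated locally: callers need not be re-verified")
        else:
            print("summary broken: recompute it for the changed function")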

    Unbounded Scalable Hardware Verification

    Model checking is a formal verification method that has been successfully applied to real-world hardware and software designs. Model checking tools, however, encounter the so-called state-explosion problem, since the size of the state spaces of such designs is exponential in the number of their state elements. In this thesis, we address this problem by exploiting the power of two complementary approaches: (a) counterexample-guided abstraction and refinement (CEGAR) of the design's datapath; and (b) the recently introduced incremental induction algorithms for approximate reachability. These approaches are well-suited for the verification of control-centric properties in hardware designs consisting of wide datapaths and complex control logic. They also handle most complex design errors in typical hardware designs. Datapath abstraction prunes irrelevant bit-level details of datapath elements, thus greatly reducing the size of the state space that must be analyzed and allowing the verification to be focused on the control logic, where most errors originate. The induction-based approximate reachability algorithms offer the potential of significantly reducing the number of iterations needed to prove/disprove given properties by avoiding the implicit or explicit enumeration of reachable states. Our implementation of this verification framework, which we call the Averroes system, extends the approximate reachability algorithms at the bit level to first-order logic with equality and uninterpreted functions. To facilitate this extension, we formally define the solution space and state space of the abstract transition system produced by datapath abstraction. In addition, we develop an efficient way to represent sets of abstract solutions involving present- and next-states and a systematic way to project such solutions onto the space of just the present-state variables. To further increase the scalability of the Averroes verification system, we introduce the notion of structural abstraction, which extends datapath abstraction with two optimizations for better classification of state variables as either datapath or control, and with efficient memory abstraction techniques. We demonstrate the scalability of this approach by showing that Averroes significantly outperforms bit-level verification on a number of industrial benchmarks.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/133375/1/suholee_1.pd
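    The essence of datapath abstraction can be shown in a few lines. The sketch below, assuming the z3-solver package, illustrates the general EUF idea rather than the Averroes implementation: a bit-vector adder is replaced by an uninterpreted function over an uninterpreted sort, so width-independent control facts remain provable, while lost datapath facts such as commutativity become spurious behaviors that refinement must rule out.

        from z3 import DeclareSort, Function, Consts, Solver, Not

        Word = DeclareSort("Word")                 # datapath words, width abstracted away
        ADD = Function("ADD", Word, Word, Word)    # uninterpreted stand-in for bv-add

        a, b = Consts("a b", Word)

        # A width-independent control fact still goes through: ADD(a,b) equals itself.
        s = Solver()
        s.add(Not(ADD(a, b) == ADD(a, b)))
        print(s.check())    # unsat: provable in EUF, hence at the bit level

        # Commutativity of addition is lost by the abstraction: the query is sat,
        # i.e. a spurious behavior that CEGAR-style refinement would rule out.
        s2 = Solver()
        s2.add(Not(ADD(a, b) == ADD(b, a)))
        print(s2.check())   # sat: datapath abstraction over-approximates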

    IST Austria Thesis

    This dissertation concerns the automatic verification of probabilistic systems and of programs with arrays by statistical and logical methods. Although statistical and logical methods are different in nature, we show that they can be successfully combined for system analysis. In the first part of the dissertation we present a new statistical algorithm for the verification of probabilistic systems with respect to unbounded properties, including linear temporal logic. Our algorithm often performs faster than previous approaches, and at the same time requires less information about the system. In addition, our method can be generalized to unbounded quantitative properties such as mean-payoff bounds. In the second part, we introduce two techniques for comparing probabilistic systems. Probabilistic systems are typically compared using the notion of equivalence, which requires the systems to assign equal probabilities to all behaviors. However, this notion is often too strict, since probabilities are typically only empirically estimated, and any imprecision may break the relation between processes. On the one hand, we propose to replace the Boolean notion of equivalence by a quantitative distance of similarity. For this purpose, we introduce a statistical framework for estimating distances between Markov chains based on their simulation runs, and we investigate which distances can be approximated in our framework. On the other hand, we propose to compare systems with respect to a new qualitative logic, which expresses that behaviors occur with probability one or with positive probability. This qualitative analysis is robust with respect to modeling errors and applicable to many domains. In the last part, we present a new quantifier-free logic for integer arrays, which allows us to express counting. Counting properties are prevalent in array-manipulating programs; however, they cannot be expressed in the quantified fragments of the theory of arrays. We present a decision procedure for our logic, and provide several complexity results.
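    To give the flavor of the statistical side, here is a minimal sketch with a hypothetical chain and names: reachability is estimated purely from sample runs with a Hoeffding-style confidence bound, so only the ability to simulate the system is needed. Note this is plain bounded-horizon Monte Carlo, whereas the thesis's algorithm handles unbounded properties.

        import math, random

        CHAIN = {                      # state -> [(successor, probability), ...]
            "init": [("safe", 0.7), ("risk", 0.3)],
            "risk": [("target", 0.5), ("init", 0.5)],
            "safe": [("safe", 1.0)],   # absorbing
            "target": [("target", 1.0)],
        }

        def simulate(state="init", horizon=50):
            """One bounded run; returns True if 'target' is reached."""
            for _ in range(horizon):
                if state == "target":
                    return True
                r, acc = random.random(), 0.0
                for nxt, p in CHAIN[state]:
                    acc += p
                    if r < acc:
                        state = nxt
                        break
            return state == "target"

        n = 20_000
        hits = sum(simulate() for _ in range(n))
        eps = math.sqrt(math.log(2 / 0.01) / (2 * n))   # 99% Hoeffding half-width
        print(f"P(reach target) ~ {hits / n:.3f} +/- {eps:.3f}")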

    Counterexample Generation in Probabilistic Model Checking

    Providing evidence for the refutation of a property is an essential, if not the most important, feature of model checking. This paper considers algorithms for counterexample generation for probabilistic CTL formulae in discrete-time Markov chains. Finding the strongest evidence (i.e., the most probable path) violating a (bounded) until-formula is shown to be reducible to a single-source (hop-constrained) shortest path problem. Counterexamples of smallest size that deviate most from the required probability bound can be obtained by applying (small amendments to) k-shortest (hop-constrained) paths algorithms. These results can be extended to Markov chains with rewards and to LTL model checking, and are also useful for Markov decision processes. Experimental results show that the size of a counterexample is typically excessive. To obtain much more compact representations, we present a simple algorithm to generate (minimal) regular expressions that can act as counterexamples. The feasibility of our approach is illustrated by means of two communication protocols: leader election in an anonymous ring network and the Crowds protocol.
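    The reduction mentioned above is easy to make concrete. In this minimal sketch over a hypothetical DTMC (not the paper's implementation), the strongest evidence for reaching a target is found with Dijkstra's algorithm after mapping each transition probability p to the weight -log p, so minimizing the weight sum maximizes the path probability.

        import heapq, math

        DTMC = {                       # state -> {successor: probability}
            "s0": {"s1": 0.6, "s2": 0.4},
            "s1": {"bad": 0.1, "s0": 0.9},
            "s2": {"bad": 0.5, "s2": 0.5},
        }

        def strongest_evidence(src, target):
            """Return (probability, path) of the most probable path src ->* target."""
            pq = [(0.0, src, [src])]   # (accumulated -log prob, state, path)
            best = {}
            while pq:
                w, u, path = heapq.heappop(pq)
                if u == target:
                    return math.exp(-w), path
                if u in best and best[u] <= w:
                    continue
                best[u] = w
                for v, p in DTMC.get(u, {}).items():
                    heapq.heappush(pq, (w - math.log(p), v, path + [v]))
            return 0.0, []

        prob, path = strongest_evidence("s0", "bad")
        print(" -> ".join(path), f"with probability {prob:.3f}")   # s0 -> s2 -> bad, 0.200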

    Learning Program Specifications from Sample Runs

    With yesterday's science fiction now a reality in the form of self-driving cars, wearable computers, and autonomous robots, software reliability is increasingly important. A critical prerequisite for ensuring that the software controlling such systems is correct is the availability of precise specifications that describe a program's intended behaviors. Generating these specifications manually is a challenging, often unsuccessful, exercise; unfortunately, existing static analysis techniques often produce poor-quality specifications that are ineffective in aiding program verification tasks. In this dissertation, we present a recent line of work on automated synthesis of specifications that overcomes many of the deficiencies plaguing existing specification inference methods. Our main contribution is a formulation of the problem as a sample-driven one: specifications, represented as terms in a decidable refinement-type representation, are discovered by observing a program's sample runs, in terms of either execution paths or input-output values, and are automatically verified through the use of expressive refinement type systems. Our approach is realized as a series of inductive synthesis frameworks, which use various logic-based or classification-based learning algorithms to provide sound and precise machine-checked specifications. Experimental results indicate that the learning algorithms are both efficient and effective, capable of automatically producing sophisticated specifications in nontrivial hypothesis domains over a range of complex real-world programs, going well beyond the capabilities of existing solutions.
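    A toy rendering of the sample-driven formulation follows; all names are hypothetical, and the dissertation's frameworks use logic- and classification-based learners with refinement type checking, which this sketch does not implement. Candidate specifications drawn from a small hypothesis grammar are filtered against observed input-output samples, and the survivors would be handed to a verifier.

        def f(x):                      # hypothetical program under observation
            return x if x >= 0 else -x

        samples = [(x, f(x)) for x in (-7, -1, 0, 3, 12)]      # sample runs

        # Hypothesis domain: simple refinement predicates over input x, output r.
        candidates = {
            "r >= 0": lambda x, r: r >= 0,
            "r >= x": lambda x, r: r >= x,
            "r == x": lambda x, r: r == x,
            "r > x":  lambda x, r: r > x,
        }

        learned = [name for name, pred in candidates.items()
                   if all(pred(x, r) for x, r in samples)]
        print("specifications consistent with all sample runs:", learned)
        # A refinement type checker would then machine-check the survivors.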

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior; the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the number of code elements that are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites achieving high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software, and exposed previously unknown failures in a safety-critical software component.
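    The dynamic-symbolic-execution ingredient can be sketched briefly; this assumes the z3-solver package, the program and names are hypothetical, and it is not the ARC-B tool. Run the program on a concrete seed input, record the branch conditions as a path condition, then negate the last decision and solve for an input that drives execution down the uncovered branch.

        from z3 import Int, Solver, Not, sat

        x = Int("x")

        def path_condition(concrete_x):
            """Symbolic branch decisions of `if x > 10: if x < 20: BUG()` on a run."""
            pc = [x > 10 if concrete_x > 10 else Not(x > 10)]
            if concrete_x > 10:
                pc.append(x < 20 if concrete_x < 20 else Not(x < 20))
            return pc

        pc = path_condition(0)              # seed input 0 covers only the x <= 10 side
        flipped = pc[:-1] + [Not(pc[-1])]   # negate the last branch decision

        s = Solver()
        s.add(*flipped)
        if s.check() == sat:
            print("new input for the uncovered branch:", s.model()[x])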

    Formal Verification and Validation of ERTMS Industrial Railway Train Spacing System

    Formal verification and validation is a fundamental step in the certification of critical railway systems. Many railway safety standards (e.g., the CENELEC EN-50126, EN-50128 and EN-50129 standards, which implement the mandatory safety requirements of the IEC-61508 standard for Functional Safety) currently mandate the use of formal methods in the design to certify correctness. In this paper we describe an industrial application of formal methods for the verification and validation of “Logica di Sicurezza” (LDS), the safety logic of a railway ERTMS Level 2 system developed by Ansaldo-STS. LDS is generic control software that needs to be instantiated on a railway network configuration. We developed a methodology for the verification and validation of a critical subset of LDS deployed on typical realistic railway network configurations. To show feasibility, effectiveness and scalability, we experimented with several state-of-the-art symbolic software model checking techniques and tools on different network configurations. From the experiments, we successfully identified an effective strategy for the verification and validation of our case study. Moreover, the results of the experiments show that formal verification and validation is feasible and effective, and also scales reasonably well with the size of the configuration. Given these results, Ansaldo-STS is currently integrating the methodology into its internal Development and Verification & Validation Flow.
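    For a sense of the kind of query involved, here is a toy bounded model checking sketch; it is entirely hypothetical, far simpler than LDS, and assumes the z3-solver package. A two-train movement rule is unrolled for a fixed number of steps and a train-spacing safety property, namely that the trains never occupy the same track section, is checked symbolically.

        from z3 import Ints, Solver, And, Or, sat

        K, SECTIONS = 8, 5                       # unrolling depth, number of sections

        def pos(k):
            """Symbolic positions of trains A and B at step k."""
            return Ints(f"a{k} b{k}")

        s = Solver()
        a0, b0 = pos(0)
        s.add(a0 == 0, b0 == 2)                  # initial placement, A behind B

        for k in range(K):
            a, b = pos(k)
            a1, b1 = pos(k + 1)
            # Movement rule: a train may advance one section only if the section
            # ahead is not held by the other train; otherwise it keeps its position.
            s.add(Or(And(a + 1 != b, a1 == a + 1), a1 == a))
            s.add(Or(And(b + 1 != a, b + 1 < SECTIONS, b1 == b + 1), b1 == b))

        # Violation of train spacing: both trains in one section at some step <= K.
        s.add(Or(*[pos(k)[0] == pos(k)[1] for k in range(K + 1)]))

        print("collision reachable" if s.check() == sat else f"safe up to depth {K}")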