Exact Gap Computation for Code Coverage Metrics in ISO-C
Test generation and test data selection are difficult tasks in model-based
testing. Tests for a program can be merged into a test suite, and much
research has been devoted to quantifying and improving the quality of such
suites. Code coverage metrics estimate the quality of a test suite: the
quality is considered good if the code coverage value is high, ideally 100%.
Unfortunately, it might be impossible to achieve 100% code coverage, for
example because of dead code. There is a gap between the feasible and the
theoretical maximum code coverage value. Our review of the research indicates
that none of the current work is concerned with exact gap computation. This
paper presents a framework to compute such gaps exactly for an ISO-C
compatible semantics and similar languages, and describes an efficient
approximation of the gap in all other cases. Thus, a tester can decide whether
further tests are necessary, or even able, to achieve better coverage.
Comment: In Proceedings MBT 2012, arXiv:1202.582
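As a concrete illustration of such a gap (this example is ours, not taken
from the paper), consider a C function containing a dead branch:

    #include <limits.h>

    /* Four branch outcomes in total: two per `if`. */
    unsigned clamp(unsigned x) {
        if (x > 100)          /* both outcomes feasible       */
            x = 100;
        if (x > UINT_MAX)     /* always false: dead condition */
            x = 0;            /* unreachable dead code        */
        return x;
    }

No test suite can make the second condition true, so at most 3 of the 4
branch outcomes are coverable: the feasible maximum branch coverage is 75%,
and the exact gap to the theoretical 100% is 25%.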
SAT-based Explicit LTL Reasoning
We present here a new explicit reasoning framework for linear temporal logic
(LTL), which is built on top of propositional satisfiability (SAT) solving. As
a proof-of-concept of this framework, we describe a new LTL satisfiability
tool, Aalta_v2.0, which is built on top of the MiniSAT SAT solver. We test the
effectiveness of this approach by demonstrating that Aalta_v2.0 significantly
outperforms all existing LTL satisfiability solvers. Furthermore, we show that
the framework can be extended from propositional LTL to assertional LTL (where
we allow theory atoms), by replacing MiniSAT with the Z3 SMT solver, and
demonstrating that this can yield an exponential improvement in performance.
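To make the satisfiability question concrete (an illustration of ours, not an
example from the paper): the LTL formula $\varphi_1 = G(r \rightarrow F\,g)$
is satisfiable, since any trace that answers every request $r$ with a later
grant $g$ is a model, whereas $\varphi_2 = F\,p \wedge G\,\neg p$ is
unsatisfiable, because no trace can both eventually satisfy $p$ and globally
falsify it. In the assertional setting, atoms may be theory constraints over,
say, an integer variable $x$, as in
$\varphi_3 = G\,(x > 0) \wedge F\,(x < 0)$, whose unsatisfiability is detected
by delegating the theory atoms to a solver such as Z3.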
A Fully Verified Executable LTL Model Checker
We present an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The checker consists of over 4000 lines of ML code. The code is produced using recent Isabelle technology called the Refinement Framework, which allows us to split its correctness proof into (1) the proof of an abstract version of the checker, consisting of a few hundred lines of “formalized pseudocode”, and (2) a verified refinement step in which mathematical sets and other abstract structures are replaced by implementations of efficient structures like red-black trees and functional arrays. This leads to a checker that, while still slower than unverified checkers, can already be used as a trusted reference implementation against which advanced implementations can be tested. We report on the structure of the checker, the development process, and some experiments on standard benchmarks.
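The refinement step can be pictured schematically (our sketch, not the
paper's actual formalization): the abstract checker manipulates a
mathematical set $S$ of visited states with the operation
$S \mapsto S \cup \{v\}$, the executable checker stores a red-black tree $t$,
and the two are linked by an abstraction function $\alpha$ mapping trees to
the sets they represent. The refinement proof then discharges obligations
such as $\alpha(\mathit{rbt\_insert}(v, t)) = \alpha(t) \cup \{v\}$, so
correctness established once for the abstract version transfers to the
efficient implementation.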
Directed Technical Change with Capital-Embodied Technologies: Implications for Climate Policy
We develop a theoretical model of directed technical change in which clean (zero emissions) and dirty (emissions-intensive) technologies are embodied in long-lived capital. We show how obsolescence costs generated by technological embodiment create inertia in a transition to clean growth. Optimal policies involve higher and longer-lasting clean R&D subsidies than when technologies are disembodied. Emissions taxes start from a low level and are initially increased rapidly, ending up higher in the long run, and there is more warming. Introducing spillovers from an exogenous technological frontier representing non-energy-intensive technologies reduces mitigation costs. Optimal taxes and subsidies are lower and there is less warming.
String solving with word equations and transducers: towards a logic for analysing mutation XSS
We study the fundamental issue of decidability of satisfiability over string logics with concatenations and finite-state transducers as atomic operations. Although restricting to one type of operations yields decidability, little is known about the decidability of their combined theory, which is especially relevant when analysing security vulnerabilities of dynamic web pages in a more realistic browser model. On the one hand, word equations (string logic with concatenations) cannot precisely capture sanitisation functions (e.g. htmlescape) and implicit browser transductions (e.g. innerHTML mutations). On the other hand, transducers suffer from the reverse problem of being able to model sanitisation functions and browser transductions, but not string concatenations. Naively combining word equations and transducers easily leads to an undecidable logic. Our main contribution is to show that the "straight-line fragment" of the logic is decidable (complexity ranges from PSPACE to EXPSPACE). The fragment can express the program logics of straight-line string-manipulating programs with concatenations and transductions as atomic operations, which arise when performing bounded model checking or dynamic symbolic executions. We demonstrate that the logic can naturally express constraints required for analysing mutation XSS in web applications. Finally, the logic remains decidable in the presence of length, letter-counting, regular, indexOf, and disequality constraints.
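As a schematic example of the straight-line fragment (our sketch, not taken
from the paper): a program that sanitises an untrusted input $x$ with
htmlescape and embeds the result between constant markup strings is modelled
by the once-assigned constraints
$y = T_{\mathrm{esc}}(x) \wedge z = w_1 \cdot y \cdot w_2$, where
$T_{\mathrm{esc}}$ is a finite-state transducer for htmlescape and
$w_1, w_2$ are string constants. A mutation-XSS query then asks whether $z$,
after a further transducer modelling an implicit browser transduction such as
an innerHTML mutation, can fall into a regular language of dangerous
payloads; because every variable is defined exactly once, the constraint is
straight-line and hence decidable in the fragment.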