945 research outputs found
Formal Verification of Probabilistic SystemC Models with Statistical Model Checking
Transaction-level modeling with SystemC has been very successful in
describing the behavior of embedded systems by providing high-level executable
models, many of which have inherently probabilistic behavior, e.g., random
data and unreliable components. It is thus crucial to perform both
quantitative and qualitative analysis of the probabilities of system
properties. Such analysis can be conducted by constructing a formal model of
the system under verification and using Probabilistic Model Checking (PMC).
However, this method is infeasible for large systems due to state-space
explosion. In this article, we demonstrate the successful use of Statistical
Model Checking (SMC) to carry out such analysis directly on large SystemC
models and to allow designers to express a wide range of useful properties. The
first contribution of this work is a framework to verify properties expressed
in Bounded Linear Temporal Logic (BLTL) for SystemC models with both timed and
probabilistic characteristics. Second, the framework allows users to expose a
rich set of user-code primitives as atomic propositions in BLTL. Moreover,
users can define their own fine-grained time resolution rather than being
limited to clock-cycle boundaries in the SystemC simulation. The third contribution is
an implementation of a statistical model checker. It comprises automatic
monitor generation for producing execution traces of the
model under verification (MUV), a mechanism for automatically instrumenting
the MUV, and the interaction with statistical model checking algorithms.
(Published in Journal of Software: Evolution and Process, Wiley, 2017.)
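To make the sampling idea concrete, the following is a minimal Python sketch of the Monte Carlo estimation underlying SMC, with a Chernoff-Hoeffding bound on the number of simulations. Here `simulate_trace` and `check_bltl` are hypothetical stand-ins for the framework's generated monitor and instrumented model, not its actual API:

```python
import math

def required_samples(epsilon: float, delta: float) -> int:
    """Chernoff-Hoeffding bound: number of independent samples needed so the
    estimate falls within +/-epsilon of the true probability with
    confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def estimate_probability(simulate_trace, check_bltl, epsilon=0.01, delta=0.05):
    """Estimate P(model |= phi) by sampling independent executions.
    simulate_trace() runs one simulation and returns a finite trace
    (hypothetical hook); check_bltl(trace) is the monitor deciding the
    bounded property on that trace (hypothetical hook)."""
    n = required_samples(epsilon, delta)
    successes = sum(1 for _ in range(n) if check_bltl(simulate_trace()))
    return successes / n
```

In the framework described above, the monitor deciding the BLTL property and the instrumentation of the MUV are generated automatically rather than written by hand.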
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
We present work on a prototype tool based on the Java Pathfinder (JPF) model checker for automatically generating tests that satisfy the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF supports various tasks useful in automatic test generation, such as test-suite reduction and execution of the generated tests.
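As a rough illustration of the coverage criterion the tool targets (not JPF's implementation), this Python sketch enumerates the MC/DC independence pairs of a decision: for each condition, pairs of truth vectors that differ only in that condition yet flip the decision outcome.

```python
from itertools import product

def mcdc_pairs(decision, n_conditions):
    """For each condition, list the pairs of truth vectors that differ only
    in that condition and flip the decision outcome -- the independence
    pairs an MC/DC-adequate test suite must exercise."""
    pairs = {i: [] for i in range(n_conditions)}
    for v in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            if v[i]:                      # count each unordered pair once
                continue
            w = v[:i] + (True,) + v[i + 1:]
            if decision(*v) != decision(*w):
                pairs[i].append((v, w))
    return pairs

# Example: for (a and b) or c, condition c flips the outcome whenever
# a and b are not both true, e.g., (F, F, F) vs. (F, F, T).
pairs = mcdc_pairs(lambda a, b, c: (a and b) or c, 3)
```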
Augmenting American Fuzzy Lop to Increase the Speed of Bug Detection
Whitebox fuzz testing is a vital part of the software testing process in the software development life cycle (SDLC), used both for bug detection and for security vulnerability checking. However, current tools cannot detect all bugs or cover the entire code under test in a reasonable time. This study explores several whitebox fuzzing techniques and tools (AFL, SAGE, Driller, etc.) currently in use, followed by a discussion of their strategies and the challenges facing them. One of the most popular state-of-the-art fuzzers, American Fuzzy Lop (AFL), is discussed in detail, together with proposed modifications to reduce the time it requires when running under QEMU emulation mode. The study found that the AFL fuzzer can be sped up by injecting an intermediary layer of code into the Tiny Code Generator (TCG) that helps translate blocks between the two architectures used for testing. The modified version of AFL found, on average, 1.6 more crashes than the basic AFL running in QEMU mode. The study then recommends future research avenues in the form of hybrid techniques to resolve the challenges faced by state-of-the-art fuzzers and to create an optimal fuzzing tool. The motivation behind the study is to optimize the fuzzing process in order to reduce the time taken to perform software testing and to produce robust, error-free software products.
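As background for how AFL-style fuzzers use coverage feedback, here is a toy Python sketch of a coverage-guided loop (not AFL's implementation; `target` is a hypothetical harness returning coverage and crash information):

```python
import random

def fuzz(target, seeds, iterations=100_000):
    """Toy coverage-guided loop in the spirit of AFL: an input joins the
    corpus whenever it exercises new coverage; crashing inputs are reported.

    target(data) is assumed to return (coverage, crashed), where coverage is
    a set of covered edges -- a stand-in for the instrumentation feedback
    AFL collects (via the TCG-translated blocks under QEMU mode)."""
    corpus = [bytes(s) for s in seeds if s]   # assume non-empty seed inputs
    seen, crashes = set(), []
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        for _ in range(random.randint(1, 4)):          # havoc-style bit flips
            data[random.randrange(len(data))] ^= 1 << random.randrange(8)
        coverage, crashed = target(bytes(data))
        if crashed:
            crashes.append(bytes(data))
        elif coverage - seen:                          # new edges: keep input
            seen |= coverage
            corpus.append(bytes(data))
    return crashes
```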
Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code
Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide’s state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. We abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75% performance improvement, four kernels from IrfanView, leading to a 4.97× performance improvement, and one stencil from the miniGMG multigrid benchmark netting a 4.25× improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop’s filters with our lifted implementations, giving 1.12× speedup without affecting the user experience.
(Funding: United States Dept. of Energy Awards DE-SC0005288 and DE-SC0008923; DARPA Agreement FA8759-14-2-0009; MIT Energy Initiative Fellowship.)
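The final lifting step named in the abstract, solving linear equations over observed buffer accesses, can be sketched as a least-squares fit of an affine access function. This is a hypothetical illustration under the assumption of row-major affine accesses, not Helium's actual code:

```python
import numpy as np

def fit_affine_access(indices, addresses):
    """Fit address = c0 + c1*x + c2*y + ... to observed accesses: the kind
    of linear system a Helium-style lifter solves to turn concrete trace
    addresses into symbolic buffer-index expressions."""
    indices = np.asarray(indices, dtype=float)
    addresses = np.asarray(addresses, dtype=float)
    A = np.hstack([np.ones((len(indices), 1)), indices])  # columns [1, dims...]
    coeffs, *_ = np.linalg.lstsq(A, addresses, rcond=None)
    return coeffs  # base address followed by per-dimension strides

# Example: a row-major 2-D buffer at base 0x1000 with 4-byte pixels, width 640.
obs = [((y, x), 0x1000 + 4 * (640 * y + x)) for y in range(3) for x in range(3)]
coeffs = fit_affine_access([i for i, _ in obs], [a for _, a in obs])
# coeffs ~= [4096, 2560, 4]: base, row stride (640*4), element stride.
```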
Fault Localization in Multi-Threaded C Programs using Bounded Model Checking (extended version)
Software debugging is a very time-consuming process, and it is even worse for
multi-threaded programs, due to the non-deterministic behavior of
thread-scheduling algorithms. However, debugging time can be greatly
reduced if automatic methods are used to localize faults. In this study, a
new method for fault localization in multi-threaded C programs is proposed.
It transforms a multi-threaded program into a corresponding sequential one and
then uses a fault-diagnosis method suitable for sequential programs in order
to localize faults. The code transformation is implemented with rules and
context-switch information from counterexamples, which are typically generated
by bounded model checkers. Experimental results show that the proposed method
is effective, in that sequential fault-localization methods can be
extended to multi-threaded programs.
(Extended version of a paper published at SBESC'1.)
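The core idea of replaying a counterexample's context-switch schedule to obtain a sequential order can be sketched as follows. This is a hypothetical simplification; the paper's transformation rules rewrite the program itself rather than interleaving a statement list:

```python
def sequentialize(threads, schedule):
    """Replay a bounded-model-checker counterexample: interleave per-thread
    statement lists according to the context-switch schedule, yielding the
    single sequential order a sequential fault localizer can analyze.

    threads:  {tid: [stmt, ...]}  statements of each thread, in program order
    schedule: [(tid, n), ...]     run n statements of tid, then switch
    """
    pcs = {tid: 0 for tid in threads}          # per-thread program counters
    trace = []
    for tid, n in schedule:
        for _ in range(n):
            trace.append((tid, threads[tid][pcs[tid]]))
            pcs[tid] += 1
    return trace

# Example: a schedule from a counterexample -- thread 0 runs two statements,
# thread 1 runs one, then thread 0 finishes.
trace = sequentialize({0: ["x=1", "y=x", "assert y==1"], 1: ["x=2"]},
                      [(0, 2), (1, 1), (0, 1)])
```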
Using dynamic analysis of Java bytecode for evolutionary object-oriented unit testing
The focus of this paper is on presenting a methodology for
generating and optimizing test data by employing evolutionary search
techniques, based on the information provided by the analysis and
interpretation of Java bytecode and on the dynamic execution of the
instrumented test object.
The main reason to work at the bytecode level is that, even when the source
code is unavailable, structural testing requirements can still be derived and
used to assess the quality of a given test set and to guide the evolutionary
search towards specific test goals.
Java bytecode retains enough high-level information about the original source
code for an underlying program-representation model to be built. The
observations required to select or generate test data are obtained by
employing dynamic analysis techniques, i.e., by instrumenting, tracing and
analysing Java bytecode.
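As a rough illustration of the evolutionary search component (a hypothetical sketch, not the paper's tool; `coverage_of` stands in for the feedback obtained from the instrumented bytecode, and the genetic operators here are deliberately simple):

```python
import random

def evolve_tests(coverage_of, pop_size=50, gens=100, n_args=3, lo=-100, hi=100):
    """Toy evolutionary test-data search: fitness is the structural coverage
    achieved, where coverage_of(args) returns the set of covered branches
    (a stand-in for dynamic-analysis feedback from instrumented bytecode)."""
    population = [[random.randint(lo, hi) for _ in range(n_args)]
                  for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(population, key=lambda ind: len(coverage_of(ind)),
                        reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_args)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional mutation
                child[random.randrange(n_args)] = random.randint(lo, hi)
            children.append(child)
        population = parents + children
    return max(population, key=lambda ind: len(coverage_of(ind)))
```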
- …