
    Automatic vector generation guided by a functional metric

    Verification remains the bottleneck of the complex digital system design process. Formal techniques have advanced in their capacity to handle more complex descriptions, but they still suffer from memory or time explosion. Simulation-based techniques handle descriptions of any size or complexity, but their efficiency drops as system complexity grows because the number of simulation tests needed to maintain coverage increases exponentially. Semi-formal techniques combine the advantages of simulation and formal techniques, increasing the efficiency of simulation-based verification. In this area, several research works have introduced techniques that automate the generation of vectors driven by traditional coverage metrics; however, these techniques do not ensure the detection of 100% of faults. This paper presents a novel technique for vector generation. A major benefit of the technique is that it generates test benches more efficiently than techniques based on structural metrics. It is more efficient because it relies on a novel coverage metric that is more directly correlated with functional faults than structural coverage metrics (line, branch, etc.). The proposed metric is based on an abstraction of the system as a set of polynomials in which every system behavior is described by a set of coefficients. By assuming a finite precision of the coefficients and a maximum polynomial degree, all system behaviors, both correct and incorrect, can be modeled. The technique applies mathematical theories (computer algebra and number theory) to calculate coverage and to generate vectors that maximize it. Moreover, a tool implementing the technique has been developed; it takes a C-based system description as input and provides the coverage and the generated vectors as output.
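
    As an illustration only of the polynomial-abstraction idea (not the algorithm reported in the paper), the Java sketch below models a behaviour as a degree-bounded polynomial with finite-precision coefficients, measures coverage as the fraction of alternative coefficient sets that a vector set distinguishes from the reference behaviour, and adds vectors greedily while coverage improves. The class name, coefficient bounds and candidate pool are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the paper's algorithm): a behaviour is a degree-2
 * polynomial with integer coefficients in a small finite range. "Coverage" of
 * a test-vector set is the fraction of candidate (possibly faulty) coefficient
 * sets that at least one vector distinguishes from the reference behaviour.
 */
public class PolynomialCoverageSketch {

    static final int MAX_DEGREE = 2;   // assumed maximum polynomial degree
    static final int COEFF_RANGE = 3;  // assumed coefficients in [-3, 3]

    // Evaluate c[0] + c[1]*x + c[2]*x^2 for an input vector x.
    static long eval(int[] c, int x) {
        long acc = 0, pow = 1;
        for (int i = 0; i <= MAX_DEGREE; i++) { acc += (long) c[i] * pow; pow *= x; }
        return acc;
    }

    // Enumerate every coefficient set within the finite precision bound.
    static List<int[]> allBehaviours() {
        List<int[]> out = new ArrayList<>();
        for (int a = -COEFF_RANGE; a <= COEFF_RANGE; a++)
            for (int b = -COEFF_RANGE; b <= COEFF_RANGE; b++)
                for (int c = -COEFF_RANGE; c <= COEFF_RANGE; c++)
                    out.add(new int[]{a, b, c});
        return out;
    }

    // Coverage: fraction of non-reference behaviours distinguished by some vector.
    static double coverage(int[] reference, List<int[]> behaviours, List<Integer> vectors) {
        int distinguished = 0, total = 0;
        for (int[] cand : behaviours) {
            if (java.util.Arrays.equals(cand, reference)) continue;
            total++;
            for (int x : vectors)
                if (eval(cand, x) != eval(reference, x)) { distinguished++; break; }
        }
        return total == 0 ? 1.0 : (double) distinguished / total;
    }

    public static void main(String[] args) {
        int[] reference = {1, 2, 1};          // hypothetical correct behaviour
        List<int[]> behaviours = allBehaviours();
        List<Integer> vectors = new ArrayList<>();

        // Greedily keep vectors from a small candidate pool while coverage improves.
        double best = coverage(reference, behaviours, vectors);
        for (int x = -5; x <= 5; x++) {
            vectors.add(x);
            double c = coverage(reference, behaviours, vectors);
            if (c > best) best = c; else vectors.remove(vectors.size() - 1);
        }
        System.out.printf("vectors=%s coverage=%.3f%n", vectors, best);
    }
}
```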

    Testing reactive systems with data: enumerative methods and constraint solving

    Software faults are a well-known phenomenon. In most cases they are merely annoying – if a computer game does not work as expected – or expensive – if once again a space project fails due to some faulty data conversion. In critical systems, however, faults can have life-threatening consequences. It is the task of software quality assurance to avoid such faults, but this is a cumbersome, expensive and itself error-prone undertaking. For this reason, research has been done over the last years to automate this task as much as possible. In this thesis, the combination of constraint solving techniques with formal methods is investigated, with the goal of finding faults in the models and implementations of reactive systems with data, such as automatic teller machines (ATMs). To this end, we first develop a translation of formal specifications in the process algebra µCRL to a constraint logic program (CLP). In the course of this translation, we pay special attention to the fact that the CLP, together with the constraint solver, correctly simulates the underlying term rewriting system.

    One way to validate a system is to test whether it conforms to its specification. In this thesis, we develop a test process to automatically generate and execute test cases for the conformance test of data-oriented systems. The applicability of this process to process-oriented software systems is demonstrated in a case study with an ATM as the system under test; its applicability to document-centered applications is shown by means of the open source web browser Mozilla Firefox. The test process is partially based on the tool TGV, an enumerative test case generator which generates test cases from a system specification and a test purpose. An enumerative approach to the analysis of system specifications tries to enumerate all possible combinations of values for the system's data elements, i.e. the system's states. The states of the systems we regard here are influenced by data from possibly infinite domains. Hence, the state space of such systems grows beyond all limits – it explodes – and can no longer be handled by enumerative algorithms. For this reason, the state space is limited prior to test case generation by a data abstraction. We use a chaotic abstraction here, with all possible input data from the system's environment being replaced by a single constant. In parallel, we generate a CLP from the system specification; with this CLP, we reintroduce the actual data at the time of test execution. This approach not only limits the state space of the system, but also leads to a separation of system behavior and data, which allows test cases to be reused by varying only their data parameters.

    In the developed process, tests are executed by the tool BAiT, which was also created in the course of this thesis. Some systems do not always show identical behavior under the same circumstances; this phenomenon is known as nondeterminism. There are many reasons for nondeterminism; in most cases, input from a system's environment is processed asynchronously by several components of the system, which do not always terminate in the same order. BAiT works as follows: the tool chooses a trace through the system behavior from the set of traces in the generated test cases, parameterizes this trace with data and tries to execute it. When the nondeterministic system digresses from the selected trace, BAiT tries to adapt the trace appropriately. If this can be done according to the system specification, the test can be executed further and a possibly false positive test verdict is avoided.

    Testing an implementation significantly reduces the number of faults in a system. However, the system is only tested against its specification, and in many cases this specification already fails to completely fulfill the customer's expectations. In order to reduce the risk of faults further, the models of the system themselves also have to be verified. This happens during model checking, prior to testing the software. Again, the explosion of the state space of the system must be avoided by a suitable abstraction of the models. Model abstraction in the context of model checking gives rise to so-called false negatives: counterexamples which point out a fault in the abstracted model but do not exist in the concrete one. Usually, these false negatives are ignored. In this thesis, we also develop a methodology to reuse the knowledge of potential faults by abstracting such a counterexample further and deriving a violation pattern from it. Afterwards, we search for a concrete counterexample utilizing a constraint solver.
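
    Purely as a hedged illustration of the behaviour/data separation and of trace adaptation under nondeterminism (not BAiT or the µCRL-to-CLP translation itself), the sketch below drives a hypothetical ATM stub: the withdrawal amount stands in for a data value a constraint solver would pick, and when the stub nondeterministically answers with another specification-allowed response, the driver adapts instead of reporting a failure. All names and values are invented for the example.

```java
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch only: the test trace is kept abstract (behaviour
 * separated from data); concrete data is chosen at execution time under a
 * simple constraint, and the driver adapts when the nondeterministic SUT
 * takes another specification-allowed branch instead of failing the test.
 */
public class AdaptiveAtmTestSketch {

    // Hypothetical system under test: may nondeterministically refuse service.
    static class AtmSut {
        private final Random rnd = new Random(42);
        private int balance = 100;
        String withdraw(int amount) {
            if (rnd.nextBoolean()) return "OUT_OF_SERVICE"; // nondeterministic branch
            if (amount > balance)  return "REJECTED";
            balance -= amount;
            return "DISPENSED";
        }
    }

    public static void main(String[] args) {
        AtmSut sut = new AtmSut();

        // Abstract trace: withdraw(X) under the constraint 0 < X <= balance.
        // A constraint solver would choose X; here we simply take a witness.
        int amount = 40;

        // Responses the (hypothetical) specification allows for this stimulus.
        List<String> allowed = List.of("DISPENSED", "OUT_OF_SERVICE");

        String expected = "DISPENSED";
        String observed = sut.withdraw(amount);

        if (observed.equals(expected)) {
            System.out.println("PASS: expected branch taken");
        } else if (allowed.contains(observed)) {
            // The SUT digressed but stayed within the specification:
            // adapt the trace (here: retry) instead of reporting a false failure.
            System.out.println("ADAPT: SUT answered " + observed + ", retrying");
            System.out.println("retry -> " + sut.withdraw(amount));
        } else {
            System.out.println("FAIL: " + observed + " not allowed by the specification");
        }
    }
}
```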

    JWalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java's reflection capability, and partly through interaction with the user, constructing and saving test oracles as testing proceeds. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class's method protocols and may be directed to explore the space of algebraic constructions or the intended design state space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that is acquired incrementally.
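
    The following sketch is not JWalk, but a minimal illustration of two ingredients the abstract names: design introspection through Java's reflection API and bounded exhaustive exploration of method sequences. It enumerates the public zero-argument methods of a small hypothetical Counter class and replays every sequence up to a depth bound on a fresh instance, printing the observations a tester would confirm as oracle values.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not JWalk): discover a class's zero-argument methods
 * by reflection and exercise every method sequence up to a small depth bound,
 * recording the observed results.
 */
public class BoundedProtocolExplorer {

    // Hypothetical class under test.
    public static class Counter {
        private int value;
        public void inc()   { value++; }
        public void reset() { value = 0; }
        public int  get()   { return value; }
    }

    static List<Method> probeMethods(Class<?> cls) {
        List<Method> out = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods())
            if (m.getParameterCount() == 0) out.add(m);
        return out;
    }

    // Replay each prefix on a fresh instance, then extend it up to the depth bound.
    static void explore(Class<?> cls, List<Method> methods, List<String> prefix, int depth)
            throws Exception {
        if (!prefix.isEmpty()) {
            Object target = cls.getDeclaredConstructor().newInstance();
            Object last = null;
            for (String name : prefix) last = cls.getMethod(name).invoke(target);
            // Observation; in an interactive tool the tester would confirm it as an oracle.
            System.out.println(prefix + " -> " + last);
        }
        if (depth == 0) return;
        for (Method m : methods) {
            prefix.add(m.getName());
            explore(cls, methods, prefix, depth - 1);
            prefix.remove(prefix.size() - 1);
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> cls = Counter.class;
        explore(cls, probeMethods(cls), new ArrayList<>(), 2);
    }
}
```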

    SPDL Model Checking via Property-Driven State Space Generation

    In this report we describe how both the memory and the time requirements for stochastic model checking of SPDL (stochastic propositional dynamic logic) formulae can be significantly reduced. SPDL is the stochastic extension of the multi-modal program logic PDL. SPDL provides means to specify path-based properties with or without timing restrictions. Paths can be characterised by so-called programs, essentially regular expressions, whose executability can be made dependent on the validity of test formulae. For model checking SPDL path formulae it is necessary to build a product transition system (PTS) between the system model and the program automaton belonging to the path formula that is to be verified. In many cases, this PTS can be drastically reduced during the model checking procedure, as the program restricts the number of potentially satisfying paths. Therefore, we propose an approach that directly generates the reduced PTS from a given SPA specification and an SPDL path formula. The feasibility of this approach is shown through a selection of case studies, which exhibit enormous state space reductions at no increase in generation time.
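
    To make the product-construction idea concrete, here is a small, assumption-laden Java sketch (not the reported tool): a toy labelled transition system is composed on the fly with a two-step program automaton, and only product states reachable under the program are ever created, which is the sense in which the property drives the state space generation. The model, the automaton and the action names are invented for the example.

```java
import java.util.*;

/**
 * Minimal sketch of the product idea: the reachable product of a small
 * labelled transition system and a program automaton is generated on the fly,
 * so product states the program can never reach are simply never built.
 */
public class ProductTransitionSketch {

    record Edge(String src, String label, String dst) {}

    static List<Edge> successors(List<Edge> edges, String state) {
        List<Edge> out = new ArrayList<>();
        for (Edge e : edges) if (e.src().equals(state)) out.add(e);
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical system model: states s0..s2, actions a, b, c.
        List<Edge> system = List.of(
            new Edge("s0", "a", "s1"),
            new Edge("s1", "b", "s2"),
            new Edge("s1", "c", "s0"),
            new Edge("s2", "a", "s0"));

        // Program automaton for the regular expression "a ; b".
        List<Edge> program = List.of(
            new Edge("q0", "a", "q1"),
            new Edge("q1", "b", "q2"));

        // Breadth-first generation of the reachable product (s, q) states only.
        Deque<String[]> queue = new ArrayDeque<>();
        Set<String> seen = new LinkedHashSet<>();
        queue.add(new String[]{"s0", "q0"});
        seen.add("s0,q0");
        while (!queue.isEmpty()) {
            String[] cur = queue.poll();
            for (Edge se : successors(system, cur[0]))
                for (Edge pe : successors(program, cur[1]))
                    if (se.label().equals(pe.label())) {
                        String key = se.dst() + "," + pe.dst();
                        if (seen.add(key)) queue.add(new String[]{se.dst(), pe.dst()});
                    }
        }
        // Of the 3 x 3 possible pairs, only the states on "a;b" paths are built.
        System.out.println("reachable product states: " + seen);
    }
}
```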

    Exploiting Query Structure and Document Structure to Improve Document Retrieval Effectiveness

    In this paper we present a systematic analysis of document retrieval using unstructured and structured queries within the score region algebra (SRA) structured retrieval framework. The behavior of different retrieval models, namely Boolean, tf.idf, GPX, language models, and Okapi, is tested using the transparent SRA framework in our three-level structured retrieval system called TIJAH. The retrieval models are implemented along four elementary retrieval aspects: element and term selection, element score computation, score combination, and score propagation. The analysis is based on numerous experiments evaluated on TREC and CLEF collections, using manually generated unstructured and structured queries. Unstructured queries range from short title queries to long title + description + narrative queries. For generating structured queries, we exploit knowledge of the document structure and of the content used to semantically describe or classify documents. We show that such structured information can be utilized in retrieval engines to give more precise answers to user queries than when using unstructured queries.
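
    As a rough, hypothetical illustration of the four retrieval aspects listed above (not the TIJAH/SRA implementation), the Java sketch below computes tf.idf scores for query terms per leaf element, combines them into an element score, and propagates the scores to the parent element by summation. The element names, the toy collection and the propagation rule are assumptions made for the example.

```java
import java.util.*;

/**
 * Simplified sketch: tf.idf scores are computed per document element for the
 * query terms, combined into an element score, and propagated to the parent
 * element by summation.
 */
public class StructuredTfIdfSketch {

    public static void main(String[] args) {
        // Hypothetical elements of one structured document: sections -> paragraphs.
        Map<String, List<String>> elements = new LinkedHashMap<>();
        elements.put("sec1/p1", List.of("query", "structure", "retrieval", "retrieval"));
        elements.put("sec1/p2", List.of("document", "structure"));
        elements.put("sec2/p1", List.of("language", "model"));

        List<String> query = List.of("structure", "retrieval");

        // Document frequency of each query term over the leaf elements.
        Map<String, Integer> df = new HashMap<>();
        for (String term : query)
            for (List<String> words : elements.values())
                if (words.contains(term)) df.merge(term, 1, Integer::sum);

        int n = elements.size();
        Map<String, Double> leafScore = new LinkedHashMap<>();
        Map<String, Double> parentScore = new LinkedHashMap<>();

        for (Map.Entry<String, List<String>> e : elements.entrySet()) {
            double score = 0.0;
            for (String term : query) {
                long tf = e.getValue().stream().filter(term::equals).count();
                if (tf == 0) continue;
                double idf = Math.log((double) n / df.get(term));
                score += tf * idf;                          // element score computation
            }
            leafScore.put(e.getKey(), score);
            String parent = e.getKey().split("/")[0];
            parentScore.merge(parent, score, Double::sum);  // score propagation upward
        }
        System.out.println("leaf scores:   " + leafScore);
        System.out.println("parent scores: " + parentScore);
    }
}
```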