271 research outputs found

    Experiments towards model-based testing using Plan 9: Labelled transition file systems, stacking file systems, on-the-fly coverage measuring

    We report on experiments on Plan 9/Inferno to gain more experience with the file-system-as-tool-interface approach. We reimplemented functionality that we had earlier built on Unix, this time using Plan 9 file system interfaces. The application domain for these experiments was model-based testing.

    The idea we wanted to experiment with consists of building small, reusable pieces of functionality which are then composed to achieve the intended behaviour. In particular, we wanted to experiment with 'stacking' file servers (fs) on top of each other, where the upper fs acts as a 'filter' on the data and structure provided by the lower fs.

    For this experiment we designed a file system interface (ltsfs) that gives fine-grained access to a labelled transition system, and made two implementations of it. We developed a small fs that, when stacked on top of the ltsfs, extends it with additional files, and an application that uses the resulting file system.

    The hope was that an interface like the one offered by ltsfs could serve as a general interface between (specification-language-specific) programs that give access to state spaces and (specification-language-independent) programs that use (walk) those state spaces, such as simulators, model checkers, or test derivation programs.

    Initial results (obtained on a less-than-modern machine) suggest that, although the approach is definitely feasible in principle, in practice the fine-grained access offered by ltsfs may involve many file (9P) transactions, which can seriously affect performance. On Unix we used a more conservative approach with less fine-grained access, which likely explains why we did not suffer from this problem there.

    In addition, we report on experiments using acid to obtain coverage information that is updated on the fly while the program is running. This worked quite well. The main observation from these experiments is that the basic-block notion of this approach, which has a more 'semantical' nature, differs from the more 'syntactical' basic-block notion used in Unix coverage measurement tools like tcov or gcov.
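
    The stacking idea lends itself to a small illustration. The sketch below is a hypothetical, in-memory model of the approach (it is not the ltsfs code; the path layout, class names, and example LTS are all invented for illustration): a base 'file server' exposes a labelled transition system as a file tree, and a filter server stacked on top answers reads for an extra derived file while passing everything else through.

    # A minimal sketch of the stacking idea, not the authors' ltsfs.
    # All path names and the example LTS are hypothetical.

    class LtsFs:
        """Base fs: exposes states and transitions as read-only files."""
        def __init__(self, transitions):
            # transitions: dict state -> list of (label, next_state)
            self.transitions = transitions

        def read(self, path):
            parts = path.strip("/").split("/")
            if parts == ["states"]:                    # list the states
                return "\n".join(str(s) for s in self.transitions)
            if len(parts) == 3 and parts[0] == "states" and parts[2] == "transitions":
                state = int(parts[1])                  # /states/<n>/transitions
                return "\n".join(f"{lbl} {nxt}" for lbl, nxt in self.transitions[state])
            raise FileNotFoundError(path)

    class MenuFilterFs:
        """Stacked fs: adds a derived /states/<n>/menu file listing the
        enabled action labels; all other reads pass through unchanged."""
        def __init__(self, lower):
            self.lower = lower

        def read(self, path):
            parts = path.strip("/").split("/")
            if len(parts) == 3 and parts[0] == "states" and parts[2] == "menu":
                lower = self.lower.read(f"/states/{parts[1]}/transitions")
                return "\n".join(line.split()[0] for line in lower.splitlines())
            return self.lower.read(path)               # pass-through

    # Tiny LTS: a coffee machine. State 0 --coin--> 1, 1 --coffee/tea--> 0.
    fs = MenuFilterFs(LtsFs({0: [("coin", 1)], 1: [("coffee", 0), ("tea", 0)]}))
    print(fs.read("/states/1/menu"))    # -> coffee, tea (one per line)

    Note how a read of the derived menu file triggers a read of the lower server: in a real 9P setting, each step of a state-space walk would similarly fan out into several protocol transactions, which is exactly the performance effect reported above.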

    Analysis and representation of test cases generated from LOTOS

    This paper presents a method to generate, analyse and represent test cases from a protocol specification. The Language Of Temporal Ordering Specification (LOTOS) is mapped into an extended finite state machine (EFSM). Test cases are generated from the EFSM. The generated test cases are modelled as a dependence graph. Predicate slices are used to identify infeasible test cases, which must be eliminated. Redundant assignments and predicates in all the feasible test cases are removed by reducing the test case dependence graph. The reduced test case dependence graph is adapted for a local single-layer (LS) architecture. The reduced test cases for the LS architecture are enhanced to represent the tester's behaviour. The dynamic behaviour of the test cases is represented in the form of control graphs by inverting the events and assigning verdicts to the events in the enhanced dependence graph. © 1995
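
    To make the two reduction steps more concrete, here is a toy sketch under assumed representations (these are not the paper's data structures or algorithms; the step format and interval predicates are invented for illustration): dead assignments are dropped by a backward liveness pass over the test case, and a test case is flagged infeasible when its predicate slice is contradictory.

    # Toy sketch of the two reductions; representations are assumptions.
    # A test case is a straight-line list of steps, each recording the
    # variables it defines and uses; predicates constrain one integer
    # variable to an interval.

    def remove_redundant_assignments(steps):
        """Drop assignments whose defined variable is never used later
        (a dead definition in the test case dependence graph)."""
        live = set()
        kept = []
        for step in reversed(steps):
            if step["kind"] == "assign" and not (step["defs"] & live):
                continue                   # defined value never consumed
            live -= step["defs"]
            live |= step["uses"]
            kept.append(step)
        return list(reversed(kept))

    def infeasible(predicates):
        """Flag a test case whose predicate slice is contradictory."""
        bounds = {}
        for var, lo, hi in predicates:
            cur_lo, cur_hi = bounds.get(var, (float("-inf"), float("inf")))
            bounds[var] = (max(cur_lo, lo), min(cur_hi, hi))
        return any(lo > hi for lo, hi in bounds.values())

    steps = [
        {"kind": "assign", "defs": {"x"}, "uses": set()},   # dead: x unused
        {"kind": "assign", "defs": {"y"}, "uses": set()},
        {"kind": "event",  "defs": set(), "uses": {"y"}},
    ]
    print(len(remove_redundant_assignments(steps)))         # -> 2
    print(infeasible([("n", 0, 5), ("n", 10, 20)]))         # -> True (empty intersection)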

    Formal description techniques for distributed computing systems: the challenges for the 1990's

    Initially, FDTs were developed within ISO and CCITT for the specification, at a high level of abstraction, of distributed systems. Research is now being performed on the use of FDTs to support the complete implementation trajectory. In this paper we discuss a number of such research activities that are conducted within the framework of the Lotosphere project(*). The paper discusses aspects of design methodology, correctness-preserving transformation, the reflection of design criteria, the role of pre-defined specification and implementation constructs, and formal approaches to conformance testing. Furthermore, some insight is given into the development of a comprehensive toolset that supports these aspects of design methodology. The paper concludes with some experience obtained from applying these methods and tools to realistic pilot implementations: an ISDN and MHS application and a Transaction Processing application.

    JTorX: Exploring Model-Based Testing

    The overall goal of the work described in this thesis is: "To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context." From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool user questionnaire.

    We describe the overall architecture of our tool, and discuss three usage scenarios which are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional (guidance) information to steer the derivation through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities.

    In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the "adapter". In the description of the test derivation engine, we revisit the same three usage scenarios, and we discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances, and then generalise this to a general adapter design. We conclude with a description of extensions for the symbolic treatment of data and time.
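
    As an illustration of the random on-line scenario, the sketch below interleaves on-the-fly test derivation with execution against an adapter (a simplified invention, not JTorX's code or the full ioco theory; the model format, Adapter class, and coffee-machine example are assumptions): at each step the tester either stimulates the SUT with a model-enabled input or observes an output and checks that the model allows it.

    # Simplified random on-line testing loop; not JTorX's implementation.
    import random

    class Adapter:
        """Connects the tester to the SUT; this fake SUT just follows
        the model, so the run should end in a pass."""
        def __init__(self, model, state):
            self.model, self.state = model, state

        def stimulate(self, inp):
            self.state = dict(self.model[self.state]["in"])[inp]

        def observe(self):
            outs = self.model[self.state]["out"]
            if not outs:
                return None                        # quiescence: no output
            out, nxt = random.choice(outs)
            self.state = nxt
            return out

    def online_test(model, adapter, steps=20):
        state = 0
        for _ in range(steps):
            inputs = model[state]["in"]
            if inputs and random.random() < 0.5:   # derive next step on the fly
                inp, state = random.choice(inputs)
                adapter.stimulate(inp)
            else:
                out = adapter.observe()
                allowed = dict(model[state]["out"])
                if out is not None and out not in allowed:
                    return f"fail: unexpected {out}"
                if out is not None:
                    state = allowed[out]
        return "pass"

    # Hypothetical model: 0 -coin?-> 1 -coffee!-> 0
    model = {0: {"in": [("coin", 1)], "out": []},
             1: {"in": [], "out": [("coffee", 0)]}}
    print(online_test(model, Adapter(model, 0)))   # -> pass

    Because the fake SUT here simply follows the model, the run ends in a pass; an implementation that produced an output the model does not allow would make the loop return a fail verdict, which is the essence of the on-line black-box testing that the tool automates.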