39 research outputs found

    Extensible synthetic file servers? or: Structuring the glue between tester and system under test

    We discuss a few simple scenarios of how we can design and develop a compositional synthetic file server that gives access to external processes – in particular, in the context of testing, to the system under test – such that certain parts of said synthetic file server can be prepared as off-the-shelf components, to which other specifically written parts can be added in a kind of plug-and-play fashion.

    The approaches only deal with the problem of accessing the system under test from the point of view of offered functionality and compositionality; they do not consider efficiency or performance.

    The study is rather preliminary, and only very limited practical experiments have been performed.
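    The abstract contains no code; purely to illustrate the plug-and-play composition it describes (this is not the paper's actual design), the Go sketch below glues an off-the-shelf component and a specifically written component together behind one file-server-like interface. The FileServer interface, the mount combinator, and all paths are hypothetical.

```go
// A minimal sketch (not the paper's design) of composing small, reusable
// "file server" components in a plug-and-play fashion.
// The FileServer interface and all names below are hypothetical.
package main

import (
	"fmt"
	"strings"
)

// FileServer is a hypothetical stand-in for a synthetic (9P-style) file
// server: it serves the content of named files.
type FileServer interface {
	Read(path string) (string, error)
}

// mapFS is an off-the-shelf component: a fixed set of synthetic files.
type mapFS map[string]string

func (m mapFS) Read(path string) (string, error) {
	if v, ok := m[path]; ok {
		return v, nil
	}
	return "", fmt.Errorf("no such file: %s", path)
}

// mount glues components together: requests under prefix go to sub,
// everything else goes to base.
type mount struct {
	base   FileServer
	prefix string
	sub    FileServer
}

func (m mount) Read(path string) (string, error) {
	if strings.HasPrefix(path, m.prefix) {
		return m.sub.Read(strings.TrimPrefix(path, m.prefix))
	}
	return m.base.Read(path)
}

func main() {
	// Off-the-shelf part: generic control files.
	generic := mapFS{"/ctl": "idle", "/log": ""}
	// Specifically written part: access to the system under test.
	sut := mapFS{"/input": "", "/output": "pong"}
	// Plug-and-play composition.
	fs := mount{base: generic, prefix: "/sut", sub: sut}

	for _, p := range []string{"/ctl", "/sut/output"} {
		v, _ := fs.Read(p)
		fmt.Printf("%s -> %q\n", p, v)
	}
}
```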

    Experiments towards model-based testing using Plan 9: Labelled transition file systems, stacking file systems, on-the-fly coverage measuring

    We report on experiments that we did on Plan 9/Inferno to gain more experience with the file-system-as-tool-interface approach. We reimplemented functionality that we had earlier worked on in Unix, this time using Plan 9 file system interfaces. The application domain for these experiments was model-based testing.

    The idea we wanted to experiment with consists of building small, reusable pieces of functionality which are then composed to achieve the intended functionality. In particular, we wanted to experiment with the idea of 'stacking' file servers (fs) on top of each other, where the upper fs acts as a 'filter' on the data and structure provided by the lower fs.

    For this experiment we designed a file system interface (ltsfs) that gives fine-grained access to a labelled transition system, and made two implementations of it. We developed a small fs that, when 'stacked' on top of the ltsfs, extends it with additional files, and an application that uses the resulting file system.

    The hope was that an interface like the one offered by ltsfs could be used as a general interface between (specification-language-specific) programs that give access to state spaces and (specification-language-independent) programs that use (walk) those state spaces, such as simulators, model checkers, or test derivation programs.

    Initial results (obtained on a less-than-modern machine) suggest that, although the approach is definitely feasible in principle, in practice the fine-grained access offered by ltsfs may involve many file (9p) transactions, which may seriously affect performance. In Unix we used a more conservative approach in which access was less fine-grained, which likely explains why we did not suffer from this problem there.

    In addition we report on experiments with using acid to obtain coverage information that is updated on the fly while the program is running. This worked quite well. The main observation from these experiments is that the basic block notion of this approach, which has a more 'semantical' nature, differs from the more 'syntactical' basic block notion used by Unix coverage measurement tools like tcov or gcov.
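    As a rough illustration of the stacking idea (assumed interfaces only, not the actual ltsfs design), the sketch below exposes a small labelled transition system through file-like paths and stacks a second layer on top that passes most reads through while adding one derived file per state.

```go
// A minimal sketch, under assumptions, of the 'ltsfs' idea: a labelled
// transition system exposed through file-like paths, with a second layer
// 'stacked' on top that extends the lower layer with derived files.
// None of the names or paths below are taken from the paper.
package main

import (
	"fmt"
	"strings"
)

// Transition of a labelled transition system (LTS).
type Transition struct {
	Label string
	To    int
}

// LTS with integer states and labelled transitions.
type LTS struct {
	Initial     int
	Transitions map[int][]Transition
}

// Lower layer: fine-grained, path-per-item access to the LTS,
// roughly one file read per state.
func (l *LTS) Read(path string) (string, error) {
	switch {
	case path == "/initial":
		return fmt.Sprint(l.Initial), nil
	case strings.HasPrefix(path, "/state/"):
		var s int
		if _, err := fmt.Sscanf(path, "/state/%d/transitions", &s); err != nil {
			return "", fmt.Errorf("bad path: %s", path)
		}
		var b strings.Builder
		for _, t := range l.Transitions[s] {
			fmt.Fprintf(&b, "%s %d\n", t.Label, t.To)
		}
		return b.String(), nil
	}
	return "", fmt.Errorf("no such file: %s", path)
}

// Upper layer, stacked on the lower one: it passes most reads through,
// but adds an extra derived file per state (here: the out-degree).
type degreeFS struct{ lower *LTS }

func (d degreeFS) Read(path string) (string, error) {
	var s int
	if _, err := fmt.Sscanf(path, "/state/%d/outdegree", &s); err == nil {
		return fmt.Sprint(len(d.lower.Transitions[s])), nil
	}
	return d.lower.Read(path) // fall through to the lower file server
}

func main() {
	lts := &LTS{
		Initial: 0,
		Transitions: map[int][]Transition{
			0: {{"coin", 1}},
			1: {{"coffee", 0}, {"tea", 0}},
		},
	}
	fs := degreeFS{lower: lts}
	for _, p := range []string{"/initial", "/state/1/transitions", "/state/1/outdegree"} {
		v, _ := fs.Read(p)
		fmt.Printf("%s -> %q\n", p, v)
	}
}
```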

    JTorX: Exploring Model-Based Testing

    The overall goal of the work described in this thesis is: ``To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context.'' From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool user questionnaire.

    We describe the overall architecture of our tool, and discuss three usage scenarios which are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: a next test step is only derived when it is necessary for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional (guidance) information to guide the derivation through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities.

    In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the ``adapter''.

    In the description of the test derivation engine, we look at the same three usage scenarios, and we discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances, and then generalise this to a general adapter design. We conclude with a description of extensions to deal with symbolic treatment of data and time.
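    The on-line random testing scenario can be pictured with a short sketch. The code below is not JTorX's implementation; the Model and Adapter interfaces and the toy coffee-machine specification are assumptions, chosen only to show how derivation of the next test step is interleaved with execution and how a verdict is reached.

```go
// A rough sketch (not JTorX's actual code) of on-line random testing: a next
// test step is derived only when it is needed for execution. The Model and
// Adapter interfaces and the toy SUT below are assumptions.
package main

import (
	"fmt"
	"math/rand"
)

// Model is what the test derivation engine walks.
type Model interface {
	Inputs() []string            // stimuli enabled in the current state
	OutputAllowed(o string) bool // is this SUT output allowed here?
	Step(action string)          // advance the model state
}

// Adapter is the test execution engine's connection to the system under test.
type Adapter interface {
	Stimulate(in string)
	Observe() (out string, ok bool) // pending SUT output, if any
}

// randomOnlineTest interleaves derivation and execution: check a pending SUT
// output against the model if there is one, otherwise derive and apply a
// randomly chosen stimulus.
func randomOnlineTest(m Model, a Adapter, steps int, rng *rand.Rand) bool {
	for i := 0; i < steps; i++ {
		if out, ok := a.Observe(); ok {
			if !m.OutputAllowed(out) {
				fmt.Printf("step %d: unexpected output %q -> FAIL\n", i, out)
				return false
			}
			m.Step(out)
		} else if ins := m.Inputs(); len(ins) > 0 {
			in := ins[rng.Intn(len(ins))]
			a.Stimulate(in)
			m.Step(in)
		}
	}
	fmt.Println("PASS (no non-conformance observed)")
	return true
}

// Toy instantiation: specification "after coin, coffee comes out".

type coffeeSpec struct{ paid bool }

func (s *coffeeSpec) Inputs() []string {
	if s.paid {
		return nil
	}
	return []string{"coin"}
}
func (s *coffeeSpec) OutputAllowed(o string) bool { return s.paid && o == "coffee" }
func (s *coffeeSpec) Step(a string)               { s.paid = (a == "coin") }

type coffeeSUT struct{ pending []string }

func (c *coffeeSUT) Stimulate(in string) {
	if in == "coin" {
		c.pending = append(c.pending, "coffee")
	}
}
func (c *coffeeSUT) Observe() (string, bool) {
	if len(c.pending) == 0 {
		return "", false
	}
	out := c.pending[0]
	c.pending = c.pending[1:]
	return out, true
}

func main() {
	rng := rand.New(rand.NewSource(1))
	randomOnlineTest(&coffeeSpec{}, &coffeeSUT{}, 10, rng)
}
```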

    Publishing Your Prototype Tool on the Web: PUPTOL, a Framework

    We describe an approach to reduce the effort involved in disseminating prototype academic (command-line) tools for wider use and inspection. This helps to preserve the effort invested in developing such tools, and to raise the standards for conducting repeatable experiments in computer science. For this purpose we propose a light-weight, flexible framework to make such tools available via web forms.
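    To give an impression of the web-form idea (this is not the PUPTOL framework itself), the sketch below wraps a command-line tool in a minimal HTTP handler: the form input is piped to the tool's stdin and its output is returned as the response. The standard Unix tool sort stands in for an actual prototype tool.

```go
// A minimal sketch of the idea (not the PUPTOL framework): expose a
// command-line tool via a web form, piping the form input to the tool's
// stdin and returning its stdout. "sort" is used as a stand-in tool here.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

const form = `<form method="POST">
<textarea name="input" rows="10" cols="60"></textarea><br>
<input type="submit" value="Run tool">
</form>`

func handle(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		fmt.Fprint(w, form)
		return
	}
	// Run the wrapped command-line tool on the submitted input.
	cmd := exec.Command("sort") // stand-in for the actual prototype tool
	cmd.Stdin = strings.NewReader(r.FormValue("input"))
	out, err := cmd.CombinedOutput()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	w.Write(out)
}

func main() {
	http.HandleFunc("/", handle)
	http.ListenAndServe(":8080", nil) // error ignored for brevity in this sketch
}
```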

    The term processor Kimwitu: manual and cookbook

    DFTCalc: a tool for efficient fault tree analysis (extended version)

    Effective risk management is key to ensuring that our nuclear power plants, medical equipment, and power grids are dependable, and it is often required by law. Fault Tree Analysis (FTA) is a widely used methodology here, computing important dependability measures like system reliability. This paper presents DFTCalc, a powerful tool for FTA, providing (1) efficient fault tree modelling via compact representations; (2) effective analysis, allowing a wide range of dependability properties to be analysed; (3) efficient analysis, via state-of-the-art stochastic techniques; and (4) a flexible and extensible framework, where gates can easily be changed or added. Technically, DFTCalc is realised via stochastic model checking, an innovative technique offering a plethora of powerful analysis techniques, including aggressive compression techniques to keep the underlying state space small.
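    For intuition about the dependability measures involved, the sketch below computes the top-event failure probability of a small static fault tree with independent, exponentially distributed basic events. This is only the textbook static case; it is not how DFTCalc analyses dynamic fault trees (which it does via stochastic model checking), and the example tree and failure rates are made up.

```go
// A small worked example, for intuition only: top-event failure probability
// of a static fault tree with independent basic events at mission time t.
// This sketch is not DFTCalc's algorithm.
package main

import (
	"fmt"
	"math"
)

// Node is a fault tree node: a basic event with an exponential failure rate,
// or an AND/OR gate over child nodes.
type Node struct {
	Kind     string // "basic", "and", "or"
	Rate     float64
	Children []*Node
}

// Unreliability is the probability that the (sub)tree has failed by time t,
// assuming independent basic events.
func (n *Node) Unreliability(t float64) float64 {
	switch n.Kind {
	case "basic":
		return 1 - math.Exp(-n.Rate*t) // exponential time-to-failure
	case "and": // fails only if all children have failed
		p := 1.0
		for _, c := range n.Children {
			p *= c.Unreliability(t)
		}
		return p
	case "or": // fails as soon as any child has failed
		q := 1.0
		for _, c := range n.Children {
			q *= 1 - c.Unreliability(t)
		}
		return 1 - q
	}
	return 0
}

func main() {
	// Top = OR(pump, AND(valveA, valveB)): a single pump failure, or both
	// redundant valves failing, brings the system down. Rates are invented.
	pump := &Node{Kind: "basic", Rate: 0.001}
	valveA := &Node{Kind: "basic", Rate: 0.01}
	valveB := &Node{Kind: "basic", Rate: 0.01}
	top := &Node{Kind: "or", Children: []*Node{
		pump,
		{Kind: "and", Children: []*Node{valveA, valveB}},
	}}
	t := 100.0
	fmt.Printf("P(top event by t=%.0f) = %.4f\n", t, top.Unreliability(t))
	fmt.Printf("system reliability     = %.4f\n", 1-top.Unreliability(t))
}
```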

    Lie in batch mode

    No full text