
    Testing real-time systems using TINA

    The paper presents a technique for model-based black-box conformance testing of real-time systems using the Time Petri Net Analyzer TINA. Test suites are derived from a prioritized time Petri net composed of two concurrent sub-nets that specify, respectively, the expected behaviour of the system under test and of its environment. We describe how the TINA toolbox has been extended to support automatic generation of time-optimal test suites; the result is optimal in the sense that the test cases in the suite have the shortest possible accumulated execution time. Input/output conformance serves as the notion of implementation correctness, essentially timed trace inclusion taking environment assumptions into account. Test case selection is based either on manually formulated test purposes or on coverage criteria that specify structural elements of the model to be exercised by the test suite. We discuss how test purposes and coverage criteria are specified in the linear temporal logic SE-LTL, how test sequences are derived, and how verdicts are assigned.
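    To make concrete what "time-optimal" means here, the sketch below selects test cases so that a set of target transitions is covered with the smallest accumulated execution time. It is a toy greedy approximation in Python with purely hypothetical names; it is not TINA's algorithm or API.

        # Hypothetical illustration of time-optimal test-suite selection: pick test
        # cases so that every target transition is covered while the accumulated
        # execution time stays small. A greedy approximation, not TINA's algorithm.
        from typing import Dict, List, Set, Tuple

        def select_time_optimal(candidates: Dict[str, Tuple[float, Set[str]]],
                                targets: Set[str]) -> List[str]:
            """candidates maps a test-case name to (duration, transitions it covers)."""
            suite: List[str] = []
            uncovered = set(targets)
            while uncovered:
                # Cheapest execution time per newly covered transition.
                name, (duration, covered) = min(
                    ((n, c) for n, c in candidates.items() if c[1] & uncovered),
                    key=lambda item: item[1][0] / len(item[1][1] & uncovered))
                suite.append(name)
                uncovered -= covered
            return suite

        # Toy example: three timed test cases over four target transitions.
        cands = {"tc1": (3.0, {"t1", "t2"}),
                 "tc2": (5.0, {"t2", "t3", "t4"}),
                 "tc3": (2.0, {"t4"})}
        print(select_time_optimal(cands, {"t1", "t2", "t3", "t4"}))
        # -> ['tc1', 'tc3', 'tc2'] with accumulated time 10.0; the true optimum
        #    ['tc1', 'tc2'] takes 8.0, which is why exact time-optimal generation matters.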

    Rule-based Test Generation with Mind Maps

    This paper introduces the basic concepts of rule-based test generation with mind maps and reports experience gained from the industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over recent years. It describes the formalization of the test selection criteria used by our test generator, our test generation architecture, and our test generation framework. Comment: In Proceedings MBT 2012, arXiv:1202.582

    Towards the Usage of MBT at ETSI

    In 2012 the Specialists Task Force (STF) 442, appointed by the European Telecommunications Standards Institute (ETSI), explored the possibilities of using Model Based Testing (MBT) for test development in standardization. STF 442 performed two case studies and developed an MBT methodology for ETSI. The case studies were based on the ETSI standards for the GeoNetworking protocol (ETSI TS 102 636) and the Diameter-based Rx protocol (ETSI TS 129 214). Models were developed for parts of both standards, and four different MBT tools were employed to generate test cases from the models. The case studies were successful in the sense that all the tools were able to produce test suites with the same test adequacy as the corresponding manually developed conformance test suites. The MBT methodology developed by STF 442 is based on the experience gained in the case studies and focuses on integrating MBT into the sophisticated standardization process at ETSI. This paper summarizes the results of the STF 442 work. Comment: In Proceedings MBT 2013, arXiv:1303.037

    SmartUnit: Empirical Evaluations for Automated Unit Testing of Embedded Software in Industry

    In this paper, we address automated coverage-based unit testing for embedded software. Building on an analysis of industrial requirements and on our previous automated unit testing tool CAUT, we built a new tool, SmartUnit, to meet the engineering needs that arise at our partner companies. SmartUnit is a dynamic symbolic execution implementation that supports statement, branch, boundary-value, and MC/DC coverage; it has been used to test more than one million lines of code in real projects. For confidentiality reasons, we select three in-house real projects for the empirical evaluations. We also carry out evaluations on two open-source database projects, SQLite and PostgreSQL, to test the scalability of our tool, since embedded software projects are mostly not large (5K-50K lines of code on average). From our experimental results, in general, more than 90% of the functions in commercial embedded software achieve 100% statement, branch, and MC/DC coverage, more than 80% of the functions in SQLite achieve 100% MC/DC coverage, and more than 60% of the functions in PostgreSQL achieve 100% MC/DC coverage. Moreover, SmartUnit is able to find runtime exceptions at the unit-testing level; we have reported exceptions such as array index out of bounds and division by zero in SQLite. Furthermore, we analyze the reasons for low coverage in automated unit testing in our setting and survey the state of manual versus automated unit testing in industry. Comment: In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP '18), Gothenburg, Sweden, May 27-June 3, 2018, 10 pages
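    As a reminder of what the MC/DC figures above demand, the sketch below checks whether a test suite achieves modified condition/decision coverage of a toy decision: every condition must be shown to independently affect the outcome. This is a self-contained illustration unrelated to SmartUnit's implementation.

        # Tool-independent sketch of the MC/DC criterion: for each condition there must
        # exist a pair of tests that differ only in that condition and flip the decision.
        from itertools import product

        def decision(a: bool, b: bool, c: bool) -> bool:
            return a and (b or c)              # toy decision under test

        def satisfies_mcdc(tests: set) -> bool:
            for i in range(3):                 # conditions a, b, c by position
                independent = False
                for t in tests:
                    flipped = tuple(not v if j == i else v for j, v in enumerate(t))
                    if flipped in tests and decision(*t) != decision(*flipped):
                        independent = True
                        break
                if not independent:
                    return False
            return True

        print(satisfies_mcdc(set(product([False, True], repeat=3))))  # True (exhaustive suite)
        suite = {(True, True, False), (False, True, False),
                 (True, False, True), (True, False, False)}
        print(satisfies_mcdc(suite))                                   # True with only 4 vectors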

    Report on the Standardization Project ``Formal Methods in Conformance Testing''

    This paper presents the latest developments in the "Formal Methods in Conformance Testing" (FMCT) project of ISO and ITU-T. The project has been initiated to study the role of formal description techniques in the conformance testing process. The goal is to develop a standard that defines the meaning of conformance in the context of formal description techniques. We give an account of the current status of FMCT in the standardization process as well as an overview of the technical status of the proposed standard. Moreover, we indicate some of its strong and weak points, and we give some directions for future work on FMCT.

    A Semantic Framework for Test Coverage (Extended Version)

    Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition, and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent but syntactically different. Moreover, these coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework of coverage measures that capture how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata and are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
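    The weighted-fault-model idea can be illustrated with a deliberately simplified sketch: each potential fault carries a severity weight, and the coverage of a test suite is the weight of the faults it can detect relative to the total weight of the fault model. The fault names and weights below are invented; the paper itself defines coverage over fault automata rather than flat fault sets.

        # Toy sketch of semantic, weight-based coverage. All names and numbers are
        # illustrative; the paper represents weighted fault models as fault automata.
        def weighted_coverage(fault_weights: dict, detected_by: dict) -> float:
            """fault_weights: fault -> severity weight; detected_by: test -> faults it detects."""
            total = sum(fault_weights.values())
            detected = set().union(*detected_by.values()) if detected_by else set()
            return sum(fault_weights[f] for f in detected & fault_weights.keys()) / total

        faults = {"lost_message": 10.0, "wrong_nickname": 1.0, "crash_on_join": 20.0}
        suite = {"t_join": {"crash_on_join"}, "t_chat": {"wrong_nickname"}}
        print(weighted_coverage(faults, suite))   # 21/31 ~ 0.68: severe faults dominate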

    A Semantic Framework for Test Coverage

    Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition, and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent but syntactically different. Moreover, these coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework of coverage measures that capture how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata and are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.