17 research outputs found

    A subset of precise UML for Model-based Testing

    This paper presents an original model-based testing approach that takes a UML behavioural view of the system under test and automatically generates test cases and executable test scripts according to model coverage criteria. The approach is embedded in the LEIRIOS Test Designer tool and is currently deployed in domains such as enterprise IT and electronic transaction applications. It also makes it possible to automatically produce a traceability matrix from requirements to test cases as part of the test generation process. The paper defines the subset of UML used for model-based testing and illustrates it with a small example.
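A common model coverage criterion in this setting is all-transitions coverage. As a minimal illustrative sketch (not the LEIRIOS Test Designer algorithm, whose internals the abstract does not describe), one can generate a suite of event sequences, one per transition, from a toy behavioural state machine:

```python
# Hypothetical sketch: all-transitions test generation over a toy
# turnstile model. The model and event names are illustrative assumptions.
from collections import deque

# Transitions of the model: (source state, event, target state).
TRANSITIONS = [
    ("locked", "coin", "unlocked"),
    ("locked", "push", "locked"),
    ("unlocked", "push", "locked"),
    ("unlocked", "coin", "unlocked"),
]

def shortest_path(start, goal_transition):
    """BFS for the shortest event sequence from `start` that ends by
    firing `goal_transition`."""
    src, event, _ = goal_transition
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == src:
            return path + [event]
        for s, e, t in TRANSITIONS:
            if s == state and t not in seen:
                seen.add(t)
                queue.append((t, path + [e]))
    return None

def all_transitions_suite(initial="locked"):
    """One test case (event sequence) per transition in the model."""
    return [shortest_path(initial, tr) for tr in TRANSITIONS]

for case in all_transitions_suite():
    print(case)
```

Each generated sequence is a candidate test case; mapping transitions back to the requirements they realise is what yields the traceability matrix mentioned above.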

    Software Testing Using UML Behaviour Models

    Software testing is the fourth stage of software development. It is performed to find faults in the software being developed; the analysis, design, and implementation stages do not guarantee that the software is fault free. To reduce or eliminate software faults, a testing stage is needed to uncover the faults present in the software. UML, the Unified Modelling Language, provides the means to model software by visualising its use cases, static structure, and behaviour within a system. Testing software using UML behaviour models makes it possible to assess the quality of the software in the system being built.

    Accelerated Finite State Machine Test Execution Using GPUs


    Using UML Protocol State Machines in Conformance Testing of Components

    In previous work we designed a comprehensive approach to conformance testing based on UML behavioral state machines. In this paper we propose two extensions to this approach. First, we apply the approach in the context of component-based development and address the problem of checking the interoperability of two connected components. Second, we address the problem of selecting relevant input sequences. To this end we use UML protocol state machines to specify restricted environment models: we restrict the valid protocol at the provided interface of the component under test with respect to a specific test purpose, and select relevant input sequences based on these models. We implemented both extensions in our TEAGER tool suite to show their applicability. Both extensions address the behavior at the interfaces of components. We use UML state machines as a unified notation for behavioral and protocol conformance testing as well as for test input selection, which considerably eases the work of test engineers.
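The core idea of restricting inputs with a protocol state machine can be sketched in a few lines. The protocol table and call names below are illustrative assumptions, not the TEAGER tool's notation: candidate input sequences are kept only if the protocol machine accepts them.

```python
# Hypothetical sketch: a protocol state machine as a restricted environment
# model, used to filter input sequences for a component's provided interface.
PROTOCOL = {  # (state, call) -> next state; missing pairs are invalid
    ("idle", "open"): "open",
    ("open", "read"): "open",
    ("open", "close"): "idle",
}

def accepted(seq, start="idle"):
    """True if the protocol machine accepts the whole call sequence."""
    state = start
    for call in seq:
        nxt = PROTOCOL.get((state, call))
        if nxt is None:
            return False
        state = nxt
    return True

candidates = [
    ["open", "read", "close"],
    ["read"],                    # violates the protocol: read before open
    ["open", "close", "close"],  # close on an already-closed component
]
relevant = [s for s in candidates if accepted(s)]
print(relevant)
```

Only protocol-conformant sequences survive as test inputs, which is how a restricted environment model narrows test selection to a specific test purpose.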

    One evaluation of model-based testing and its automation

    Model-based testing relies on behavior models to generate model traces: inputs and expected outputs that serve as test cases for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically, with and without models, purely at random, and with dedicated functional test selection criteria; other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites derived directly from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.

    Testing Strategies for Model-Based Development

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing), which determines whether the model implements the high-level requirements, and model-based testing (conformance testing), which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly, and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
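To make the analogy with code coverage concrete, here is a deliberately naive sketch of a requirements-coverage measure; the report's actual metrics are more refined, and the trace format below is an assumption for illustration only.

```python
# Illustrative sketch: a simple requirements-coverage measure, analogous to
# statement coverage, computed from a trace of which requirements each test
# exercises. Requirement IDs and the trace structure are hypothetical.
def requirements_coverage(all_reqs, exercised_by_test):
    """Fraction of requirements exercised by at least one test."""
    covered = set()
    for reqs in exercised_by_test.values():
        covered |= set(reqs)
    return len(covered & set(all_reqs)) / len(all_reqs)

REQS = ["R1", "R2", "R3", "R4"]
TRACE = {"t1": ["R1", "R2"], "t2": ["R2", "R3"]}
print(requirements_coverage(REQS, TRACE))  # 3 of 4 requirements -> 0.75
```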

    Aspect-oriented testing: a formal approach based on collaboration diagrams


    Web services robustness testing

    Web services are a new paradigm for building software applications with many advantages over previous paradigms; however, Web Services are still not widely used because Service Requesters do not trust services built by others. Testing can alleviate this problem because it can be used to assess the quality attributes of Web Services. This thesis proposes a framework, and presents a proof-of-concept tool, for testing the robustness and related attributes of a Web Service; the tool can easily be enhanced to assess other quality attributes. The framework is based on analysing the Web Services Description Language (WSDL) document of a Web Service to find which faults could affect its robustness, and then using those faults to build test-case generation rules that assess the robustness quality attribute. The framework gives a better understanding of the faults that may affect the robustness of Web Services, of how these faults relate to the interface or contract of a Web Service under test, and of what testing techniques can be used to detect such faults. The approach was applied to many examples to demonstrate its effectiveness; these examples show that the approach and the proof-of-concept tool can assess the robustness of Web Service implementations and Web Service platforms. Based on the test-case rules, the tool automatically built four hundred and two test clients to assess the robustness of these examples. The test clients detected eleven robustness failures in the Web Service implementations and nine robustness failures in the Web Service platforms. The approach also helped compare the robustness of two different Web Services platforms, namely Axis and GLUE: after the same Web Services were deployed on both platforms, Axis exhibited fewer robustness and security failures than GLUE.
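The general pattern of deriving robustness cases from an interface description can be sketched as follows. This is an assumption-laden illustration in the spirit of the thesis, not its actual generation rules: fault values are chosen per declared parameter type, varying one parameter at a time.

```python
# Hypothetical sketch: deriving robustness test inputs for a service
# operation from its declared parameter types. The fault catalogue and the
# toy operation signature are illustrative assumptions.
FAULT_VALUES = {
    "int": [0, -1, 2**31 - 1, -2**31],     # boundary integers
    "string": ["", "a" * 10_000, "\x00"],  # empty, oversized, control char
}

def robustness_cases(signature):
    """One test case per (parameter, fault value) pair, holding the other
    parameters at a benign default."""
    defaults = {"int": 1, "string": "ok"}
    cases = []
    for name, typ in signature.items():
        for fault in FAULT_VALUES[typ]:
            case = {p: defaults[t] for p, t in signature.items()}
            case[name] = fault
            cases.append(case)
    return cases

# Toy operation: transfer(amount: int, account: string)
cases = robustness_cases({"amount": "int", "account": "string"})
print(len(cases))  # 4 int faults + 3 string faults = 7 test clients
```

In the thesis's setting, each generated case would be wrapped in an automatically built test client that invokes the deployed service and checks for robustness failures such as crashes or unhandled exceptions.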