    A Taxonomy of model-based testing

    Model-based testing relies on models of a system under test and/or its environment to derive test cases for the system. This paper provides an overview of the field. Seven different dimensions define a taxonomy that allows the characterization of different approaches to model-based testing. It is intended to help with understanding the benefits and limitations of model-based testing, understanding the approach used in a particular model-based testing tool, and understanding the issues involved in integrating model-based testing into a software development process. To illustrate the taxonomy, we classify several approaches embedded in existing model-based testing tools.
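    Such a taxonomy is easy to picture as a small record whose fields are the classification dimensions. The sketch below is illustrative only: the dimension names and values are assumptions for demonstration, not necessarily the seven dimensions the paper defines.

        from dataclasses import dataclass
        from enum import Enum

        # Illustrative dimensions only; the paper's actual seven may differ.
        class Paradigm(Enum):
            TRANSITION_BASED = "transition-based"
            PRE_POST = "pre/post conditions"
            STOCHASTIC = "stochastic"

        class Execution(Enum):
            ONLINE = "online"    # tests generated and executed on the fly
            OFFLINE = "offline"  # test suite generated first, executed later

        @dataclass
        class MBTClassification:
            """One point in a (hypothetical) MBT taxonomy; each field is a dimension."""
            model_subject: str          # e.g. "SUT", "environment", or both
            model_paradigm: Paradigm
            selection_criteria: str     # e.g. "structural model coverage"
            generation_technology: str  # e.g. "model checking", "random"
            execution: Execution

        # Classifying a hypothetical tool along the dimensions:
        tool = MBTClassification(
            model_subject="SUT",
            model_paradigm=Paradigm.TRANSITION_BASED,
            selection_criteria="structural model coverage",
            generation_technology="state-space exploration",
            execution=Execution.OFFLINE,
        )
        print(tool)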

    Model-Based Security Testing

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, enable guidance on test identification and specification, or support automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases, and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS. (Comment: In Proceedings MBT 2012, arXiv:1202.582)
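    Of the techniques listed, model-based fuzzing is perhaps the simplest to sketch: a model of the valid input format tells the fuzzer which constraint each mutant violates, so every probe targets a specific validation path rather than relying on blind bit-flipping. The toy message model and field names below are assumptions for illustration, not taken from the DIAMONDS tools.

        # A toy protocol model: each message is key=value pairs with expected
        # types. This model is an assumption for illustration, not from the paper.
        MESSAGE_MODEL = {
            "user": str,
            "age": int,
            "token": str,
        }

        def valid_message():
            """Generate one message that conforms to the model."""
            return {"user": "alice", "age": 30, "token": "abc123"}

        def model_based_mutations(msg):
            """Yield mutants that each break exactly one modelled constraint."""
            for field, typ in MESSAGE_MODEL.items():
                mutant = dict(msg)
                # Type violation for ints, oversized value for strings.
                mutant[field] = "not-a-number" if typ is int else "A" * 10_000
                yield field, mutant
                missing = dict(msg)
                del missing[field]  # required-field violation
                yield field, missing

        for field, mutant in model_based_mutations(valid_message()):
            # A real harness would send each mutant to the system under test.
            print(f"fuzzing field {field!r}: {mutant}")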

    Model-based Testing

    This paper provides a comprehensive introduction to a framework for formal testing using labelled transition systems, based on an extension and reformulation of the ioco theory introduced by Tretmans. We introduce the underlying models needed to specify the requirements, and formalise the notion of test cases. We discuss conformance, and in particular the conformance relation ioco. For this relation we prove several interesting properties, and we provide algorithms to derive test cases (either in batches or on the fly).
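    At the heart of this framework is Tretmans' conformance relation: an implementation i conforms to a specification s when, after every suspension trace of s, the implementation produces only outputs that the specification allows. In the usual notation:

        \[
          i \;\textbf{ioco}\; s
          \quad\Longleftrightarrow\quad
          \forall \sigma \in \mathit{Straces}(s):\;
          \mathit{out}(i \;\textbf{after}\; \sigma) \subseteq \mathit{out}(s \;\textbf{after}\; \sigma)
        \]

    Here Straces(s) is the set of suspension traces of s, and out(. after sigma) is the set of outputs (including the quiescence label delta) observable after sigma. Test derivation algorithms essentially probe this inclusion trace by trace.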

    UI-Design driven model-based testing

    Testing interactive systems is notoriously difficult. Not only do we need to ensure that the functionality of the developed system is correct with respect to the requirements and specifications, but we also need to ensure that the user interface to the system is correct (enables a user to access the functionality correctly) and is usable. These different requirements of interactive system testing are not easily combined within a single testing strategy. We investigate the use of models of interactive systems, derived from design artefacts, as the basis for generating tests for an implemented system. We give a model-based method for testing interactive systems which has low overhead in terms of the models required and which enables testing of UI and system functionality from the perspective of user interaction.
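    One common way to realise such a method is to derive a state machine from the UI design (screens as states, user actions as transitions) and generate one test per transition. The model and traversal below are a generic illustration under that assumption, not the paper's actual method.

        from collections import deque

        # Hypothetical UI model derived from design artefacts:
        # states are screens, transitions are user actions.
        UI_MODEL = {
            "Login":    [("enter_valid_credentials", "Home"),
                         ("enter_bad_credentials", "Login")],
            "Home":     [("open_settings", "Settings"), ("log_out", "Login")],
            "Settings": [("go_back", "Home")],
        }

        def transition_covering_tests(start="Login"):
            """BFS over the UI model; emit one action sequence per transition,
            so each generated test exercises a distinct piece of UI behaviour."""
            tests, seen = [], set()
            queue = deque([(start, [])])
            while queue:
                state, path = queue.popleft()
                for action, target in UI_MODEL.get(state, []):
                    edge = (state, action)
                    if edge in seen:
                        continue
                    seen.add(edge)
                    tests.append(path + [action])
                    queue.append((target, path + [action]))
            return tests

        for t in transition_covering_tests():
            print(" -> ".join(t))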

    Analysis of Testing-Based Forward Model Selection

    This paper introduces and analyzes a procedure called Testing-based forward model selection (TBFMS) in linear regression problems. This procedure inductively selects covariates that add predictive power into a working statistical model before estimating a final regression. The criterion for deciding which covariate to include next, and when to stop including covariates, is derived from a profile of traditional statistical hypothesis tests. This paper proves probabilistic bounds, which depend on the quality of the tests, for prediction error and the number of selected covariates. As an example, the bounds are then specialized to a case with heteroskedastic data, with tests constructed with the help of Huber-Eicker-White standard errors. Under the assumed regularity conditions, these tests lead to estimation convergence rates matching other common high-dimensional estimators, including the Lasso.
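    As a sketch of how such a procedure operates (greedy inclusion driven by robust t-tests, with a stopping rule when no test rejects), the following is a minimal illustration; the fixed critical value and the HC0 variant of the robust standard errors are assumptions, not the paper's exact construction.

        import numpy as np

        def hc0_t_stat(X, y, j):
            """OLS t-statistic for column j of X using Huber-Eicker-White
            (HC0) heteroskedasticity-robust standard errors."""
            XtX_inv = np.linalg.pinv(X.T @ X)
            beta = XtX_inv @ X.T @ y
            resid = y - X @ beta
            # HC0 sandwich: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1
            meat = X.T @ (resid[:, None] ** 2 * X)
            cov = XtX_inv @ meat @ XtX_inv
            return beta[j] / np.sqrt(cov[j, j])

        def tbfms(X, y, t_crit=1.96):
            """Sketch of testing-based forward selection: greedily add the
            covariate with the largest robust |t|; stop when no test rejects.
            The fixed critical value is an assumption for illustration."""
            n, p = X.shape
            selected = []
            while len(selected) < min(n - 1, p):
                best_j, best_t = None, 0.0
                for j in set(range(p)) - set(selected):
                    # Test the incremental covariate (last column of the fit).
                    t = hc0_t_stat(X[:, selected + [j]], y, len(selected))
                    if abs(t) > abs(best_t):
                        best_j, best_t = j, t
                if best_j is None or abs(best_t) < t_crit:
                    break  # no remaining covariate passes its hypothesis test
                selected.append(best_j)
            return selected

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        # Heteroskedastic noise whose scale depends on X[:, 0].
        y = 2 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(size=200) * np.sqrt(1 + X[:, 0] ** 2)
        print(tbfms(X, y))  # typically recovers columns 3 and 7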

    Atomic Action Refinement in Model Based Testing

    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation does not understand them); or 2) test cases that are incorrect, because the specification abstracts from relevant behavior. The standard approach to remedy this problem is to rewrite the specification by hand to the required level of detail and regenerate the test cases. This is error-prone and time consuming. Another approach is to do some translation during test execution. This solution has no basis in the theory of MBT. We propose a framework to add the required level of detail automatically to the abstract specification and/or abstract test cases.

    This paper focuses on general atomic action refinement. This means that an abstract action is replaced by more complex behavior (expressed as a labeled transition system). By general we mean that we impose as few restrictions as possible. Atomic means that the actions being refined behave as if they were atomic, i.e., no other actions are allowed to interfere.
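    Atomic action refinement can be pictured as splicing a small labeled transition system in place of a single labelled transition. The encoding below (transitions as triples, a refinement given by its start state, end state, and transitions) is an illustrative assumption, not the paper's formal construction.

        import itertools

        fresh = itertools.count()

        def refine(lts, action, refinement):
            """Replace every transition labelled `action` in `lts` with a copy
            of `refinement`. Each copy is spliced in over fresh private states,
            so its internal steps cannot interleave with other behaviour: the
            refined action stays atomic. (Illustrative encoding only.)"""
            start, end, r_trans = refinement
            out = []
            for src, label, dst in lts:
                if label != action:
                    out.append((src, label, dst))
                    continue
                tag = next(fresh)  # private namespace for this copy's states
                ren = lambda s: src if s == start else dst if s == end else (s, tag)
                out.extend((ren(a), lbl, ren(b)) for a, lbl, b in r_trans)
            return out

        # Abstract model: a single "login" step between states 0 and 1.
        abstract = [(0, "login", 1), (1, "browse", 1)]

        # Refinement of "login" into user/password entry.
        login_lts = ("s", "e", [("s", "enter_user", "m"),
                                ("m", "enter_password", "e")])

        for t in refine(abstract, "login", login_lts):
            print(t)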