
    Testing Strategies for Model-Based Development

    This report presents an approach for testing artifacts generated in a model-based development process. The approach divides the traditional testing process into two parts: requirements-based testing (validation testing), which determines whether the model implements the high-level requirements, and model-based testing (conformance testing), which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly, and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
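The conformance-testing half of this approach is commonly realized as back-to-back testing: the model and the generated code are driven with identical inputs and their outputs compared. A minimal sketch of that idea, where `model` and `generated_code` are hypothetical stand-ins (not from the report):

```python
import random

def model(x, y):
    # Hypothetical executable model: a saturating adder capped at 100.
    return min(x + y, 100)

def generated_code(x, y):
    # Stand-in for the auto-generated implementation under test.
    return min(x + y, 100)

def back_to_back_test(trials=1000, seed=0):
    """Drive model and implementation with the same random inputs;
    return the first diverging input pair, or None if none is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.randint(-100, 100), rng.randint(-100, 100)
        if model(x, y) != generated_code(x, y):
            return (x, y)   # behavioral divergence: a conformance bug
    return None             # equivalent on all sampled inputs

print(back_to_back_test())  # None -> no divergence observed
```

Random sampling only approximates behavioral equivalence; the report's point is that structural coverage metrics on the model give a more principled stopping criterion than a fixed trial count.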

    Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness

    In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties. In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size? and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous one? Our results indicate (1) that only one of the proposed coverage metrics -- Unique First Cause (UFC) coverage -- is sufficiently rigorous to ensure that test suites satisfying it outperform randomly generated test suites of similar size, and (2) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous ones.
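The core idea of requirements coverage over LTL properties can be illustrated, in drastically simplified form, by checking a response property G(p -> F q) over finite test traces and insisting that a trace only "counts" when it triggers the antecedent non-vacuously. This is a toy sketch of the concept, not the paper's UFC metric; all names are illustrative:

```python
def satisfies_g_implies_f(trace, p, q):
    """Check G(p -> F q) over a finite trace of state dicts:
    every state where p holds must be accompanied or followed
    by some state where q holds."""
    for i, state in enumerate(trace):
        if state[p] and not any(s[q] for s in trace[i:]):
            return False
    return True

def covers_nonvacuously(trace, p, q):
    """A trace covers the requirement only if it satisfies it
    AND actually fires the antecedent p (a vacuous pass where
    p never occurs exercises nothing)."""
    return satisfies_g_implies_f(trace, p, q) and any(s[p] for s in trace)

# Toy test suite: each test is a finite execution trace.
t1 = [{"req": False, "grant": False}] * 3                       # p never fires
t2 = [{"req": True, "grant": False}, {"req": False, "grant": True}]

print(covers_nonvacuously(t1, "req", "grant"))  # False (vacuous)
print(covers_nonvacuously(t2, "req", "grant"))  # True
```

Metrics such as UFC refine this idea further, requiring that tests demonstrate which atomic condition first causes the property to hold.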

    Assessing Requirements Quality Through Requirements Coverage

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. The shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has largely been determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even when a formal model and formalized requirements are available. Thus, at present, there is no objective way of answering these two questions. To this end, we propose a testing-based approach that, given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses the requirements coverage metrics defined in [9] on formal high-level software requirements, together with existing model coverage metrics such as Modified Condition and Decision Coverage (MC/DC), which is used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
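MC/DC, mentioned above, requires that each atomic condition in a decision be shown to independently affect the decision's outcome. The requirement can be sketched by brute force: for each condition, find a pair of test vectors that differ only in that condition yet flip the decision. The decision `a and (b or c)` below is an illustrative example, not taken from the paper:

```python
from itertools import product

def decision(a, b, c):
    # Example decision with three atomic conditions.
    return a and (b or c)

def mcdc_independence_pairs(dec, n):
    """For each condition i, find an 'independence pair': two input
    vectors differing only in condition i that produce different
    decision outcomes (the core MC/DC obligation)."""
    pairs = {}
    for i in range(n):
        for vec in product([False, True], repeat=n):
            flipped = list(vec)
            flipped[i] = not flipped[i]
            if dec(*vec) != dec(*flipped):
                pairs[i] = (vec, tuple(flipped))
                break
    return pairs

pairs = mcdc_independence_pairs(decision, 3)
for cond, (v1, v2) in pairs.items():
    print(f"condition {cond}: {v1} vs {v2}")
```

For n conditions, MC/DC can typically be satisfied with roughly n+1 tests, far fewer than the 2^n of exhaustive multiple-condition coverage, which is why it is the metric of choice for avionics-level criticality.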

    Quality Assurance in Testing: A Case Study of Metrics in Testing (Testauksen laadunvarmistaminen: case testauksen mittarit)

    This thesis was carried out in the ICT services unit of a Finnish banking and finance company, which develops changes and new features to its production systems itself or through an outside contractor. The case study examined the metrics defined for monitoring the testing of development projects of varying size, and what their results imply about the quality of the system under test. The metrics to be used were defined before the study began, and their results were analysed during test execution. Based on these results, the aim was to assess how reliably the metrics characterize testing quality and the efficiency of the testing process. Both traditional waterfall and agile development projects were studied; test monitoring for projects under both models was carried out with the same tool, Quality Center. The objective of the study was to determine which of the defined metrics give the most reliable information during testing, so that a system's readiness for production release can be verified as dependably as possible. The analysis is based on the author's own assessment of the results obtained. Three metrics were selected as the best for quality assurance in the organization's testing unit: they verify system-test coverage, track the progress of test execution, and collect data on defects recorded during test execution. The selected metrics will be proposed for common use across the testing unit; their adoption into the unit's metrics will be decided separately by the unit's management, independently of the analyses in this study.

    Methods and metrics for selective regression testing

    In corrective software maintenance, selective regression testing includes test selection from previously-run test suites and test coverage identification. We propose three reduction-based regression test selection methods and two McCabe-based coverage identification metrics (T. McCabe, 1976). We empirically compare these methods with three other reduction- and precision-oriented methods, using 60 test problems. The comparison shows that our proposed methods yield favourable results.
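Reduction-based selection of the kind described above is, at its core, a set-cover problem: keep the smallest subset of tests whose combined coverage equals that of the full suite. A minimal greedy sketch, with a hypothetical test-to-coverage mapping (the paper's own three methods and McCabe-based metrics are not reproduced here):

```python
def greedy_reduce(suite):
    """Greedy suite reduction: repeatedly pick the test that covers
    the most still-uncovered entities until the reduced suite
    matches the full suite's coverage (classic set-cover heuristic)."""
    goal = set().union(*suite.values())
    covered, selected = set(), []
    while covered != goal:
        best = max(suite, key=lambda t: len(suite[t] - covered))
        selected.append(best)
        covered |= suite[best]
    return selected

# Hypothetical coverage mapping: test name -> set of covered branches.
suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2, 3, 4},
    "t4": {5},
}
print(greedy_reduce(suite))  # e.g. ['t3', 't4']
```

The greedy heuristic does not guarantee a minimum-size suite (set cover is NP-hard), which is one reason empirical comparisons such as this paper's are needed to judge reduction methods in practice.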

    Exact Gap Computation for Code Coverage Metrics in ISO-C

    Test generation and test-data selection are difficult tasks in model-based testing. Tests for a program are combined into a test suite, and much research has been devoted to quantifying and improving test-suite quality. Code coverage metrics estimate the quality of a test suite: the suite is considered good if its coverage value is high, ideally 100%. Unfortunately, 100% code coverage may be impossible to achieve, for example because of dead code. There is thus a gap between the feasible and the theoretical maximum code coverage value. Our review of the literature indicates that no existing research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C-compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. A tester can thus decide whether additional tests are possible or necessary to achieve better coverage. Comment: In Proceedings MBT 2012, arXiv:1202.582
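The coverage gap described above can be made concrete with a toy example: a decision that is logically infeasible caps the achievable branch coverage below 100%, no matter how many tests are added. This sketch (illustrative only, not the paper's ISO-C framework) approximates the gap by exhaustively enumerating inputs:

```python
def feasible_branch_gap(inputs):
    """Approximate the coverage gap for a function containing the
    decisions d1: 'x > 10 and x < 5' (infeasible, dead code) and
    d2: 'x >= 0', by recording which branch outcomes are reachable."""
    outcomes = set()
    for x in inputs:
        outcomes.add(("d1", x > 10 and x < 5))   # True is unreachable
        outcomes.add(("d2", x >= 0))             # both outcomes reachable
    total = 4  # 2 decisions x 2 outcomes each
    return 1 - len(outcomes) / total

gap = feasible_branch_gap(range(-1000, 1000))
print(f"coverage gap: {gap:.0%}")  # 25%: ('d1', True) can never occur
```

Exhaustive enumeration only works for tiny input domains; the paper's contribution is computing this gap exactly from the program's semantics rather than by sampling.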