    Fault Detection Effectiveness of Metamorphic Relations Developed for Testing Supervised Classifiers

    In machine learning, supervised classifiers are used to obtain predictions for unlabeled data by inferring prediction functions from labeled data. Supervised classifiers are widely applied in domains such as computational biology, computational physics, and healthcare to make critical decisions. However, it is often hard to test supervised classifiers since the expected answers are unknown. This is commonly known as the oracle problem, and metamorphic testing (MT) has been used to test such programs. In MT, metamorphic relations (MRs) are developed from intrinsic characteristics of the software under test (SUT). These MRs are used to generate test data and to verify the correctness of the test results without the presence of a test oracle. The effectiveness of MT heavily depends on the MRs used for testing. In this paper we conduct an extensive empirical study to evaluate the fault detection effectiveness of MRs that have been used in multiple previous studies to test supervised classifiers. Our study uses a total of 709 reachable mutants generated by multiple mutation engines and uses data sets with varying characteristics to test the SUT. Our results reveal that only 14.8% of these mutants are detected using the MRs, and that the fault detection effectiveness of these MRs does not scale with the increased number of mutants, compared to what was reported in previous studies.
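    To make the role of an MR concrete, the sketch below illustrates one commonly cited relation for classifiers: permuting the feature columns consistently in the training and test data should leave a k-nearest-neighbour classifier's predictions unchanged. The data set, classifier, and relation here are illustrative assumptions, not the specific SUT, mutants, or MRs evaluated in the study.

```python
# Minimal sketch of one commonly used metamorphic relation (MR) for
# supervised classifiers: permuting feature columns consistently in both
# training and test data should not change the predictions.
# Illustrative only; not the exact setup evaluated in the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
test_idx = rng.choice(len(X), size=30, replace=False)
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)

def predict(train_X, train_y, test_X):
    """Train the classifier under test and return its predictions."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(train_X, train_y)
    return clf.predict(test_X)

# Source test case: original feature order.
source_out = predict(X[train_idx], y[train_idx], X[test_idx])

# Follow-up test case: the same data with feature columns permuted.
perm = rng.permutation(X.shape[1])
follow_out = predict(X[train_idx][:, perm], y[train_idx], X[test_idx][:, perm])

# The MR acts as a partial oracle: a mismatch signals a fault (e.g. a mutant).
assert np.array_equal(source_out, follow_out), "Metamorphic relation violated"
```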

    Testing scientific software: techniques for automatic detection of metamorphic relations

    Scientific software plays an important role in critical decision making in fields such as the nuclear industry, medicine, and the military. Systematic testing of such software can help to ensure that it works as expected. Comprehensive, automated software testing requires an oracle to check whether the output produced by a test case matches the expected behavior of the program. But the challenges in creating suitable oracles limit the ability to perform automated testing of scientific software. For some programs, creating an oracle may not be possible, since the correct output is not known a priori. Further, it may be impractical to implement an oracle for an arbitrary input due to the complexity of the program. The software testing community refers to such programs as non-testable. Many scientific programs fall into this category, since they are either written to find answers that are previously unknown or they perform complex calculations.

    In this work, we developed techniques to automatically predict metamorphic relations by analyzing the program structure. These metamorphic relations can serve as automated partial test oracles in scientific software. Metamorphic testing is a method for automating the testing process for programs without test oracles. This technique operates by checking whether a program behaves according to a certain set of properties called metamorphic relations. A metamorphic relation is a relationship between multiple input and output pairs of the program: it specifies how the output should change following a specific change made to the input. A change in the output that differs from what the metamorphic relation specifies indicates a fault in the program. Metamorphic testing can be effective in testing machine learning applications, bioinformatics programs, health-care simulations, partial differential equation solvers, and other programs. Unfortunately, finding appropriate metamorphic relations for use in metamorphic testing remains a labor-intensive task that is generally performed by a domain expert or a programmer.

    To address this, we applied novel machine learning based approaches to automatically derive metamorphic relations. We first evaluated the effectiveness of modeling the metamorphic relation prediction problem as a binary classification problem. We found that support vector machines are the most effective binary classifiers for predicting metamorphic relations. We also found that using walk-based graph kernels for feature extraction from graph-based program representations further improves the prediction accuracy, and that incorporating mathematical properties of operations in the graph kernel computation improves it further. In addition, we found that the control flow information of a function is more effective than its data dependency information for predicting metamorphic relations. Finally, we investigated the possibility of creating multi-label classifiers that can predict multiple metamorphic relations using a single classifier. Our empirical studies show that multi-label classifiers are not as effective as binary classifiers for predicting metamorphic relations.

    Automated testing makes the testing process faster, cheaper, and more reliable, and it requires automated test oracles. Automatically discovering metamorphic relations is an important step towards automating oracle creation. The work presented here is the first attempt at developing automated techniques for deriving metamorphic relations, and it contributes toward automating the testing process of programs that face the oracle problem.
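    As a rough illustration of the binary-classification framing described above, the sketch below trains a support vector machine to predict whether a single metamorphic relation holds for a function, using hand-crafted operation counts as stand-in features. The features, labels, and toy corpus are assumptions for illustration; the thesis extracts features with walk-based graph kernels over program graphs rather than simple counts.

```python
# Minimal sketch: metamorphic-relation prediction framed as binary
# classification. Toy features and labels are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each row: hand-crafted features of a function, e.g. counts of additive
# operations, comparisons, and loop nodes in its control-flow graph.
X = np.array([
    [4, 0, 1],
    [2, 3, 1],
    [0, 5, 2],
    [6, 1, 0],
    [1, 4, 2],
    [5, 0, 1],
    [0, 6, 3],
    [3, 2, 1],
])
# Label: 1 if a given MR (e.g. "permuting the input preserves the output")
# holds for the function, 0 otherwise. Labels would come from a manually
# annotated corpus of functions.
y = np.array([1, 0, 0, 1, 0, 1, 0, 1])

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=4)
print("Predicted-MR accuracy per fold:", scores)
```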

    Automatic System Testing of Programs without Test Oracles

    Metamorphic testing has been shown to be a simple yet effective technique for addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed from the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x) or f(x') (or both) is wrong. In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in a human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.

    In this paper, we present an approach called Automated Metamorphic System Testing, which automates metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing, which seeks to reduce false positives and address some cases of non-determinism. We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.
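    The sketch below shows the general shape of such a system-level metamorphic test with a heuristic output comparison: the whole program is run on a source input and on a transformed follow-up input, and small deviations are tolerated to avoid false positives from benign non-determinism. The program under test, the relation, and the tolerance are assumptions for illustration and are not the Amsterdam framework's API.

```python
# Minimal sketch of a system-level metamorphic test with a heuristic
# comparison. Illustrative only; not the Amsterdam framework.
import random

def program_under_test(values):
    """Stand-in for a full application run: returns a slightly noisy mean."""
    noise = random.uniform(-1e-6, 1e-6)   # models benign non-determinism
    return sum(values) / len(values) + noise

def metamorphic_system_test(values, tolerance=1e-3):
    # Source execution.
    source_out = program_under_test(values)

    # Follow-up execution: the relation says adding a constant c to every
    # input should shift the mean by exactly c.
    c = 10.0
    follow_out = program_under_test([v + c for v in values])

    # Heuristic check: small deviations are tolerated rather than reported
    # as failures, reducing false positives from non-determinism.
    if abs(follow_out - (source_out + c)) > tolerance:
        raise AssertionError("Metamorphic property violated at system level")

metamorphic_system_test([1.0, 2.0, 3.0, 4.0])
```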