
    Piping classification to metamorphic testing: an empirical study towards better effectiveness for the identification of failures in mesh simplification programs

    Mesh simplification is a mainstream technique for rendering graphics responsively in modern graphical software. However, the graphical nature of the output poses a test oracle problem in testing. Previous work uses pattern classification to identify failures. Although such an approach may be promising, it can conservatively mark the test result of a failure-causing test case as passed. This paper proposes a methodology that pipes the test cases marked as passed by the pattern classification component to a metamorphic testing component to look for missed failures. The empirical study uses three simple and general metamorphic relations as subjects, and the experimental results show a 10 percent improvement in the effectiveness of failure identification. © 2007 IEEE. This research is supported in part by a grant of the Research Grants Council of Hong Kong (project no. 714504), a grant of City University of Hong Kong (project no. 200079), and a grant of The University of Hong Kong.
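
    A minimal sketch of the pipe-through design described above: test cases that the pattern classifier marks as passed are re-checked against simple metamorphic relations (MRs). The MRs below are illustrative rather than the three used in the paper, and the callables simplify, rotate, face_count, and classify are hypothetical stand-ins supplied by the test harness.

```python
# Hedged sketch: classifier first, then metamorphic re-checking of "passed" cases.
# All callables (simplify, rotate, face_count, classify) are hypothetical placeholders.

def violates_mrs(mesh, simplify, rotate, face_count):
    """Return True if any illustrative MR is violated for this source mesh."""
    base = simplify(mesh, ratio=0.5)

    # MR1: re-simplifying an already simplified mesh should not add faces.
    if face_count(simplify(base, ratio=0.5)) > face_count(base):
        return True

    # MR2: rigidly rotating the input should not change the simplified face count.
    if face_count(simplify(rotate(mesh, degrees=90), ratio=0.5)) != face_count(base):
        return True

    return False


def find_failures(test_meshes, classify, simplify, rotate, face_count):
    """Classifier verdict first; cases it marks 'passed' are piped to the MR checks."""
    failures = []
    for mesh in test_meshes:
        if classify(mesh, simplify(mesh, ratio=0.5)) == "failed":
            failures.append(mesh)
        elif violates_mrs(mesh, simplify, rotate, face_count):
            failures.append(mesh)  # failure the classifier conservatively missed
    return failures
```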

    A Survey on Metamorphic Testing


    The Integration of Machine Learning into Automated Test Generation: A Systematic Mapping Study

    Context: Machine learning (ML) may enable effective automated test generation. Objective: We characterize emerging research, examining testing practices, researcher goals, ML techniques applied, evaluation, and challenges. Methods: We perform a systematic mapping on a sample of 102 publications. Results: ML generates input for system, GUI, unit, performance, and combinatorial testing, or improves the performance of existing generation methods. ML is also used to generate test verdict, property-based, and expected output oracles. Supervised learning, often based on neural networks, and reinforcement learning, often based on Q-learning, are common, and some publications also employ unsupervised or semi-supervised learning. (Semi-/un-)supervised approaches are evaluated using both traditional testing metrics and ML-related metrics (e.g., accuracy), while reinforcement learning is often evaluated using testing metrics tied to the reward function. Conclusion: Work to date shows great promise, but there are open challenges regarding training data, retraining, scalability, evaluation complexity, the ML algorithms employed and how they are applied, benchmarks, and replicability. Our findings can serve as a roadmap and inspiration for researchers in this field. Comment: Under submission to the Software Testing, Verification, and Reliability journal. (arXiv admin note: text overlap with arXiv:2107.00906, an earlier study that this study extends.)
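
    As one concrete illustration of the "test verdict oracle" use mentioned above, the sketch below trains a supervised classifier on labelled executions and then uses it to judge new runs. The feature extraction and the labelled data are hypothetical assumptions, and the random forest is just one possible model; this is not any specific tool from the mapped publications.

```python
# Hedged sketch of a learned "test verdict" oracle: a supervised classifier is
# fit on labelled executions and then predicts pass/fail for new executions.
# extract_features is a caller-supplied, hypothetical feature extractor.
from sklearn.ensemble import RandomForestClassifier


def train_verdict_oracle(executions, extract_features):
    """executions: iterable of (test_input, observed_output, label),
    where label is 0 for pass and 1 for fail."""
    X = [extract_features(inp, out) for inp, out, _ in executions]
    y = [label for _, _, label in executions]
    oracle = RandomForestClassifier(n_estimators=100, random_state=0)
    oracle.fit(X, y)
    return oracle


def predict_verdict(oracle, extract_features, test_input, observed_output):
    """Return 'fail' if the learned oracle flags this execution, else 'pass'."""
    prediction = oracle.predict([extract_features(test_input, observed_output)])[0]
    return "fail" if prediction == 1 else "pass"
```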

    Many-core compiler fuzzing

    We address the compiler correctness problem for many-core systems through novel applications of fuzz testing to OpenCL compilers. Focusing on two methods from prior work, random differential testing and testing via equivalence modulo inputs (EMI), we present several strategies for random generation of deterministic, communicating OpenCL kernels, and an injection mechanism that allows EMI testing to be applied to kernels that otherwise exhibit little or no dynamically-dead code. We use these methods to conduct a large, controlled testing campaign with respect to 21 OpenCL (device, compiler) configurations, covering a range of CPU, GPU, accelerator, FPGA, and emulator implementations. Our study provides independent validation of claims in prior work on the effectiveness of random differential testing and EMI testing, proposes novel methods for lifting these techniques to the many-core setting, and reveals a significant number of OpenCL compiler bugs in commercial implementations.
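
    The core of random differential testing is simple to state: run the same deterministic kernel under every (device, compiler) configuration and flag any disagreement. The sketch below is a generic driver under assumed helpers generate_random_kernel and compile_and_run; it is not the paper's actual generator or OpenCL harness.

```python
# Hedged sketch of random differential testing as applied above: the same
# randomly generated, deterministic kernel is run under every (device, compiler)
# configuration, and any disagreement flags a potential compiler bug.
# generate_random_kernel and compile_and_run are hypothetical helpers.

def differential_test(configurations, generate_random_kernel, compile_and_run,
                      iterations=1000):
    suspicious = []
    for _ in range(iterations):
        kernel_source, inputs = generate_random_kernel()
        # Map each configuration to the output it produced for this kernel.
        results = {cfg: compile_and_run(cfg, kernel_source, inputs)
                   for cfg in configurations}
        reference = next(iter(results.values()))
        if any(output != reference for output in results.values()):
            # At least two configurations disagree on a deterministic kernel.
            suspicious.append((kernel_source, results))
    return suspicious
```

    In the EMI variant described in the abstract, the comparison is instead between the original kernel and variants obtained by mutating dynamically-dead code injected into it, with all variants expected to produce the same output.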

    Automated inference of likely metamorphic relations for model transformations

    Model transformations play a cornerstone role in Model-Driven Engineering (MDE) as they provide the essential mechanisms for manipulating and transforming models. Checking whether the output of a model transformation is correct is a manual and error-prone task, referred to as the oracle problem. Metamorphic testing alleviates the oracle problem by exploiting the relations among different inputs and outputs of the program under test, so-called metamorphic relations (MRs). One of the main challenges in metamorphic testing is the automated inference of likely MRs. This paper proposes an approach to automatically infer likely MRs for ATL model transformations, where the tester does not need to have any knowledge of the transformation. The inferred MRs aim at detecting faults in model transformations in three application scenarios, namely regression testing, incremental transformations, and migrations among transformation languages. In the experiments performed, the inferred likely MRs have proved to be quite accurate, with a precision of 96.4% (4101 true positives out of 4254 MRs inferred). Furthermore, they have been useful for identifying mutants in regression testing scenarios, with a mutation score of 93.3%. Finally, our approach can be used in conjunction with current approaches for the automatic generation of test cases. Funding: Comisión Interministerial de Ciencia y Tecnología (TIN2015-70560-R); Junta de Andalucía (P12-TIC-186).
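
    A hedged sketch of the regression-testing scenario described above: likely MRs are mined from an existing, trusted transformation as metric pairs that always agree, and a new version of the transformation is then checked against them. The metric functions and the transformations are hypothetical placeholders, not the paper's ATL machinery or its actual inference algorithm.

```python
# Hedged sketch of inferring likely MRs and using them for regression testing.
# metrics is a list of hypothetical functions over models (e.g., element counts);
# transform_old and transform_new are the trusted and updated transformations.

def infer_likely_mrs(models, transform_old, metrics):
    """Keep (source_metric, target_metric) pairs whose values coincide on every
    sample model under the old transformation -- a crude stand-in for inference."""
    likely = []
    for source_metric in metrics:
        for target_metric in metrics:
            if all(source_metric(m) == target_metric(transform_old(m))
                   for m in models):
                likely.append((source_metric, target_metric))
    return likely


def regression_check(models, transform_new, likely_mrs):
    """Report (model, MR) pairs on which the new transformation version
    violates an inferred relation."""
    violations = []
    for m in models:
        target = transform_new(m)
        for source_metric, target_metric in likely_mrs:
            if source_metric(m) != target_metric(target):
                violations.append((m, (source_metric, target_metric)))
    return violations
```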