
    What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in System and Integration Testing

    Driven by new software development processes and testing in the cloud, system and integration testing nowadays tends to produce an enormous number of alarms. Such test alarms place an almost unbearable burden on software testing engineers, who have to analyze the cause of each alarm manually. The causes are critical because they determine which stakeholders are responsible for fixing the bugs detected during testing. In this paper, we present a novel approach that aims to relieve this burden by automating the procedure. Our approach, called the Cause Analysis Model, exploits information retrieval techniques to efficiently infer test alarm causes from test logs. We have developed a prototype and evaluated our tool on two industrial datasets with more than 14,000 test alarms. Experiments on the two datasets show that our tool achieves accuracies of 58.3% and 65.8%, respectively, outperforming the baseline algorithms by up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1 s per cause analysis. Owing to these attractive experimental results, our industrial partner, a leading information and communication technology company, has deployed the tool; it achieves an average accuracy of 72% after two months of running, nearly three times more accurate than a previous strategy based on regular expressions.
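
    The paper's implementation is not public, so the following is only a minimal, hypothetical sketch of the general idea behind information-retrieval-based cause analysis: vectorize historical test logs labeled with their causes, then assign a new alarm the cause of its most similar log. The example logs, labels, and the use of scikit-learn are illustrative assumptions, not the authors' Cause Analysis Model.

        # Hypothetical sketch (not the authors' tool): infer a test alarm's
        # cause by retrieving the most similar labeled historical test log.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Labeled history: (test log text, cause assigned by an engineer).
        history = [
            ("connection refused by host during environment setup", "environment issue"),
            ("assertion failed: expected 200 but got 500 in checkout", "product bug"),
            ("testbed script error: fixture file not found", "test script defect"),
        ]

        vectorizer = TfidfVectorizer()
        matrix = vectorizer.fit_transform([log for log, _ in history])

        def infer_cause(new_log: str) -> str:
            """Return the cause label of the most similar historical log."""
            scores = cosine_similarity(vectorizer.transform([new_log]), matrix)[0]
            return history[scores.argmax()][1]

        print(infer_cause("connection refused by host during setup"))
        # -> environment issue

    A retrieval formulation of this kind amounts to one vector comparison per historical log at query time, which is one plausible reading of how a per-analysis cost on the order of 0.1 s could be achieved.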

    Improved Duplicate Bug Report Identification


    Piping classification to metamorphic testing: an empirical study towards better effectiveness for the identification of failures in mesh simplification programs

    Mesh simplification is a mainstream technique for rendering graphics responsively in modern graphical software. However, the graphical nature of the output poses a test oracle problem in testing. Previous work uses pattern classification to identify failures. Although such an approach may be promising, it may conservatively mark the test result of a failure-causing test case as passed. This paper proposes a methodology that pipes the test cases marked as passed by the pattern classification component to a metamorphic testing component to look for missed failures. The empirical study uses three simple and general metamorphic relations as subjects, and the experimental results show a 10 percent improvement in the effectiveness of identifying failures. © 2007 IEEE. This research is supported in part by a grant of the Research Grants Council of Hong Kong (project no. 714504), a grant of City University of Hong Kong (project no. 200079), and a grant of The University of Hong Kong.
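
    As a rough illustration of the proposed pipeline (the two checker callables below are hypothetical placeholders, not the paper's implementation), a test case is reported as a failure if the pattern classifier flags it or, failing that, if a metamorphic relation is violated:

        # Illustrative pipeline: pattern classification first, then a
        # metamorphic testing (MT) follow-up on cases the classifier passed.
        from typing import Callable, Iterable, List

        def find_failures(test_cases: Iterable,
                          classifier_flags_failure: Callable[[object], bool],
                          violates_metamorphic_relation: Callable[[object], bool]) -> List:
            failures = []
            for case in test_cases:
                if classifier_flags_failure(case):
                    failures.append(case)        # caught by pattern classification
                elif violates_metamorphic_relation(case):
                    failures.append(case)        # missed failure caught by MT
            return failures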

    An empirical comparison between direct and indirect test result checking approaches

    The SOQUA 2006 Workshop was held in conjunction with the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT 2006/FSE-14), ACM Press, New York, NY. An oracle in software testing is a mechanism for checking whether the system under test has behaved correctly on a given execution. In some situations, oracles are unavailable or too expensive to apply; this is known as the oracle problem. It is crucial to develop techniques to address it, and metamorphic testing (MT) is one such proposal. This paper conducts a controlled experiment with 38 testers on three open-source programs to investigate the cost-effectiveness of using MT. The fault detection capability and time cost of MT are compared with the popular assertion checking method. Our results show that MT is cost-efficient and has the potential to detect more faults than the assertion checking method. Copyright 2006 ACM. This research is supported in part by a grant of the Research Grants Council of Hong Kong (project no. HKU 7145/04E), a grant of City University of Hong Kong, and a grant of The University of Hong Kong.
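
    To make the two checking styles concrete, here is a toy example (not taken from the experiment's subject programs): assertion checking embeds a necessary property of a single output, while MT checks a relation between the outputs of related inputs, so no expected output is needed.

        import math

        def program_under_test(x: float) -> float:
            # Stand-in for a program whose exact expected output is hard to obtain.
            return math.sin(x)

        # Assertion checking: a property the output itself must satisfy.
        def assertion_check(x: float) -> bool:
            return -1.0 <= program_under_test(x) <= 1.0

        # Metamorphic testing: a relation across related executions,
        # here sin(x) == sin(pi - x), checked without knowing sin(x) exactly.
        def metamorphic_check(x: float) -> bool:
            return math.isclose(program_under_test(x),
                                program_under_test(math.pi - x),
                                abs_tol=1e-9)

        assert assertion_check(1.234) and metamorphic_check(1.234)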

    iTree: Efficiently Discovering High-Coverage Configurations Using Interaction Trees


    Finding failures from passed test cases: Improving the pattern classification approach to the testing of mesh simplification programs

    Mesh simplification programs create three-dimensional polygonal models that resemble an original polygonal model yet use fewer polygons, so different programs produce different graphics even when they start from the same original model. This results in a test oracle problem. To address the problem, our previous work developed a technique that uses a reference model of the program under test to train a classifier. Such an approach, however, may mistakenly mark a failure-causing test case as passed, which lowers the effectiveness of testing in revealing failures. This paper suggests piping the test cases marked as passed by a statistical pattern classification module to an analytical metamorphic testing (MT) module. We evaluate our approach empirically using three subject programs with over 2,700 program mutants. The results show that, using a resembling reference model to train a classifier, the integrated approach can significantly improve the failure detection effectiveness of the pattern classification approach. We also explain how MT in our design trades specificity for sensitivity. Copyright © 2009 John Wiley & Sons, Ltd.
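
    The specificity-for-sensitivity trade mentioned above can be read off the standard confusion-matrix definitions; the counts in this sketch are hypothetical and serve only to illustrate the terms:

        # Sensitivity: fraction of failure-causing test cases flagged as failures.
        # Specificity: fraction of genuinely passing test cases left unflagged.
        def sensitivity(true_pos: int, false_neg: int) -> float:
            return true_pos / (true_pos + false_neg)

        def specificity(true_neg: int, false_pos: int) -> float:
            return true_neg / (true_neg + false_pos)

        # Hypothetical effect of adding the MT module: more real failures are
        # caught (sensitivity up), at the cost of a few false alarms
        # (specificity down).
        print(sensitivity(true_pos=90, false_neg=10))   # 0.9
        print(specificity(true_neg=80, false_pos=20))   # 0.8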

    Automatic, high accuracy prediction of reopened bugs
