9 research outputs found

    A Divergence-Oriented Approach to Adaptive Random Testing of Java Programs

    Adaptive Random Testing (ART) is a testing technique based on the observation that a test input usually has the same potential as its neighbors to detect a specific program defect. ART improves the efficiency of random testing by selecting test inputs evenly across the input space. However, applying ART to object-oriented programs (e.g., C++ and Java) remains challenging because the input spaces of such programs are usually high-dimensional, so an even distribution of test inputs in such a space is difficult to achieve. In this paper, we propose a divergence-oriented approach to adaptive random testing of Java programs to address this challenge. The essential idea is to prepare for the program under test a pool of test inputs, each significantly different from the others, and then to use the ART technique to select test inputs from the pool. We also develop a tool called ARTGen to support this testing approach, and conduct experiments on several popular open-source Java packages to assess the effectiveness of the approach. The experimental results show that our approach can generate test cases of high quality.
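    The even-spread idea behind ART is easiest to see in the classic fixed-size-candidate-set (FSCS) variant: each new test is the candidate farthest, by nearest-neighbour distance, from everything already executed. Below is a minimal sketch of that idea on a numeric unit-square domain; the function name `fscs_art`, the candidate-set size `k`, and the Euclidean metric are illustrative assumptions, not details taken from the paper (which targets high-dimensional object inputs).

```python
import random

def fscs_art(n_tests, k=10, dim=2, rng=None):
    """Fixed-size-candidate-set ART sketch: draw k random candidates and
    keep the one farthest (by nearest-neighbour distance) from every test
    executed so far, which spreads the suite evenly over the domain."""
    rng = rng or random.Random(0)
    executed = [tuple(rng.random() for _ in range(dim))]

    def d2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(executed) < n_tests:
        candidates = [tuple(rng.random() for _ in range(dim)) for _ in range(k)]
        best = max(candidates, key=lambda c: min(d2(c, e) for e in executed))
        executed.append(best)
    return executed

tests = fscs_art(50)
```

    The divergence-oriented approach in the paper can be read as moving this "each input should differ significantly from the others" criterion into the construction of the input pool itself, before ART selection runs.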

    Adaptive random testing by exclusion through test profile

    One major objective of software testing is to reveal software failures so that program bugs can be removed. Random testing is a basic and simple software testing technique, but its failure-detection effectiveness is often controversial. Based on the common observation that program inputs causing software failures tend to cluster into contiguous regions, some researchers have proposed that an even spread of test cases should enhance the failure-detection effectiveness of random testing. Adaptive random testing refers to a family of algorithms that evenly spread random test cases based on various notions. Restricted random testing, an algorithm that implements adaptive random testing by the notion of exclusion, defines an exclusion region around each previously executed test case and selects test cases only from outside all exclusion regions. Although it achieves high failure-detection effectiveness, restricted random testing has a very high computation overhead, and it rigidly discards all test cases inside any exclusion region, some of which may reveal software failures. In this paper, we propose a new method to implement adaptive random testing by exclusion, where test cases are simply selected based on a well-designed test profile. The new method has a low computation overhead, and it does not omit any possible program inputs that can detect failures. Our experimental results show that the new method not only spreads test cases more evenly but also delivers higher failure-detection effectiveness than random testing.
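    A minimal sketch of the exclusion idea the paper builds on, restricted to the unit square: discs around executed tests jointly cover a fixed fraction of the input area, and candidates falling inside any disc are discarded and redrawn. The names `rrt` and `ratio`, the disc shape, and the resampling loop are assumptions for illustration; the paper's contribution replaces this discard-and-retry step with sampling from a designed test profile.

```python
import math
import random

def rrt(n_tests, ratio=0.5, rng=None):
    """Restricted Random Testing sketch on the unit square: exclusion
    discs around already-executed tests jointly target `ratio` of the
    area; a candidate landing inside any disc is discarded and redrawn."""
    rng = rng or random.Random(1)
    executed = [(rng.random(), rng.random())]
    while len(executed) < n_tests:
        # shrink the radii so the n discs always target the same total area
        r2 = ratio / (math.pi * len(executed))  # squared radius
        while True:  # rejection sampling: retry until outside all discs
            c = (rng.random(), rng.random())
            if all((c[0] - e[0]) ** 2 + (c[1] - e[1]) ** 2 > r2 for e in executed):
                executed.append(c)
                break
    return executed

suite = rrt(30)
```

    The inner retry loop is exactly the computation overhead, and the hard reject is exactly the rigid discarding, that the paper's profile-based method is designed to avoid.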

    Code coverage of adaptive random testing

    Random testing is a basic software testing technique that can be used to assess software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective: that of code coverage. As shown in various investigations, higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
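    The paper measures structure-based and fault-based coverage on real programs; as a loose intuition for why an even spread tends to cover more, one can count how many cells of a partitioned input domain a suite touches. The grid-cell proxy below is purely an illustrative assumption of this summary, not a criterion used in the paper.

```python
import random

def cell_coverage(tests, grid=10):
    """Toy coverage proxy: the fraction of cells in a grid x grid
    partition of the unit square that the test points touch."""
    cells = {(int(x * grid), int(y * grid)) for x, y in tests}
    return len(cells) / grid ** 2

rng = random.Random(2)
random_suite = [(rng.random(), rng.random()) for _ in range(100)]
cov = cell_coverage(random_suite)
```

    Under this proxy, a suite whose points are spread evenly hits more distinct cells than one whose points cluster, mirroring the paper's observation that ART reaches higher coverage than plain random testing for the same suite size.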

    An application of adaptive random sequence in test case prioritization

    Test case prioritization aims to schedule test cases in a certain order such that the effectiveness of regression testing can be improved. Prioritization using a random sequence is a basic and simple technique, and normally acts as a benchmark to evaluate other prioritization techniques. Adaptive Random Sequence (ARS) makes use of extra information to improve the diversity of a random sequence. Some researchers have proposed prioritization techniques using ARS with white-box code coverage information, which is normally derived from the test execution history of previous versions. In this paper, we propose several ARS-based prioritization techniques using black-box information. The proposed techniques schedule test cases based on the string distances of the input data, without referring to the execution history. Our experimental studies show that these new techniques deliver higher fault-detection effectiveness than random prioritization. In addition, compared with an existing black-box prioritization technique, the new techniques have similar fault-detection effectiveness but much lower computation overhead, and thus are more cost-effective.
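    One plausible instantiation of black-box ARS prioritization, sketched under assumptions: inputs are strings, the distance is Levenshtein, and the schedule is built greedily by a max-min rule (the next test is the one whose input is farthest from everything already scheduled). The function names and the tie-breaking are illustrative; the paper's exact string distances and ordering strategies may differ.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,      # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def ars_prioritize(inputs):
    """Greedy max-min ordering: schedule first the input whose minimum
    distance to all already-scheduled inputs is largest, so early tests
    are as diverse as possible."""
    remaining = list(inputs)
    order = [remaining.pop(0)]
    while remaining:
        nxt = max(remaining, key=lambda s: min(levenshtein(s, t) for t in order))
        remaining.remove(nxt)
        order.append(nxt)
    return order

order = ars_prioritize(["aaaa", "aaab", "zzzz", "aaba"])
```

    Note that no execution history is consulted: the ordering depends only on the test inputs themselves, which is what makes the technique black-box.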

    Testigenerointityökalut Java-sovelluskehityksen tukena (Test Generation Tools in Support of Java Application Development)

    Tests play a major role in software development, but for various reasons developers may find it difficult to write relevant and useful tests. Test generation can be used to support software development, either to find previously undetected defects or to describe the behaviour of the software under test more precisely. Several test generation tools exist in the context of Java development; this thesis evaluates the commercial tools AgitarOne by Agitar Technologies and Parasoft's Jtest, and the open-source tools CodePro AnalytiX, Palus, Randoop, and EvoSuite. The thesis aims to answer the question of whether the commercial tools provide a significant advantage over the open-source tools. It shows that AgitarOne and Jtest are clearly better in certain areas, but also that the tests they generate are restricted and cannot be used as freely as those generated by the open-source tools. The thesis concludes by noting that the evaluated tools also support defect detection in other significant ways, so test generation alone does not give a full picture of the benefit they offer.

    Evaluating Software Testing Techniques: A Systematic Mapping Study

    Software testing techniques are crucial for detecting faults in software and reducing the risk of using it. As such, it is important that we have a good understanding of how to evaluate these techniques for their efficiency, scalability, applicability, and effectiveness at finding faults. This thesis enhances our understanding of testing technique evaluations by providing an overview of the state of the art in research. To accomplish this, we conduct a systematic mapping study, structuring the field and identifying research gaps and publication trends. We then present a small case study demonstrating how our mapping study can be used to assist researchers in evaluating their own software testing techniques. We find that the majority of evaluations are empirical evaluations in the form of case studies and experiments, that most of these evaluations are of low quality when judged against established methodology guidelines, and that relatively few papers in the field discuss how testing techniques should be evaluated.