5,830 research outputs found

    Utilizing Output in Web Application Server-Side Testing

    This thesis investigates the utilization of web application output in enhancing automated server-side code testing. The server-side code is the main driving force of a web application, generating client-side code, maintaining state and communicating with back-end resources. The output observed in those elements provides a valuable resource that can potentially enhance the efficiency and effectiveness of automated testing. The thesis aims to explore the use of this output in test data generation, test sequence regeneration, augmentation and test case selection. This thesis also addresses the web-specific challenges faced when applying search based test data generation algorithms to web applications, and when applying dataflow analysis of state variables to test sequence regeneration. The thesis presents three tools and four empirical studies to implement and evaluate the proposed approaches: SWAT (Search based Web Application Tester) is the first application of search based test data generation algorithms to web applications; it uses values dynamically mined from the intermediate and the client-side output to enhance the search based algorithm. SART (State Aware Regeneration Tool) uses dataflow analysis of state variables, session state and database tables, and their values, to regenerate new sequences from existing sequences. SWAT-U (SWAT-Uniqueness) augments test suites with test cases that produce outputs not observed in the original test suite’s output. Finally, the thesis presents an empirical study of the correlation between new output based test selection criteria and fault detection and structural coverage. The results confirm that using the output does indeed enhance the effectiveness and efficiency of search based test data generation, and enhances test suites’ effectiveness for test sequence regeneration and augmentation. The results also report that output uniqueness criteria are strongly correlated with both fault detection and structural coverage, and are complementary to structural coverage.
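
    To make the output-uniqueness selection criterion above concrete, the Python sketch below greedily keeps only test cases whose observed (normalized) output has not been produced by an earlier test. It illustrates the idea only and is not the SWAT, SART or SWAT-U tooling described in the abstract; the runner and the normalizer are hypothetical stand-ins for executing a server-side test and stripping volatile fragments (timestamps, session identifiers) from its output.

        # Illustrative sketch only (not the thesis's tooling): greedy test
        # selection that keeps a test case only if its observed output is new,
        # i.e. an output-uniqueness selection criterion over whole outputs.

        from typing import Callable, Iterable, List, Set, Tuple


        def select_by_output_uniqueness(
            tests: Iterable[str],
            run: Callable[[str], str],                      # hypothetical: test id -> observed output
            normalize: Callable[[str], str] = lambda s: s,  # e.g. strip timestamps / session ids
        ) -> List[Tuple[str, str]]:
            """Keep each test whose normalized output was not produced by an earlier test."""
            seen: Set[str] = set()
            selected: List[Tuple[str, str]] = []
            for test in tests:
                output = normalize(run(test))
                if output not in seen:
                    seen.add(output)
                    selected.append((test, output))
            return selected


        if __name__ == "__main__":
            # Toy stand-in for a server-side application under test.
            outputs = {"t1": "<p>hello</p>", "t2": "<p>hello</p>", "t3": "<p>error 500</p>"}
            kept = select_by_output_uniqueness(outputs, run=outputs.__getitem__)
            print([t for t, _ in kept])  # -> ['t1', 't3']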

    Test Set Diameter: Quantifying the Diversity of Sets of Test Cases

    A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. An advantage is that TSDm can be applied regardless of data type and on any test-related information, not only the test inputs. A downside is the increased computational time compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection even when only applied to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.
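
    As a sketch of the underlying measure, the snippet below computes a one-step variant of the NCD for multisets over a set of test inputs, using zlib as a stand-in compressor. This is only meant to illustrate the idea behind TSDm; the paper's actual metric and choice of compressors may differ.

        # Hedged sketch of the idea behind TSDm: a one-step NCD-for-multisets
        # computation over test inputs, with zlib standing in for the compressor.
        # This illustrates the measure; it is not the paper's implementation.

        import zlib
        from typing import List


        def c(data: bytes) -> int:
            """Approximate description length by compressed size."""
            return len(zlib.compress(data, 9))


        def ncd_multiset(items: List[bytes]) -> float:
            """One-step multiset NCD: (C(X) - min C(x)) / max C(X without x)."""
            whole = c(b"".join(items))
            singles = [c(x) for x in items]
            leave_one_out = [c(b"".join(items[:i] + items[i + 1:])) for i in range(len(items))]
            return (whole - min(singles)) / max(leave_one_out)


        if __name__ == "__main__":
            similar = [b"abcabcabcabc", b"abcabcabcabd", b"abcabcabcabe"]
            diverse = [b"abcabcabcabc", b"9f$kz!0@q;Tr", b"the quick brown fox"]
            # The more diverse multiset typically scores higher.
            print(ncd_multiset(similar), ncd_multiset(diverse))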

    Output sampling for output diversity in automatic unit test generation

    Diverse test sets are able to expose bugs that test sets generated with structural coverage techniques cannot discover. Input-diverse test set generators have been shown to be effective for this, but also have limitations: e.g., they need to be complemented with semantic information derived from the Software Under Test. We demonstrate how to drive the test set generation process with semantic information in the form of output diversity. We present OutGen, the first fully automatic unit test set generation tool that samples outputs for output diversity. OutGen transforms a program into an SMT formula in bit-vector arithmetic. It then applies universal hashing in order to generate an output-based diverse set of inputs. The result offers significant diversity improvements, measured as a high output uniqueness count. It achieves this by ensuring that the test set’s output probability distribution is uniform, i.e., highly diverse. The use of output sampling, as opposed to input sampling, CBMC, CAVM, behaviour diversity or random testing, improves mutation score and bug detection by up to 4150% and 963%, respectively, on programs drawn from three different corpora: the R-project, SIR and CodeFlaws. OutGen test sets achieve an average mutation score of up to 92%, and 70% of the test sets detect the defect. Moreover, OutGen is the only automatic unit test generation tool that is able to detect bugs in the real-number C functions from the R-project.
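
    The abstract's combination of SMT solving over bit-vectors and universal hashing can be sketched as follows: encode a program as bit-vector constraints, add random XOR (parity) constraints over the output bits, and ask the solver for an input that lands in that random output cell; repeating with fresh hashes spreads the samples across the output space. The toy program and constraint counts below are assumptions for illustration; OutGen's actual translation and sampling pipeline is more involved. The sketch uses the z3-solver Python bindings.

        # Hedged sketch of the universal-hashing idea behind output-diverse
        # sampling (not OutGen itself): encode a toy function as bit-vector
        # constraints with Z3, pin the output into a random XOR-hash cell, and
        # ask the solver for an input that lands there. Requires z3-solver.

        import random
        from z3 import BitVec, BitVecVal, Extract, Solver, sat

        WIDTH = 8


        def toy_program(x):
            """Stand-in program under test, written as a bit-vector expression."""
            return (x * 7 + 3) & BitVecVal(0x3F, WIDTH)  # output uses only 6 bits


        def sample_input_for_random_output_cell(seed):
            rng = random.Random(seed)
            x = BitVec("x", WIDTH)
            y = toy_program(x)
            s = Solver()
            # Random XOR (parity) constraints over output bits: a pairwise-
            # independent hash family over GF(2). Each constraint roughly
            # halves the set of outputs that remain admissible.
            for _ in range(3):
                bits = [i for i in range(WIDTH) if rng.random() < 0.5]
                if not bits:
                    continue
                parity = Extract(bits[0], bits[0], y)
                for i in bits[1:]:
                    parity = parity ^ Extract(i, i, y)
                s.add(parity == BitVecVal(rng.randint(0, 1), 1))
            if s.check() == sat:
                return s.model()[x].as_long()
            return None  # the random cell may be empty for this program


        if __name__ == "__main__":
            samples = {sample_input_for_random_output_cell(seed) for seed in range(20)}
            print(sorted(i for i in samples if i is not None))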

    Hashing fuzzing: introducing input diversity to improve crash detection

    The utility of a test set of program inputs is strongly influenced by its diversity and its size. Syntax coverage has become a standard proxy for diversity. Although more sophisticated measures exist, such as proximity of a sample to a uniform distribution, methods to use them tend to be type dependent. We use r-wise hash functions to create a novel, semantics-preserving testability transformation for C programs that we call HashFuzz. Use of HashFuzz improves the diversity of test sets produced by instrumentation-based fuzzers. We evaluate the effect of the HashFuzz transformation on eight programs from the Google Fuzzer Test Suite using four state-of-the-art fuzzers that have been widely used in previous research. We demonstrate pronounced improvements in the performance of the test sets for the transformed programs across all the fuzzers that we used. These include strong improvements in diversity in every case, maintenance of or a small improvement in branch coverage (up to 4.8% improvement in the best case), and significant improvement in unique crash detection numbers (increases of between 28% and 97% compared to test sets for the untransformed programs).
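
    HashFuzz itself is a source-level transformation of C programs, but the hashing idea can be sketched in a few lines of Python: fold values observed during execution through an r-wise-independent polynomial hash and expose the resulting bucket as extra coverage signal, so that a coverage-guided fuzzer is rewarded for reaching new value combinations rather than only new branches. All names and parameters below are illustrative assumptions, not the paper's instrumentation.

        # Hedged Python sketch of the HashFuzz idea (the paper transforms C
        # programs; everything here is illustrative): mix observed program
        # values through an r-wise-independent polynomial hash and record
        # which hash bucket the running state falls into as extra "coverage".

        import random

        PRIME = 2_147_483_647  # Mersenne prime 2^31 - 1
        NUM_BUCKETS = 64
        R = 4                  # degree-(R-1) polynomial gives R-wise independence


        class HashCoverage:
            def __init__(self, seed=0):
                rng = random.Random(seed)
                self.coeffs = [rng.randrange(1, PRIME) for _ in range(R)]
                self.state = 0
                self.buckets_hit = set()

            def observe(self, value: int) -> None:
                """Mix an observed program value into the running hash state."""
                h = 0
                for coeff in self.coeffs:            # Horner evaluation at `value`
                    h = (h * value + coeff) % PRIME
                self.state = (self.state * 31 + h) % PRIME
                self.buckets_hit.add(self.state % NUM_BUCKETS)


        if __name__ == "__main__":
            cov = HashCoverage(seed=42)
            for v in [0, 1, 1, 2, 255, 255]:
                cov.observe(v)
            # More distinct value sequences hit more buckets, i.e. more feedback.
            print(len(cov.buckets_hit))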

    Test Set Diameter: Quantifying the Diversity of Sets of Test Cases

    A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. A key advantage is that TSDm is a universal measure of diversity and so can be applied to any test set regardless of data type of the test inputs (and, moreover, to other test-related data such as execution traces). But this universality comes at the cost of greater computational effort compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection even when only applied to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.

    DSpot: Test Amplification for Automatic Assessment of Computational Diversity

    Context: Computational diversity, i.e., the presence of a set of programs that all perform compatible services but that exhibit behavioral differences under certain conditions, is essential for fault tolerance and security. Objective: We aim at proposing an approach for automatically assessing the presence of computational diversity. In this work, computationally diverse variants are defined as (i) sharing the same API, (ii) behaving the same according to an input-output based specification (a test suite) and (iii) exhibiting observable differences when they run outside the specified input space. Method: Our technique relies on test amplification. We propose source code transformations on test cases to explore the input domain and systematically sense the observation domain. We quantify computational diversity as the dissimilarity between observations on inputs that are outside the specified domain. Results: We run our experiments on 472 variants of 7 open-source, large and thoroughly tested Java classes. Our test amplification multiplies the number of input points in the test suite by ten and is effective at detecting software diversity. Conclusion: The key insights of this study are: the systematic exploration of the observable output space of a class provides new insights about its degree of encapsulation; and the behavioral diversity that we observe originates from areas of the code that are characterized by their flexibility (caching, checking, formatting, etc.).
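
    The amplification step can be illustrated with a small, self-contained sketch: derive new inputs from a test's literals by simple mutations, run two variants that agree on the specified input domain, and count the amplified inputs on which their observable outputs diverge. DSpot itself rewrites Java test sources and adds observation points; the functions below are hypothetical stand-ins for that machinery.

        # Hedged sketch of input amplification and diversity measurement
        # (not DSpot itself): mutate a test's literal inputs, then compare the
        # observable outputs of two variants outside the specified domain.

        import random
        from typing import Callable, List


        def amplify_inputs(seed_inputs: List[int], rounds: int = 10, rng_seed: int = 0) -> List[int]:
            """Derive new inputs from a test's literals by small, literal-level mutations."""
            rng = random.Random(rng_seed)
            amplified = list(seed_inputs)
            for _ in range(rounds):
                base = rng.choice(seed_inputs)
                amplified.append(base + rng.choice([-1, 1, 10, -10, 1000]))
            return amplified


        def diversity(variant_a: Callable[[int], int], variant_b: Callable[[int], int],
                      inputs: List[int]) -> float:
            """Fraction of amplified inputs on which the two variants observably differ."""
            diffs = sum(1 for i in inputs if variant_a(i) != variant_b(i))
            return diffs / len(inputs)


        if __name__ == "__main__":
            # Two variants that agree on the specified domain (0..9) but not outside it.
            def a(x):
                return x * x

            def b(x):
                return x * x if 0 <= x < 10 else x * x + 1

            amplified = amplify_inputs(seed_inputs=[1, 5, 9])
            print(diversity(a, b, amplified))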

    Learning How to Search: Generating Exception-Triggering Tests Through Adaptive Fitness Function Selection

    Search-based test generation is guided by feedback from one or more fitness functions—scoring functions that judge solution optimality. Choosing informative fitness functions is crucial to meeting the goals of a tester. Unfortunately, many goals—such as forcing the class-under-test to throw exceptions—do not have a known fitness function formulation. We propose that meeting such goals requires treating fitness function identification as a secondary optimization step. An adaptive algorithm that can vary the selection of fitness functions could adjust its selection throughout the generation process to maximize goal attainment, based on the current population of test suites. To test this hypothesis, we have implemented two reinforcement learning algorithms in the EvoSuite framework, and used these algorithms to dynamically set the fitness functions used during generation. We have evaluated our framework, EvoSuiteFIT, on a set of 386 real faults. EvoSuiteFIT discovers and retains more exception-triggering inputs and produces suites that detect a variety of faults missed by the other techniques. The ability to adjust fitness functions allows EvoSuiteFIT to make strategic choices that efficiently produce more effective test suites.
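
    The control loop the abstract describes can be sketched as a simple bandit: each arm is a candidate fitness function, and the reward is how many new exception-triggering tests the last generation round produced. EvoSuiteFIT implements specific reinforcement learning algorithms inside EvoSuite; the epsilon-greedy loop and the toy reward functions below are assumptions chosen only to show the shape of adaptive fitness function selection.

        # Hedged sketch of adaptive fitness-function selection as an
        # epsilon-greedy bandit (illustrative only, not EvoSuiteFIT's code):
        # arms are candidate fitness functions, reward is new exceptions found.

        import random
        from typing import Callable, Dict


        def epsilon_greedy_fitness_selection(
            fitness_functions: Dict[str, Callable[[], int]],  # name -> #new exceptions this round
            rounds: int = 50,
            epsilon: float = 0.2,
            seed: int = 0,
        ) -> Dict[str, float]:
            rng = random.Random(seed)
            counts = {name: 0 for name in fitness_functions}
            values = {name: 0.0 for name in fitness_functions}  # running mean reward
            for _ in range(rounds):
                if rng.random() < epsilon:
                    name = rng.choice(list(fitness_functions))  # explore
                else:
                    name = max(values, key=values.get)          # exploit current best
                reward = fitness_functions[name]()              # one generation step
                counts[name] += 1
                values[name] += (reward - values[name]) / counts[name]
            return values


        if __name__ == "__main__":
            # Toy stand-ins: reward = number of new exception-triggering tests found.
            arms = {
                "branch_coverage": lambda: random.randint(0, 1),
                "exception_count": lambda: random.randint(0, 3),
                "output_diversity": lambda: random.randint(0, 2),
            }
            print(epsilon_greedy_fitness_selection(arms))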
