
    Test Advising Framework

    Test cases are represented in various formats, depending on the process, the technique, or the tool used to generate them. While different test case representations are necessary, this diversity makes it difficult to compare test cases and to leverage the strengths of one kind of test in another; a common test representation would help. In this thesis, we define a new Test Case Language (TCL) that can represent test cases that vary in structure and are generated by multiple test generation frameworks. We also present a methodology for transforming test cases of varying representations into a common format in which they can be matched and analyzed. On top of this common representation in our test case description language, we define five advice functions that leverage the testing strength of one type of test to improve the effectiveness of other types. These advice functions analyze the test input values, method call sequences, or test oracles of a source test suite to derive advice, and use that advice to amplify the effectiveness of an original test suite. Our assessment shows that the amplified test suite derived from the advice functions improves code coverage and mutant kill score compared to the original test suite before the advice functions were applied.
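
    As a rough illustration of the amplification idea, the sketch below mines literal argument values from a source test suite and clones tests in an original suite with those values. It is a minimal Java sketch; the types and method names (Call, TestCase, mineInputValues, amplify) are assumptions made for illustration and are not the thesis's actual TCL API.

        import java.util.*;

        public class ValueAdviceSketch {
            // Minimal stand-ins for test cases in a common representation.
            record Call(String method, List<Object> args) {}
            record TestCase(String name, List<Call> calls) {}

            // Advice extraction: distinct argument values seen in the source suite.
            static Set<Object> mineInputValues(List<TestCase> sourceSuite) {
                Set<Object> values = new LinkedHashSet<>();
                for (TestCase tc : sourceSuite)
                    for (Call c : tc.calls())
                        values.addAll(c.args());
                return values;
            }

            // Amplification: clone each original test, replacing the first argument
            // of every call with an advised value mined from the source suite.
            static List<TestCase> amplify(List<TestCase> original, Set<Object> advice) {
                List<TestCase> amplified = new ArrayList<>(original);
                for (TestCase tc : original) {
                    int variant = 0;
                    for (Object v : advice) {
                        List<Call> newCalls = new ArrayList<>();
                        for (Call c : tc.calls()) {
                            List<Object> newArgs = new ArrayList<>(c.args());
                            if (!newArgs.isEmpty()) newArgs.set(0, v);
                            newCalls.add(new Call(c.method(), newArgs));
                        }
                        amplified.add(new TestCase(tc.name() + "_advised" + variant++, newCalls));
                    }
                }
                return amplified;
            }
        }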

    Dynamic Analysis can be Improved with Automatic Test Suite Refactoring

    Context: Developers design test suites to automatically verify that software meets its expected behavior. Many dynamic analysis techniques rely on execution traces obtained from test cases; in practice, however, the execution of one manually written test case yields only a single trace. Objective: In this paper, we propose a new test suite refactoring technique, called B-Refactoring. The idea behind B-Refactoring is to split a test case into small test fragments, each covering a simpler part of the control flow, so as to provide better support for dynamic analysis. Method: For a given dynamic analysis technique, our test suite refactoring approach monitors the execution of test cases and identifies small test cases without loss of test ability. We apply B-Refactoring to assist two existing analysis tasks: automatic repair of if-statement bugs and automatic analysis of exception contracts. Results: Experimental results show that test suite refactoring effectively simplifies the execution traces of the test suite. Three real-world bugs that could not previously be fixed with the original test suite are fixed after applying B-Refactoring; exception contracts are also better verified when B-Refactoring is applied to the original test suites. Conclusions: We conclude that applying B-Refactoring effectively improves the purity of test cases and that existing dynamic analysis tasks can be enhanced by test suite refactoring.
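
    The splitting step can be pictured with the following simplified Java sketch: a monolithic test body is cut into fragments wherever the covered application region changes, so each fragment produces a simpler execution trace. The Statement record and the notion of a "covered region" are illustrative assumptions; the actual approach works on monitored executions of real JUnit test cases.

        import java.util.*;

        public class TestSplitSketch {
            // One executable test statement, tagged with the application region it exercises.
            record Statement(String code, String coveredRegion) {}

            // Group consecutive statements that exercise the same region into one fragment;
            // each fragment can then become a smaller, purer test case.
            static List<List<Statement>> splitIntoFragments(List<Statement> testBody) {
                List<List<Statement>> fragments = new ArrayList<>();
                List<Statement> current = new ArrayList<>();
                String region = null;
                for (Statement s : testBody) {
                    if (region != null && !region.equals(s.coveredRegion()) && !current.isEmpty()) {
                        fragments.add(current);
                        current = new ArrayList<>();
                    }
                    current.add(s);
                    region = s.coveredRegion();
                }
                if (!current.isEmpty()) fragments.add(current);
                return fragments;
            }
        }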

    Combining over- and under-approximating program analyses for automatic software testing

    This dissertation attacks the well-known problem of path-imprecision in static program analysis. Our starting point is an existing static program analysis that over-approximates the execution paths of the analyzed program. We then make this over-approximating program analysis more precise for automatic testing in an object-oriented programming language. We achieve this by combining the over-approximating program analysis with usage-observing and under-approximating analyses. More specifically, we make the following contributions. We present a technique to eliminate language-level unsound bug warnings produced by an execution-path-over-approximating analysis for object-oriented programs that is based on the weakest precondition calculus. Our technique post-processes the results of the over-approximating analysis by solving the produced constraint systems and generating and executing concrete test-cases that satisfy the given constraint systems. Only test-cases that confirm the results of the over-approximating static analysis are presented to the user. This technique has the important side-benefit of making the results of a weakest-precondition based static analysis easier to understand for human consumers. We show examples from our experiments that visually demonstrate the difference between hundreds of complicated constraints and a simple corresponding JUnit test-case. Besides eliminating language-level unsound bug warnings, we present an additional technique that also addresses user-level unsound bug warnings. This technique pre-processes the testee with a dynamic analysis that takes advantage of actual user data. It annotates the testee with the knowledge obtained from this pre-processing step and thereby provides guidance for the over-approximating analysis. We also present an improvement to dynamic invariant detection for object-oriented programming languages. Previous approaches do not take behavioral subtyping into account and therefore may produce inconsistent results, which can throw off automated analyses such as the ones we are performing for bug-finding. Finally, we address the problem of unwanted dependencies between test-cases caused by global state. We present two techniques for efficiently re-initializing global state between test-case executions and discuss their trade-offs. We have implemented the above techniques in the JCrasher, Check 'n' Crash, and DSD-Crasher tools and present initial experience in using them for automated bug finding in real-world Java programs.
    Ph.D. dissertation. Committee Chair: Smaragdakis, Yannis; Committee Members: Dwyer, Matthew; Orso, Alessandro; Pande, Santosh; Rugaber, Spence
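
    To make the contrast between constraint systems and concrete tests tangible, here is an invented example of the kind of JUnit test such a pipeline might emit to confirm a warning (assuming JUnit 4.13+): the solved constraint values become concrete arguments, and the warning is reported only if the predicted exception actually occurs. The method under test and all names are hypothetical, not output of the actual tools.

        import static org.junit.Assert.assertThrows;
        import org.junit.Test;

        public class ConfirmWarningTest {
            // A tiny stand-in for an application method flagged by the static analysis.
            static int absoluteDifference(int[] values, int index) {
                return Math.abs(values[index] - values[0]); // possible out-of-bounds access
            }

            @Test
            public void confirmsIndexOutOfBoundsWarning() {
                // Concrete inputs obtained by solving the path's constraint system
                // (values.length == 1, index == 1); the warning is confirmed only
                // because this execution really raises the predicted exception.
                assertThrows(ArrayIndexOutOfBoundsException.class,
                        () -> absoluteDifference(new int[] { 5 }, 1));
            }
        }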

    Automatically generating complex test cases from simple ones

    While source code expresses and implements the design of a software system, test cases capture and represent the developer's domain knowledge, their assumptions about the implicit and explicit interaction protocols in the system, and the expected behavior of the system's modules under normal and exceptional conditions. Moreover, test cases capture information about the environment and the data the system operates on. As such, together with the system source code, test cases embody important system and domain knowledge. Besides being an important project artifact, test cases account for up to half of the overall software development cost and effort. Software projects produce many test cases of different kinds and granularities to thoroughly check the system functionality, aiming to prevent, detect, and remove different types of faults. Simple test cases exercise small parts of the system to detect faults in single modules. More complex integration and system test cases exercise larger parts of the system to detect problems in module interactions and to verify the functionality of the system as a whole. Not surprisingly, this complexity comes at a cost: developing complex test cases is a laborious and expensive task that is hard to automate. Our intuition is that the information naturally present in test cases can be reused to reduce the effort of generating new test cases. This thesis develops this intuition and investigates the phenomenon of information reuse among test cases. We first empirically investigated many test cases from real software projects and showed that test cases of different granularity indeed share code fragments and build upon each other. We then proposed an approach for automatically generating complex test cases by extracting and exploiting information in existing simple ones. In particular, our approach automatically generates integration test cases from unit test cases. We implemented our approach in a prototype to evaluate its ability to generate new and useful test cases for real software systems. Our studies show that test cases generated with our approach reveal new interaction faults even in well-tested applications. We evaluated the effectiveness of our approach by comparing it with state-of-the-art test generation techniques. The results show that our approach is effective: it finds relevant faults that other approaches miss, while those approaches tend to find different and usually less relevant faults.
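
    The reuse intuition can be sketched as follows: object-creation fragments harvested from existing unit tests are recombined into a new test that exercises an interaction between two modules. The data model below (fragments indexed by the type they produce) and all names are assumptions for illustration only, not the representation used in the thesis.

        import java.util.*;

        public class IntegrationSynthesisSketch {
            // A reusable snippet from a unit test, with the type of object it builds.
            record Fragment(String producesType, String code) {}

            // Stitch together setup fragments for the callee and the caller, then append
            // a call that makes the two modules interact.
            static String synthesizeIntegrationTest(Map<String, Fragment> fragmentsByType,
                                                    String callerType, String calleeType,
                                                    String interactionCall) {
                StringBuilder body = new StringBuilder();
                body.append(fragmentsByType.get(calleeType).code()).append("\n");
                body.append(fragmentsByType.get(callerType).code()).append("\n");
                body.append(interactionCall).append("\n");
                return "@Test public void generatedIntegrationTest() {\n" + body + "}\n";
            }
        }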

    Evolution and Fragilities in Scripted GUI Testing of Android applications

    The literature provides evidence that Android applications are not tested as rigorously as their desktop counterparts. However, especially as concerns the Graphical User Interface (GUI) of mobile apps, thorough testing would be advisable for developers. Some peculiarities of Android applications discourage developers from performing automated testing. Among them, we recognize fragility, i.e., test classes failing because of modifications in the GUI alone, without the application functionality being modified. The aim of this study is to provide a preliminary characterization of the fragility issue for Android apps, identifying some of its causes and estimating its frequency among Android open-source projects. We defined a set of metrics to quantify the amount of fragility of any test suite and measured them automatically for a set of repositories hosted on GitHub. We found that, for projects featuring GUI tests, the incidence of fragility is around 10% for test classes and around 5% for test methods. This means that developers must invest significant effort in fixing their test suites because of the occurrence of fragilities.
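
    One such incidence metric could be computed roughly as in the sketch below: the share of test classes that were modified in revisions where only the GUI definition changed. The revision model here is an assumption for illustration; the study's actual metrics and mining procedure are defined in the paper.

        import java.util.*;

        public class FragilityMetricsSketch {
            // A repository revision: did it touch only the GUI, and which test classes changed?
            record Revision(boolean guiOnlyChange, Set<String> modifiedTestClasses) {}

            // Fragility incidence for test classes: fragile classes / all observed test classes.
            static double classFragility(List<Revision> history, Set<String> allTestClasses) {
                Set<String> fragile = new HashSet<>();
                for (Revision r : history)
                    if (r.guiOnlyChange())
                        fragile.addAll(r.modifiedTestClasses());
                return allTestClasses.isEmpty() ? 0.0
                        : (double) fragile.size() / allTestClasses.size();
            }
        }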