25 research outputs found

    Predicting effectiveness of automatic testing tools

    No full text
    Automatic white-box test generation is a challenging problem. Many existing tools rely on complex code analyses and heuristics. As a result, structural features of an input program may impact tool effectiveness in ways that tool users and designers may not expect or understand. We develop a technique that uses structural program metrics to predict the test coverage achieved by three automatic test generation tools. We use coverage and structural metrics extracted from 11 software projects to train several decision tree classifiers. Our experiments show that these classifiers can predict high or low coverage with success rates of 82% to 94%.

    Predicting and explaining automatic testing tool effectiveness, University of Illinois at Urbana-Champaign

    No full text
    Automatic white-box test generation is a challenging problem. Many existing tools rely on complex code analyses and heuristics. As a result, structural features of an input program may impact tool effectiveness in ways that tool users and designers may not expect or understand. We develop a technique that uses structural program metrics to both predict and explain the test coverage achieved by three automatic test generation tools. We use coverage and structural metrics extracted from 11 software projects to train several decision-tree classifiers. These classifiers can predict high or low coverage with success rates of 82% to 94%. In addition, they show tool users and designers the program structures that impact tool effectiveness.

    Theories in Practice: Easy-to-Write Specifications that Catch Bugs

    Get PDF
    Automated testing during development helps ensure that software works according to the test suite. Traditional test suites verify a few well-picked scenarios or example inputs. However, such example-based testing does not uncover errors in legal inputs that the test writer overlooked. We propose theory-based testing as an adjunct to example-based testing. A theory generalizes a (possibly infinite) set of example-based tests. A theory is an assertion that should be true for any data, and it can be exercised by human-chosen data or by automatic data generation. A theory is expressed in an ordinary programming language, is easy for developers to use (often even easier than example-based testing), and serves as a lightweight form of specification. Six case studies demonstrate the utility of theories that generalize existing tests to prevent bugs, clarify intentions, and reveal design problems.
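
    As a rough illustration of the idea (not code from the paper), a theory can be written with JUnit 4's Theories runner; the reverse-twice property and the hand-picked data points below are hypothetical, and an automatic generator could supply the data instead:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assume.assumeTrue;

    import org.junit.experimental.theories.DataPoints;
    import org.junit.experimental.theories.Theories;
    import org.junit.experimental.theories.Theory;
    import org.junit.runner.RunWith;

    // A theory asserts a property over all supplied data points,
    // not just one hand-picked example execution.
    @RunWith(Theories.class)
    public class ReverseTheoryTest {

        // Human-chosen data; automatic data generation could supply values instead.
        @DataPoints
        public static String[] inputs = {"", "a", "abc", "madam", "hello world"};

        // For-all property: reversing a string twice yields the original string.
        @Theory
        public void reverseIsItsOwnInverse(String s) {
            assumeTrue(s != null); // precondition: theory applies only to non-null inputs
            String reversed = new StringBuilder(s).reverse().toString();
            assertEquals(s, new StringBuilder(reversed).reverse().toString());
        }
    }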

    Predicting and Explaining Automatic Testing Tool Effectiveness

    Get PDF
    Automatic white-box test generation is a challenging problem. Many existing tools rely on complex code analyses and heuristics. As a result, structural features of an input program may impact tool effectiveness in ways that tool users and designers may not expect or understand. We develop a technique that uses structural program metrics to both predict and explain the test coverage achieved by three automatic test generation tools. We use coverage and structural metrics extracted from 11 software projects to train several decision-tree classifiers. These classifiers can predict high or low coverage with success rates of 82% to 94%. In addition, they show tool users and designers the program structures that impact tool effectiveness.
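
    The abstract does not say which decision-tree implementation the study used; as a minimal sketch of the described pipeline, assuming Weka's J48 learner and a hypothetical metrics.arff file with one row of structural metrics per program unit and a nominal high/low coverage label as the last attribute:

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    import java.util.Random;

    public class CoveragePredictorSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical ARFF export: structural metrics plus a high/low coverage label.
            Instances data = new DataSource("metrics.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            // Train a decision tree; its printed form doubles as an explanation of
            // which structural features drive the high/low coverage prediction.
            J48 tree = new J48();
            tree.buildClassifier(data);
            System.out.println(tree);

            // Estimate predictive accuracy with 10-fold cross-validation.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new J48(), data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }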

    Aligning development tools with the way programmers think about code changes

    No full text
    Software developers must modify their programs to keep up with changing requirements and designs. Often, a conceptually simple change can require numerous edits that are similar but not identical, leading to errors and omissions. Researchers have designed programming environments to address this problem, but most of these systems are counter-intuitive and difficult to use. By applying a task-centered design process, we developed a visual tool that allows programmers to make complex code transformations in an intuitive manner. This approach uses a representation that aligns well with programmers' mental models of programming structures. The visual language combines textual and graphical elements and is expressive enough to support a broad range of code-changing tasks. To simplify learning the system, its user interface scaffolds construction and execution of transformations. An evaluation with Java programmers suggests that the interface is intuitive, easy to learn, and effective on a representative editing task.
    Author keywords: Transformations, visual languages, cognitive dimensions

    The Practice of Theories: Adding “For-all” Statements to “There-Exists” Tests

    No full text
    Traditional unit tests in test-driven development compare a few concrete example executions against the developer’s definition of correct behavior. However, a developer knows more about how a program should behave than can be expressed through concrete examples. These general insights can be captured as theories, which precisely express software properties over potentially infinite sets of values. Combining tests with theories allows developers to say what they mean, and guarantee that their code is intuitively correct, with less effort. The consistent format of theories enables automatic tools to generate or discover values that violate these properties, discovering bugs that developers didn’t think to test for.

    Interactive transformation of Java programs in Eclipse

    No full text
    Implementing large and sweeping changes to software source code can be tedious and error-prone. A conceptually simple change may require a significant code editing effort. Integrating scriptable source-to-source program transformations into development environments can assist developers with this task. We present a developer-oriented interactive source code transformation tool for Java that addresses this need.
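
    The abstract includes no code; as a sketch of the shape such a scriptable Java-to-Java transformation can take when built on the Eclipse JDT AST (the input snippet and the rename transformation below are hypothetical, not the tool's actual scripting interface):

    import org.eclipse.jdt.core.dom.AST;
    import org.eclipse.jdt.core.dom.ASTParser;
    import org.eclipse.jdt.core.dom.CompilationUnit;
    import org.eclipse.jdt.core.dom.MethodDeclaration;
    import org.eclipse.jdt.core.dom.TypeDeclaration;
    import org.eclipse.jdt.core.dom.rewrite.ASTRewrite;
    import org.eclipse.jface.text.Document;
    import org.eclipse.text.edits.TextEdit;

    public class RenameSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical input program; a real tool would operate on workspace files.
            Document doc = new Document("class C { void oldName() {} }");

            // Parse the source into a JDT abstract syntax tree.
            ASTParser parser = ASTParser.newParser(AST.JLS8);
            parser.setKind(ASTParser.K_COMPILATION_UNIT);
            parser.setSource(doc.get().toCharArray());
            CompilationUnit unit = (CompilationUnit) parser.createAST(null);

            // Record one transformation: rename the first method of the first type.
            AST ast = unit.getAST();
            ASTRewrite rewrite = ASTRewrite.create(ast);
            MethodDeclaration method =
                    ((TypeDeclaration) unit.types().get(0)).getMethods()[0];
            rewrite.replace(method.getName(), ast.newSimpleName("newName"), null);

            // Apply the recorded edits back to the source text.
            TextEdit edits = rewrite.rewriteAST(doc, null);
            edits.apply(doc);
            System.out.println(doc.get());
        }
    }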