
    Finding the needles in the haystack: Generating legal test inputs for object-oriented programs

    A test input for an object-oriented program typically consists of a sequence of method calls that use the API defined by the program under test. Generating legal test inputs can be challenging because, for some programs, the set of legal method sequences is much smaller than the set of all possible sequences; without a formal specification of legal sequences, an input generator is bound to produce mostly illegal sequences. We propose a scalable technique that combines dynamic analysis with random testing to help an input generator create legal test inputs without a formal specification, even for programs in which most sequences are illegal. The technique uses an example execution of the program to infer a model of legal call sequences, and uses the model to guide a random input generator towards legal but behaviorally-diverse sequences. We have implemented our technique for Java, in a tool called Palulu, and evaluated its effectiveness in creating legal inputs for real programs. Our experimental results indicate that the technique is effective and scalable. Our preliminary evaluation indicates that the technique can quickly generate legal sequences for complex inputs: in a case study, Palulu created legal test inputs in seconds for a set of complex classes, for which it took an expert thirty minutes to generate a single legal input.
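
    The abstract describes the approach only at a high level; the Java sketch below illustrates one way a call-sequence model inferred from an example execution could guide a random generator towards legal sequences. The model shape (a map from each method to its observed legal successors) and all names are our own illustrative assumptions, not Palulu's actual data structures.

    import java.util.*;

    // Sketch of model-guided random sequence generation (illustrative, not
    // Palulu's implementation): the model maps each method to the successors
    // observed to legally follow it in an example execution, and a bounded
    // random walk over the model yields candidate test sequences.
    public class ModelGuidedGenerator {
        private final Map<String, List<String>> successors;  // method -> legal successors
        private final Random random = new Random();

        public ModelGuidedGenerator(Map<String, List<String>> successors) {
            this.successors = successors;
        }

        // Produce one random, model-conforming call sequence of bounded length.
        public List<String> generateSequence(String start, int maxLength) {
            List<String> sequence = new ArrayList<>();
            String current = start;
            while (sequence.size() < maxLength && current != null) {
                sequence.add(current);
                List<String> next = successors.getOrDefault(current, List.of());
                current = next.isEmpty() ? null : next.get(random.nextInt(next.size()));
            }
            return sequence;
        }

        public static void main(String[] args) {
            // Toy model inferred from an example run of a connection-like API:
            // open() precedes query(), which may repeat before close().
            Map<String, List<String>> model = Map.of(
                "open",  List.of("query"),
                "query", List.of("query", "close"),
                "close", List.of());
            System.out.println(new ModelGuidedGenerator(model).generateSequence("open", 5));
        }
    }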

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
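
    As a rough illustration of the reflection-driven, bounded exhaustive exploration the abstract mentions, the following sketch enumerates a class's public zero-argument methods and exercises every call sequence up to a small depth, printing each outcome as a candidate oracle value for a tester to confirm. This workflow is an assumption on our part and is far simpler than JWalk's lazy, interactive approach.

    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;
    import java.util.*;

    // Illustrative bounded-exhaustive exploration of zero-argument method
    // protocols via reflection; each observed result (or raised exception)
    // is printed as a candidate oracle value.
    public class ProtocolExplorer {
        public static void explore(Class<?> cls, int depth) throws Exception {
            List<Method> methods = new ArrayList<>();
            for (Method m : cls.getDeclaredMethods())
                if (Modifier.isPublic(m.getModifiers()) && m.getParameterCount() == 0)
                    methods.add(m);
            explore(cls, methods, new ArrayList<>(), depth);
        }

        private static void explore(Class<?> cls, List<Method> methods,
                                    List<Method> prefix, int remaining) throws Exception {
            if (remaining == 0) return;
            for (Method m : methods) {
                List<Method> sequence = new ArrayList<>(prefix);
                sequence.add(m);
                Object receiver = cls.getDeclaredConstructor().newInstance();
                Object outcome;
                try {
                    Object result = null;
                    for (Method step : sequence) result = step.invoke(receiver);
                    outcome = result;          // normal result is the candidate oracle
                } catch (Exception e) {
                    outcome = e;               // a raised exception is also an outcome
                }
                System.out.println(names(sequence) + " -> " + outcome);
                explore(cls, methods, sequence, remaining - 1);
            }
        }

        private static String names(List<Method> sequence) {
            StringBuilder sb = new StringBuilder();
            for (Method m : sequence) sb.append(m.getName()).append("() ");
            return sb.toString().trim();
        }

        public static void main(String[] args) throws Exception {
            explore(StringBuilder.class, 2);   // bounded exploration of a JDK class
        }
    }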

    Testing coupling relationships in object-oriented programs

    As we move toward developing object-oriented (OO) programs, the complexity traditionally found in functions and procedures is moving to the connections among components. Different faults occur when components are integrated to form higher-level structures that aggregate behavior and state. Consequently, we need to place more effort on testing the connections among components. Although OO technologies provide abstraction mechanisms for building components that can then be integrated to form applications, they also add new compositional relations that can contain faults. This paper describes techniques for analyzing and testing the polymorphic relationships that occur in OO software. The techniques adapt traditional data flow coverage criteria to consider definitions and uses among state variables of classes, particularly in the presence of inheritance, dynamic binding, and polymorphic overriding of state variables and methods. Applying these techniques can increase the ability to find faults and yield overall higher-quality software.
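
    The following toy Java classes (hypothetical names) show the kind of polymorphic def-use coupling such criteria target: a definition of a state variable made through a superclass reference couples with a use whose effect depends on which override is dynamically bound, so an adequate test suite must exercise the pair under each possible dynamic type.

    // Hypothetical classes illustrating a polymorphic def-use coupling.
    class Account {
        protected int balance;                               // state variable under test
        void deposit(int amount) { balance += amount; }      // definition of balance
        int available() { return balance; }                  // use of balance
    }

    class CreditAccount extends Account {
        protected int creditLimit = 100;
        @Override
        int available() { return balance + creditLimit; }    // overriding use
    }

    public class CouplingDemo {
        // The def in deposit() couples with a use whose effect depends on the
        // dynamic type of 'a'; coupling-based criteria require the pair to be
        // exercised under each possible binding.
        static int defThenUse(Account a) {
            a.deposit(50);          // definition through the superclass reference
            return a.available();   // use, resolved by dynamic binding
        }

        public static void main(String[] args) {
            System.out.println(defThenUse(new Account()));        // 50
            System.out.println(defThenUse(new CreditAccount()));  // 150
        }
    }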

    A methodology of testing high-level Petri nets

    Petri nets have been extensively used in the modelling and analysis of concurrent and distributed systems. The verification and validation of Petri nets are of particular importance in the development of concurrent and distributed systems. As a complement to formal analysis techniques, testing has been proven to be effective in detecting system errors and is easy to apply. An open problem is how to test Petri nets systematically, effectively and efficiently. An approach to solving this problem is to develop test criteria so that test adequacy can be measured objectively and test cases can be generated efficiently, even automatically. In this paper, we present a methodology for testing high-level Petri nets based on our general theory of testing concurrent software systems. Four types of testing strategies are investigated: state-oriented testing, transition-oriented testing, flow-oriented testing and specification-oriented testing. For each strategy, a set of schemes to observe and record testing results and a set of coverage criteria to measure test adequacy are defined. The subsumption relationships and extraction relationships among the proposed testing methods are systematically investigated and formally proved.
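
    As a minimal illustration of one of the four strategies, the sketch below measures transition coverage for a toy place/transition net: a test run is a sequence of firings, and adequacy is the fraction of transitions fired at least once. The net, the API, and the coverage bookkeeping are simplified assumptions of ours; the paper's criteria are defined for high-level nets and are considerably richer.

    import java.util.*;

    // Toy place/transition net with transition-coverage bookkeeping.
    public class PetriNetCoverage {
        private final Map<String, Integer> marking = new HashMap<>();       // place -> tokens
        private final Map<String, List<String>> inputs = new HashMap<>();   // transition -> input places
        private final Map<String, List<String>> outputs = new HashMap<>();  // transition -> output places
        private final Set<String> fired = new HashSet<>();

        void place(String p, int tokens) { marking.put(p, tokens); }

        void transition(String t, List<String> in, List<String> out) {
            inputs.put(t, in);
            outputs.put(t, out);
        }

        // Fire a transition if it is enabled; record it for coverage measurement.
        boolean fire(String t) {
            for (String p : inputs.get(t))
                if (marking.getOrDefault(p, 0) == 0) return false;    // not enabled
            for (String p : inputs.get(t))  marking.merge(p, -1, Integer::sum);
            for (String p : outputs.get(t)) marking.merge(p, 1, Integer::sum);
            fired.add(t);
            return true;
        }

        // Transition coverage: fraction of transitions fired at least once.
        double transitionCoverage() { return (double) fired.size() / inputs.size(); }

        public static void main(String[] args) {
            PetriNetCoverage net = new PetriNetCoverage();
            net.place("idle", 1);
            net.place("busy", 0);
            net.transition("start",  List.of("idle"), List.of("busy"));
            net.transition("finish", List.of("busy"), List.of("idle"));
            net.fire("start");                                // a test sequence of firings
            System.out.println(net.transitionCoverage());     // 0.5: 'finish' not yet covered
        }
    }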

    Extending Traditional Static Analysis Techniques to Support Development, Testing and Maintenance of Component-Based Solutions

    Traditional static code analysis encompasses a mature set of techniques for helping understand and optimize programs, such as dead code elimination, program slicing, and partial evaluation (code specialization). It is well understood that, compared to other program analysis techniques (e.g., dynamic analysis), static analysis techniques do a reasonable job for the cost associated with implementing them. Industry and government are moving away from more ‘traditional’ development approaches towards component-based approaches as ‘the norm.’ Component-based applications most often comprise a collection of distributed object-oriented components such as forms, code snippets, reports, modules, databases, objects, containers, and the like. These components are glued together by code typically written in a visual language. Some industrial experience shows that component-based development and the subsequent use of visual development environments, while reducing an application’s total development time, actually increase certain maintenance problems. This provides a motivation for using automated analysis techniques on such systems. The results of this research show that traditional static analysis techniques may not be sufficient for analyzing component-based systems. We examine closely the characteristics of a component-based system and document many of the issues that we feel make the development, analysis, testing and maintenance of such systems more difficult. By analyzing additional summary information for the components as well as any available source code for an application, we show ways in which traditional static analysis techniques may be augmented, thereby increasing the accuracy of static analysis results and ultimately making the maintenance of component-based systems a manageable task. We develop a technique to use semantic information about component properties obtained from type library and interface definition language files, and demonstrate this technique by extending a traditional unreachable code algorithm. To support more complex analysis, we then develop a technique for component developers to provide summary information about a component. This information can be integrated with several traditional static analysis techniques to analyze component-based systems more precisely. We then demonstrate the effectiveness of these techniques on several real Department of Defense (DoD) COTS component-based systems.
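
    A toy reachability example may help make the unreachable-code extension concrete: analyzing only the glue code misses calls a component makes back through its interface, so callback methods look dead, whereas summary edges supplied for the component restore them. The graph, names, and summary format below are illustrative assumptions, not the dissertation's actual representation.

    import java.util.*;

    // Toy reachability analysis: methods not reachable from the entry point are
    // candidates for dead-code reporting. Without summary edges for the
    // component, a callback registered with it appears unreachable.
    public class ReachableCode {
        static Set<String> reachable(Map<String, List<String>> callGraph, String entry) {
            Set<String> seen = new HashSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(entry));
            while (!work.isEmpty()) {
                String m = work.pop();
                if (seen.add(m))
                    work.addAll(callGraph.getOrDefault(m, List.of()));
            }
            return seen;
        }

        public static void main(String[] args) {
            // Glue-code call graph only: main registers a callback with a COTS
            // grid component, but the analysis never sees the component call it back.
            Map<String, List<String>> glueOnly = new HashMap<>(Map.of(
                "main",         List.of("grid.register"),
                "onCellEdited", List.of("validate")));
            System.out.println(reachable(glueOnly, "main"));    // misses onCellEdited

            // Summary information supplied for the component adds the callback edge.
            Map<String, List<String>> withSummary = new HashMap<>(glueOnly);
            withSummary.put("grid.register", List.of("onCellEdited"));
            System.out.println(reachable(withSummary, "main")); // now includes it
        }
    }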

    Dynamically fighting bugs : prevention, detection and elimination

    Thesis (Ph. D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 147-160). This dissertation presents three test-generation techniques that are used to improve software quality. Each of our techniques targets bugs that are found by different stakeholders: developers, testers, and maintainers. We implemented and evaluated our techniques on real code. We present the design of each tool and conduct an experimental evaluation of the tools against available alternatives. Developers need to prevent regression errors when they create new functionality. This dissertation presents a technique that helps developers prevent regression errors in object-oriented programs by automatically generating unit-level regression tests. Our technique generates regression tests by using models created dynamically from example executions. In our evaluation, our technique created effective regression tests, and achieved good coverage even for programs with constrained APIs. Testers need to detect bugs in programs. This dissertation presents a technique that helps testers detect and localize bugs in web applications. Our technique automatically creates tests that expose failures by combining dynamic test generation with explicit-state model checking. In our evaluation, our technique discovered hundreds of faults in real applications. Maintainers have to reproduce failing executions in order to eliminate bugs found in deployed programs. This dissertation presents a technique that helps maintainers eliminate bugs by generating tests that reproduce failing executions. Our technique automatically generates tests that reproduce the failed executions by monitoring methods and storing optimized states of method arguments. In our evaluation, our technique reproduced failures with low overhead in real programs. Analyses need to avoid unnecessary computations in order to scale. This dissertation presents a technique that helps our other techniques scale by inferring the mutability classification of arguments. Our technique classifies mutability by combining static analyses and a novel dynamic mutability analysis. In our evaluation, our technique efficiently and correctly classified most of the arguments for programs with more than a hundred thousand lines of code. by Shay Artzi. Ph.D.
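
    As a hedged sketch of the failure-reproduction idea summarized above (monitoring methods and storing states of their arguments), the Java example below records a serializable argument as the monitored method runs and later replays the same call, reproducing the failure in a unit-test context. The recorder is manual and unoptimized; the dissertation's automated monitoring and state optimization are not shown, and all names here are illustrative.

    import java.io.*;
    import java.util.*;

    // Manual, illustrative capture-and-replay: a serializable argument is
    // recorded before the monitored method runs, then deserialized and replayed
    // to reproduce the failure in a generated test.
    public class CaptureReplay {
        private static final List<byte[]> log = new ArrayList<>();

        // Record a serializable argument of the monitored method.
        static void record(Serializable arg) {
            try {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                    oos.writeObject(arg);
                }
                log.add(bos.toByteArray());
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        // Replay: deserialize the captured argument for a generated test.
        static Object replay(int index) {
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(log.get(index)))) {
                return ois.readObject();
            } catch (IOException | ClassNotFoundException e) {
                throw new IllegalStateException(e);
            }
        }

        // Method under test: fails for empty input.
        static char firstChar(String s) { return s.charAt(0); }

        public static void main(String[] args) {
            String input = "";                                 // argument seen in deployment
            record(input);                                     // monitoring captures it
            try { firstChar(input); }                          // failing execution
            catch (Exception deployed) {
                String replayed = (String) replay(0);          // generated test replays the call
                try { firstChar(replayed); }
                catch (Exception reproduced) {
                    System.out.println("Reproduced: " + reproduced);
                }
            }
        }
    }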