
    Automatically generating complex test cases from simple ones

    While source code expresses and implements the design decisions of a software system, test cases capture and represent the domain knowledge of software developers, their assumptions about the implicit and explicit interaction protocols in the system, and the expected behavior of its modules under normal and exceptional conditions. Moreover, test cases capture information about the environment and the data the system operates on. As such, together with the system source code, test cases integrate important system and domain knowledge. Besides being an important project artifact, test cases account for up to half of the overall software development cost and effort. Software projects produce many test cases of different kinds and granularities to thoroughly check the system functionality, aiming to prevent, detect, and remove different types of faults. Simple test cases exercise small parts of the system, aiming to detect faults in single modules. More complex integration and system test cases exercise larger parts of the system, aiming to detect problems in module interactions and to verify the functionality of the system as a whole. Not surprisingly, test case complexity comes at a cost: developing complex test cases is a laborious and expensive task that is hard to automate. Our intuition is that important information naturally present in test cases can be reused to reduce the effort of generating new test cases. This thesis develops this intuition and investigates the phenomenon of information reuse among test cases. We first empirically investigated many test cases from real software projects and demonstrated that test cases of different granularity indeed share code fragments and build upon each other. We then proposed an approach for automatically generating complex test cases by extracting and exploiting information in existing simple ones. In particular, our approach automatically generates integration test cases from unit ones. We implemented our approach in a prototype to evaluate its ability to generate new and useful test cases for real software systems. Our studies show that test cases generated with our approach reveal new interaction faults even in well-tested applications. We evaluated the effectiveness of our approach by comparing it with state-of-the-art test generation techniques. The results show that our approach is effective: it finds relevant faults, while other approaches tend to find different and usually less relevant ones.
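
    To make the reuse idea concrete, here is a minimal, hypothetical sketch (all class and test names are invented; this is not the thesis prototype): setup code mined from one unit test is chained with the call exercised by another, yielding an integration test that crosses the module boundary.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal stand-in modules so the sketch compiles on its own.
class Cart {
    private double total = 0;
    void add(String name, double price) { total += price; }
    double total() { return total; }
}
class Checkout {
    double process(Cart cart) { return cart.total(); } // computes the order total
}

class GeneratedIntegrationTest {
    // Setup fragment lifted from a hypothetical CartTest.
    private Cart buildCart() {
        Cart cart = new Cart();
        cart.add("book", 12.50);
        return cart;
    }

    @Test
    void cartFlowsThroughCheckout() {
        // Chaining the CartTest fragment with the call exercised by a
        // hypothetical CheckoutTest covers the Cart-Checkout interaction
        // that neither unit test exercises on its own.
        double orderTotal = new Checkout().process(buildCart());
        assertEquals(12.50, orderTotal, 0.001);
    }
}
```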

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code of Java classes and uses a new lazy method to induce the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles as it goes. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk achieves the equivalent of fully automated state-based testing, from a specification acquired incrementally.
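
    JWalk itself is a full tool; the following is only a stripped-down sketch of the mechanism it builds on, namely bounded-exhaustive exploration of method sequences via Java reflection, with exceptions recorded as observations rather than failures. It is our illustration, not JWalk's implementation.

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Every sequence of zero-argument public methods, up to a fixed depth, is
// replayed on a fresh instance; exceptional outcomes are logged as observed
// behaviour rather than treated as test failures.
public class ProtocolWalk {

    public static void walk(Class<?> cls, int depth) throws Exception {
        List<Method> ops = new ArrayList<>();
        for (Method m : cls.getMethods()) {
            if (m.getParameterCount() == 0 && m.getDeclaringClass() == cls) {
                ops.add(m); // keep the sketch simple: no-arg methods only
            }
        }
        extend(cls, ops, new ArrayList<>(), depth);
    }

    private static void extend(Class<?> cls, List<Method> ops,
                               List<Method> seq, int remaining) throws Exception {
        if (remaining == 0) return;
        for (Method m : ops) {
            seq.add(m);
            Object target = cls.getDeclaredConstructor().newInstance();
            StringBuilder trace = new StringBuilder("new " + cls.getSimpleName() + "()");
            boolean completed = true;
            for (Method step : seq) {
                trace.append('.').append(step.getName()).append("()");
                try {
                    trace.append(" = ").append(step.invoke(target));
                } catch (InvocationTargetException e) {
                    trace.append(" threw ").append(e.getCause().getClass().getSimpleName());
                    completed = false;
                    break; // the sequence ends at the first exception
                }
            }
            System.out.println(trace);
            if (completed) extend(cls, ops, seq, remaining - 1);
            seq.remove(seq.size() - 1); // backtrack
        }
    }

    public static void main(String[] args) throws Exception {
        walk(java.util.Stack.class, 2); // observes, e.g., that pop() on an empty stack throws
    }
}
```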

    Improving Automated Software Testing while re-engineering legacy systems in the absence of documentation

    Legacy software systems are essential assets that contain an organization's valuable business logic. Because of the outdated technologies and methods used in these systems, they are challenging to maintain and expand. Organizations therefore need to decide whether to redevelop or re-engineer a legacy system. Although re-engineering is in most cases the safer and less expensive choice, it carries risks such as failing to meet the expected quality and delays caused by testing blockades. These risks are even more severe when the legacy system lacks adequate documentation. A comprehensive testing strategy, which includes automated tests and reliable test cases, can substantially reduce these risks. To mitigate the hazards associated with re-engineering, we conducted three studies in this thesis to improve the testing process. Our first study introduces a new testing model for the re-engineering process and investigates test automation solutions to detect defects in the early re-engineering stages. We implemented this model on the Cold Region Hydrological Model (CRHM) application and discovered bugs that would likely not have been found manually. Although this approach helped us discover a great number of software defects, designing test cases is very time-consuming due to the lack of documentation, especially for large systems. Therefore, in our second study, we investigated an approach to automatically generate test cases from user footprints. To do this, we extended an existing tool to collect user actions and legacy system reactions, including database and file system changes. We then analyzed the data based on the order and timing of user actions and generated human-readable test cases. Our evaluation shows that this approach can detect more bugs than other existing tools. Moreover, the test cases generated with this approach contain detailed oracles that make them suitable for both black-box and white-box testing. Many scientific legacy systems such as CRHM are data-driven: they take large amounts of data as input and produce massive outputs after applying mathematical models. Applying test cases and finding bugs is more demanding when dealing with such large amounts of data. Hence, in our third study, we created a comparative visualization tool (ComVis) to compare a legacy system's output after each change. Visualization helps testers find data issues resulting from newly introduced bugs. Twenty participants took part in a user study in which they were asked to find data issues using ComVis and the embedded CRHM visualization tool. Our user study shows that ComVis can find 51% more data issues than the visualization tool embedded in the legacy system. Also, results from the NASA-TLX assessment and a thematic analysis of open-ended questions about each task show that users prefer ComVis over the built-in visualization tool. We believe the approaches and systems introduced in this thesis will significantly reduce the risks associated with the re-engineering process.
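
    As an illustration only (names and output format invented; this is not the thesis tool), a footprint-based generator can be reduced to three steps: log timestamped user actions and system reactions, order them by time, and print the sequence as a human-readable test case whose oracles are the recorded reactions.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Records user actions and observed system reactions, then emits them in
// chronological order as a step-by-step, human-readable test case.
public class FootprintRecorder {

    record Event(Instant at, String kind, String detail) {}

    private final List<Event> log = new ArrayList<>();

    public void userAction(String detail)     { log.add(new Event(Instant.now(), "ACTION", detail)); }
    public void systemReaction(String detail) { log.add(new Event(Instant.now(), "EXPECT", detail)); }

    public void emitTestCase(String name) {
        log.sort(Comparator.comparing(Event::at)); // replay in recorded order
        System.out.println("Test case: " + name);
        int step = 1;
        for (Event e : log) {
            String verb = e.kind().equals("ACTION") ? "perform" : "verify";
            System.out.printf("  %d. %s %s%n", step++, verb, e.detail());
        }
    }

    public static void main(String[] args) {
        FootprintRecorder rec = new FootprintRecorder();
        rec.userAction("open model file basin.prj");          // hypothetical session
        rec.systemReaction("database row inserted into runs table");
        rec.userAction("click 'Run simulation'");
        rec.systemReaction("output file streamflow.csv created");
        rec.emitTestCase("run-simulation-from-project-file");
    }
}
```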

    The Oracle Problem in Software Testing: A Survey

    Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behaviour is called the “test oracle problem”. Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test oracle automation, a human has to determine whether observed behaviour is correct. The literature on test oracles has introduced techniques for oracle automation, including modelling, specifications, contract-driven development and metamorphic testing. When none of these is completely adequate, the final source of test oracle information remains the human, who may be aware of informal specifications, expectations, norms and domain-specific information that provide informal oracle guidance. All forms of test oracle, even the humble human, involve challenges of reducing cost and increasing benefit. This paper provides a comprehensive survey of current approaches to the test oracle problem and an analysis of trends in this important area of software testing research and practice.
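
    One of the surveyed techniques, metamorphic testing, is easy to illustrate with a self-contained example of our own: even without knowing the expected value of Math.sin for an arbitrary x, the relation sin(x) = sin(π - x) must hold, so a violation signals a fault without any explicit expected output.

```java
import java.util.Random;

// Checks a metamorphic relation of the sine function on random inputs:
// no per-input expected value is needed, only the relation itself.
public class MetamorphicSinCheck {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int i = 0; i < 1_000; i++) {
            double x = rnd.nextDouble() * 2 * Math.PI;
            double lhs = Math.sin(x);
            double rhs = Math.sin(Math.PI - x); // must equal sin(x)
            if (Math.abs(lhs - rhs) > 1e-9) {
                throw new AssertionError("Metamorphic relation violated at x = " + x);
            }
        }
        System.out.println("sin(x) == sin(PI - x) held on 1000 random inputs");
    }
}
```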

    Automating test oracles generation

    Software systems play an increasingly important role in our everyday life. Many relevant human activities nowadays involve the execution of a piece of software. Software has to be reliable to deliver the expected behavior, and assessing the quality of software is of primary importance to reduce the risk of runtime errors. Software testing is the most common quality-assessment technique for software. Testing consists of running the system under test on a finite set of inputs and checking the correctness of the results. Thoroughly testing a software system is expensive and requires a lot of manual work to define test inputs (the stimuli used to trigger different software behaviors) and test oracles (the decision procedures that check the correctness of the results). Researchers have addressed the cost of testing by proposing techniques to automatically generate test inputs. While the generation of test inputs is well supported, there is no comparable way to generate cost-effective test oracles: existing techniques to produce test oracles are either too expensive to be applied in practice, or produce oracles with limited effectiveness that can only identify blatant failures like system crashes. Our intuition is that cost-effective test oracles can be generated from information produced as a byproduct of normal development activities. The goal of this thesis is to create test oracles that can detect faults leading to semantic and non-trivial errors, and that have a reasonable generation cost. We propose two ways to generate test oracles: one derives oracles from software redundancy, the other from the natural language comments that document the source code of software systems. We present a technique that exploits redundant sequences of method calls encoding the software redundancy to automatically generate test oracles named CCOracles. We describe how CCOracles are automatically generated, deployed, and executed, and we demonstrate their effectiveness by measuring their fault-finding ability when combined with both automatically generated and hand-written test inputs. We also present Toradocu, a technique that derives executable specifications from the Javadoc comments of Java constructors and methods. From such specifications, Toradocu generates test oracles that are then deployed into existing test suites to assess the outputs of given test inputs. We empirically evaluate Toradocu, showing that it accurately translates Javadoc comments into procedure specifications and that its oracles effectively identify semantic faults in the system under test. CCOracles and Toradocu oracles stem from independent information sources and are complementary in the sense that they check different aspects of the system under test.
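
    The Toradocu idea can be illustrated with a hand-made example (this is our sketch, not Toradocu's actual output format): a @throws clause is an executable specification in disguise, and the derived oracle simply checks that the documented exception is raised under the documented condition.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

// A documented method: its Javadoc @throws clause doubles as a specification.
class Accounts {
    /**
     * Withdraws the given amount from the balance.
     *
     * @throws IllegalArgumentException if amount is negative
     */
    static double withdraw(double balance, double amount) {
        if (amount < 0) throw new IllegalArgumentException("negative amount");
        return balance - amount;
    }
}

class AccountsOracleTest {
    @Test
    void negativeAmountTriggersDocumentedException() {
        // Oracle derived from the Javadoc: amount < 0 ==> IllegalArgumentException
        assertThrows(IllegalArgumentException.class, () -> Accounts.withdraw(100.0, -5.0));
    }
}
```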

    Software Engineering for Science

    Software Engineering for Science provides an in-depth collection of peer-reviewed chapters that describe experiences with applying software engineering practices to the development of scientific software. It provides a better understanding of how software engineering is and should be practiced, and of which software engineering practices are effective for scientific software. The book starts with a detailed overview of the Scientific Software Lifecycle and a general overview of the scientific software development process. It highlights key issues commonly arising during scientific software development, as well as solutions to these problems. The second part of the book provides examples of the use of testing in scientific software development, including key issues and challenges. The chapters then describe solutions and case studies aimed at applying testing to scientific software development efforts. The final part of the book provides examples of applying software engineering techniques to scientific software, including not only computational modeling, but also software for data management and analysis. The authors describe their experiences and lessons learned from developing complex scientific software in different domains.

    Web browsing automation for applications quality control

    Context: Quality control comprises the set of activities that evaluate whether software meets its specification and delivers the functionality expected by its consumers. These activities are often dropped from the development process and, as a result, the final software product usually lacks quality. Objective: We propose a set of techniques to automate quality control for web applications from the client side, guided by functional and non-functional requirements (performance, security, compatibility, usability and accessibility). Method: The first step towards automation is to define the structure of the web navigation. Existing software artifacts from the analysis and design phases are reused. Then, the independent navigation paths are found, and each path is traversed automatically using real browsers while different kinds of assessments are carried out. Results: The processes and methods proposed in this paper have been implemented by means of a reference architecture and open-source tools. A laboratory experiment and an industrial case study were performed to validate the proposal. Conclusion: The definition of navigation paths is a rich approach to modeling web applications. Grey-box methods (combining black-box and white-box) have proved valuable for web assessment. The Chinese Postman Problem (CPP) provides an optimal way to find the independent paths in a web navigation modeled as a directed graph.
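
    The paper computes an optimal edge cover via the Chinese Postman Problem; as a simpler illustration of the same first step (ours, with invented page names), the sketch below models navigation as a directed graph and enumerates simple paths from the entry page. Each path would then be replayed in a real browser, e.g. via Selenium WebDriver, while the assessments run at every node.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Models web navigation as a directed graph of pages and prints every
// simple path from the entry page; cycles are cut by skipping visited pages.
public class NavigationPaths {

    private final Map<String, List<String>> edges = new LinkedHashMap<>();

    void link(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    void paths(String page, List<String> walked) {
        walked.add(page);
        List<String> next = edges.getOrDefault(page, List.of());
        if (next.isEmpty()) {
            System.out.println(String.join(" -> ", walked)); // one complete path
        } else {
            for (String n : next) {
                if (!walked.contains(n)) paths(n, walked); // avoid revisiting
            }
        }
        walked.remove(walked.size() - 1); // backtrack
    }

    public static void main(String[] args) {
        NavigationPaths nav = new NavigationPaths();
        nav.link("/login", "/home");     // hypothetical site map
        nav.link("/home", "/search");
        nav.link("/home", "/profile");
        nav.link("/search", "/results");
        nav.paths("/login", new ArrayList<>());
    }
}
```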

    A Survey on Metamorphic Testing
