JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction
Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
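The design-introspection step described in the abstract can be sketched with Java's standard reflection API. This is a minimal illustration of the general idea only, not JWalk's actual implementation; the class and method names here are hypothetical.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of design introspection via reflection: enumerate a
// compiled class's public method protocol without access to its source.
public class IntrospectDemo {
    // Return the names of the public methods declared by the class itself.
    static List<String> publicProtocol(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Inspect a standard library class as the "tested class".
        List<String> protocol = publicProtocol(StringBuilder.class);
        System.out.println(protocol.contains("append"));
    }
}
```

A tool built on this idea can then invoke the discovered methods in sequence to explore the class's protocol, which is the starting point for the bounded exhaustive exploration the abstract describes.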
An empirical investigation into branch coverage for C programs using CUTE and AUSTIN
Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool CUTE and the search-based tool AUSTIN on five non-trivial open-source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied 'as is', with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied 'out of the box' in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches, which have hitherto been developed entirely independently. (C) 2010 Elsevier Inc. All rights reserved.
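The search-based side of this comparison can be illustrated with the standard branch-distance fitness measure used in search-based testing. The following is a toy sketch of that idea, not AUSTIN's actual code: a hill climb minimises the distance to the true branch of a predicate `x == target`.

```java
// Toy illustration of search-based test data generation: minimise a
// branch-distance fitness to find an input that takes the true branch
// of `if (x == target)`.
public class BranchDistanceDemo {
    // Standard branch distance for the predicate x == target:
    // zero when the branch would be taken, |x - target| otherwise.
    static int distance(int x, int target) {
        return Math.abs(x - target);
    }

    // Simple hill climb from an arbitrary starting point: at each step,
    // move to whichever neighbour reduces the branch distance.
    static int search(int start, int target) {
        int x = start;
        while (distance(x, target) > 0) {
            x = distance(x + 1, target) < distance(x - 1, target) ? x + 1 : x - 1;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(search(0, 42)); // converges on 42
    }
}
```

Dynamic symbolic execution, by contrast, would solve the path constraint `x == 42` directly with a constraint solver; the hybridization the paper suggests combines the two styles.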
Dynamic data flow testing
Data flow testing is a particular form of testing that identifies data flow relations as test objectives. Data flow testing has recently attracted new interest in the context of testing object-oriented systems, since data flow information is well suited to capturing relations among object states, and can thus provide useful information for testing method interactions. Unfortunately, classic data flow testing, which is based on static analysis of the source code, fails to identify many important data flow relations due to the dynamic nature of object-oriented systems. This thesis presents Dynamic Data Flow Testing, a technique that rethinks data flow testing to suit the testing of modern object-oriented software. Dynamic Data Flow Testing stems from empirical evidence that we collect on the limits of classic data flow testing techniques. We investigate these limits by means of Dynamic Data Flow Analysis, a dynamic implementation of data flow analysis that computes sound data flow information on program traces. We compare data flow information collected by static analysis of the code with information observed dynamically on execution traces, and empirically observe that the information computed by classic analysis of the source code misses a significant part of the information corresponding to relevant behaviors that should be tested. In view of these results, we propose Dynamic Data Flow Testing. The technique promotes synergies between dynamic analysis, static reasoning, and test case generation to automatically extend a test suite with test cases that exercise the complex state-based interactions between objects. Dynamic Data Flow Testing computes precise data flow information for the program with Dynamic Data Flow Analysis, processes the dynamic information to infer new test objectives, and uses these objectives to generate new test cases.
The test cases generated by Dynamic Data Flow Testing exercise relevant behaviors that are otherwise missed by both the original test suite and test suites that satisfy classic data flow criteria.
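The central notion here is the def-use pair observed on a concrete execution trace. The sketch below is an illustrative simplification, not the thesis implementation: it instruments one field by hand and records where the field is defined and where it is used along a single trace.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a dynamically observed def-use pair: record the
// definition and use sites of a field along one concrete execution trace.
public class DefUseDemo {
    static final List<String> trace = new ArrayList<>();

    static int balance;

    static void deposit(int amount) {        // definition site of `balance`
        balance = balance + amount;
        trace.add("def:balance@deposit");
    }

    static int report() {                    // use site of `balance`
        trace.add("use:balance@report");
        return balance;
    }

    // Run one scenario and return the observed trace events.
    static List<String> runScenario() {
        trace.clear();
        deposit(10);
        report();
        return new ArrayList<>(trace);
    }

    public static void main(String[] args) {
        // The trace exposes one def-use pair for `balance`:
        // its definition in deposit() reaches its use in report().
        System.out.println(runScenario());
    }
}
```

A static analysis that cannot resolve which object a reference points to may miss such a pair; observing the trace makes the pair explicit and turns it into a candidate test objective.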
Capture-based Automated Test Input Generation
Testing object-oriented software is critical because object-oriented languages have been commonly used in developing modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important and yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise the new parts of the code. However, generating desirable objects has been a significant challenge for automated test input generation tools, partly because the search space for such desirable objects is huge.
To address this significant challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this proposed research are the following.
First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input.
Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can support existing automated test input generation tools, such as a random testing tool called Randoop, to achieve higher code coverage.
Third, CAPTIG statically analyzes the observed branches that have not yet been covered and attempts to exercise them by mutating existing inputs, based on weakest-precondition analysis. This technique also contributes to achieving higher code coverage.
Fourth, CAPTIG can be used to reproduce software crashes from crash stack traces. This feature can considerably reduce the cost of analyzing and removing the causes of crashes.
In addition, each CAPTIG technique can be applied independently to leverage existing testing techniques. We anticipate that our approach can achieve higher code coverage in less time and with fewer test inputs. To evaluate this new approach, we performed experiments with well-known large-scale open-source software and found that our approach helps achieve higher code coverage with less time and fewer test inputs.
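The method-sequence generation that CAPTIG builds on can be sketched in the style of feedback-directed random testing tools such as Randoop, which the abstract mentions. This is a hedged, minimal sketch of the general technique, not CAPTIG's actual algorithms: a receiver object is built by invoking a random but seed-reproducible sequence of its mutator methods.

```java
import java.util.Random;

// Sketch of random method-sequence generation: construct a receiver
// object by invoking a random sequence of its mutators. A fixed seed
// makes the generated sequence reproducible, so it can be replayed
// as a regression test.
public class SequenceGenDemo {
    static String randomSequence(long seed, int length) {
        Random rnd = new Random(seed);
        StringBuilder receiver = new StringBuilder();  // object under construction
        for (int i = 0; i < length; i++) {
            // Randomly pick one of two mutators to extend the sequence.
            if (rnd.nextBoolean()) {
                receiver.append('x');
            } else {
                receiver.reverse();
            }
        }
        return receiver.toString();
    }

    public static void main(String[] args) {
        // Same seed => same sequence => same resulting object state.
        System.out.println(randomSequence(7L, 5).equals(randomSequence(7L, 5)));
    }
}
```

CAPTIG's contribution, per the abstract, is to guide the input and method selection in such a loop, and to seed it with objects captured from real executions rather than constructed from scratch.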
Web and knowledge-based decision support system for measurement uncertainty evaluation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In metrology, measurement uncertainty is understood as a range within which the true value of the measurement is likely to fall. Recent years have seen rapid development in the evaluation of measurement uncertainty. The ISO Guide to the Expression of Uncertainty in Measurement (GUM 1995) is the primary guiding document for measurement uncertainty. More recently, Supplement 1 to the "Guide to the expression of uncertainty in measurement" – Propagation of distributions using a Monte Carlo method (GUM SP1) was published in November 2008. A number of software tools for measurement uncertainty have been developed and made available based on these two documents. Current software tools are mainly desktop applications utilising numeric computation with limited mathematical model handling capacity. A novel and generic web-based application, a web-based Knowledge-Based Decision Support System (KB-DSS), has been proposed and developed in this research for measurement uncertainty evaluation. A Model-View-Controller architecture pattern is used for the proposed system. Under this general architecture, a web-based KB-DSS is developed based on an integration of the Expert System and Decision Support System approaches. In the proposed uncertainty evaluation system, three knowledge bases are developed as sub-systems to implement the evaluation of measurement uncertainty. The first sub-system, the Measurement Modelling Knowledge Base (MMKB), assists the user in establishing the appropriate mathematical model for the measurand, a critical process for uncertainty evaluation.
The second sub-system, the GUM Framework Knowledge Base, carries out the uncertainty evaluation process based on the GUM Uncertainty Framework using symbolic computation, whilst the third sub-system, the GUM SP1 MCM Framework Knowledge Base, conducts the uncertainty calculation numerically according to the GUM SP1 Framework, based on the Monte Carlo Method. The design and implementation of the proposed system and sub-systems are discussed in the thesis, supported by elaboration of the implementation steps and examples. Discussions and justifications of the technologies and approaches used for the sub-systems and their components are also presented; these include Drools, Oracle database, Java, JSP, Java Transfer Object, AJAX and Matlab. The proposed web-based KB-DSS has been evaluated through case studies, and the performance of the system has been validated by the example results. As an established methodology and practical tool, the research will make valuable contributions to the field of measurement uncertainty evaluation.
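The GUM SP1 idea of propagating distributions by Monte Carlo simulation can be shown in a few lines. The sketch below is a minimal illustration of the principle, not the thesis system: two independent normal input quantities are propagated through the hypothetical measurement model Y = X1 + X2, and the standard uncertainty of Y is estimated from the simulated sample.

```java
import java.util.Random;

// Minimal illustration of GUM SP1-style propagation of distributions:
// draw Monte Carlo samples of the input quantities, evaluate the
// measurement model for each draw, and take the sample standard
// deviation of the outputs as the standard uncertainty of Y.
public class MonteCarloUncertainty {
    static double standardUncertainty(long seed, int trials) {
        Random rnd = new Random(seed);
        double sum = 0, sumSq = 0;
        for (int i = 0; i < trials; i++) {
            double x1 = 1.0 + 0.1 * rnd.nextGaussian(); // X1 ~ N(1.0, 0.1^2)
            double x2 = 2.0 + 0.2 * rnd.nextGaussian(); // X2 ~ N(2.0, 0.2^2)
            double y = x1 + x2;                         // measurement model
            sum += y;
            sumSq += y * y;
        }
        double mean = sum / trials;
        return Math.sqrt(sumSq / trials - mean * mean); // sample std. dev.
    }

    public static void main(String[] args) {
        // For this linear model, the GUM framework gives the same answer
        // analytically: u(Y) = sqrt(0.1^2 + 0.2^2) ≈ 0.2236.
        System.out.println(standardUncertainty(1L, 200_000));
    }
}
```

For nonlinear models or non-normal inputs, the analytic GUM framework becomes an approximation while the Monte Carlo estimate remains valid, which is why GUM SP1 complements the original GUM approach.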