
    Systematic literature review on search based software testing

    Random search performs poorly at finding solutions when those solutions occupy a very small part of the overall search space; test data may be found faster and more reliably if the search is given some guidance. This paper surveys the application of metaheuristic techniques in search-based software testing. It systematically reviews 47 papers, selected at random from online databases and conference proceedings, with respect to the metaheuristic search techniques most widely applied to the problem, the fitness functions used for test data selection in each technique, and the limitations of each search-based technique for software testing. It was found that genetic algorithms (GA) outperformed simulated annealing (SA), hill climbing (HC), genetic programming (GP), and random search in generating test data automatically; that a variety of approaches were used to ensure that test data are selected quickly and with wide path coverage, depending on the fitness function; and that the most commonly reported limitations concern the handling of complex data types, such as arrays and object types, and achieving branch coverage. The paper also identifies areas of possible future work on the use of metaheuristic techniques in search-based software testing.
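    The survey's core contrast, guided search versus blind random search, can be made concrete with the classic branch-distance fitness function. The sketch below is illustrative only and is not drawn from any of the reviewed papers: a simple hill climber (one of the HC techniques the review compares) searches for an input satisfying a hypothetical branch condition `x == 4242`, a target random sampling would rarely hit in a domain of a million values.

```python
import random

# Hypothetical branch under test: taken only when x == 4242, so blind random
# search over a million values needs ~10^6 tries on average to cover it.
TARGET = 4242
DOMAIN = range(0, 1_000_000)

def branch_distance(x: int) -> int:
    """Classic SBST fitness: distance from satisfying the predicate x == TARGET.
    A fitness of 0 means the branch is covered."""
    return abs(x - TARGET)

def hill_climb(max_steps: int = 20_000) -> int | None:
    """Hill climbing guided by branch distance; the guidance is what random
    search lacks."""
    x = random.choice(DOMAIN)
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            return x
        # Examine nearby candidates and move to the fittest (lowest distance).
        neighbours = [x - 1, x + 1, x - 100, x + 100]
        best = min(neighbours, key=branch_distance)
        if branch_distance(best) >= branch_distance(x):
            x = random.choice(DOMAIN)  # stuck on a plateau: random restart
        else:
            x = best
    return None

print(hill_climb())  # typically finds 4242 within a few thousand guided steps
```

    The same fitness-versus-randomness framing carries over to GA, SA, and GP; they differ only in how candidate inputs are varied and selected between evaluations.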

    The effectiveness of an enhanced weighted method with a unique priority value for test case prioritization in regression testing

    Regression testing is an important but costly strategy in software testing. To decrease its cost, many methods have been proposed, including test case prioritization methods, which aim to define an order of test cases that achieves higher coverage and earlier fault detection with a minimal number of executed test cases. However, most existing test case prioritization methods sort test cases randomly when two or more record equal priority values. This paper proposes an enhanced weighted method with a unique priority value, UniVal, and evaluates its effectiveness. UniVal prioritizes test cases based on code coverage criteria together with information from the history of previous executions. A controlled experiment was conducted, and the results were statistically analyzed to assess the effectiveness of the proposed method. The results indicate better performance in prioritizing test cases and higher APFD values.
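    The APFD metric the abstract reports (Average Percentage of Faults Detected) has a standard closed form: for n test cases and m faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the 1-based position of the first test in the order that reveals fault i. The sketch below computes it for a toy fault matrix; the test names and fault data are invented, and it assumes every fault is detected by some test in the order.

```python
def apfd(order, faults_detected_by):
    """Average Percentage of Faults Detected for a given test execution order.

    order: list of test-case ids in execution order (length n)
    faults_detected_by: dict mapping each test id to the set of faults it reveals
    """
    n = len(order)
    all_faults = set().union(*faults_detected_by.values())
    m = len(all_faults)
    # TF_i: 1-based position of the first test that reveals fault i.
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected_by.get(test, set()):
            first_pos.setdefault(fault, pos)
    tf_sum = sum(first_pos[f] for f in all_faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Toy example: running T3 first reveals both faults immediately, so that
# ordering scores a higher APFD than leaving T3 last.
detects = {"T1": {"f1"}, "T2": set(), "T3": {"f1", "f2"}}
print(apfd(["T1", "T2", "T3"], detects))  # 0.5
print(apfd(["T3", "T1", "T2"], detects))  # ~0.83
```

    A higher APFD means faults are exposed earlier in the run, which is exactly what a prioritization method such as UniVal is trying to maximize.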

    Capture-based Automated Test Input Generation

    Testing object-oriented software is critical because object-oriented languages are commonly used to develop modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. One important yet challenging problem, therefore, is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code, but generating them has been a significant challenge for automated test input generation tools, partly because the search space for such objects is huge. To address this challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this research are the following. First, CAPTIG enhances method-sequence generation techniques: our approach introduces a set of new algorithms for guided input and method selection that increase code coverage, and CAPTIG efficiently reduces the amount of generated input. Second, CAPTIG captures objects dynamically from program execution during either system testing or real use; these captured inputs can support existing automated test input generation tools, such as the random testing tool Randoop, in achieving higher code coverage. Third, CAPTIG statically analyzes observed branches that have not been covered and attempts to exercise them by mutating existing inputs based on weakest-precondition analysis, which also contributes to higher code coverage. Fourth, CAPTIG can reproduce software crashes from a crash stack trace, which can considerably reduce the cost of analyzing and removing the causes of crashes. In addition, each CAPTIG technique can be applied independently to leverage existing testing techniques. We anticipated that the approach would achieve higher code coverage in less time and with fewer test inputs; to evaluate it, we performed experiments with well-known large-scale open-source software and found that it indeed helps achieve higher code coverage in less time and with fewer test inputs.
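    The abstract does not show how the capture step works, so the following is a speculative Python analogue of the idea rather than CAPTIG's actual implementation (CAPTIG targets object-oriented languages such as Java): a wrapper snapshots every receiver and argument observed during real execution into a pool that a Randoop-style random generator could later draw inputs from. The decorator, the `OBJECT_POOL`, and the `Account` class are all invented for illustration.

```python
import copy
import functools

# Hypothetical capture pool: deep copies of receivers and arguments seen at
# runtime, available later as seeds for an automated test input generator.
OBJECT_POOL: list[object] = []

def capture_inputs(method):
    """Wrap a method so each call records its receiver and arguments."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # Snapshot before the call, since the method may mutate its inputs.
        OBJECT_POOL.append(copy.deepcopy(self))
        OBJECT_POOL.extend(copy.deepcopy(a) for a in args)
        return method(self, *args, **kwargs)
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance

    @capture_inputs
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Running the system (or its system tests) populates the pool with realistic,
# hard-to-construct objects that pure random generation would rarely produce.
acct = Account(100)
acct.withdraw(30)
print(len(OBJECT_POOL), OBJECT_POOL[0].balance)  # 2 captured inputs; balance 100
```

    The point of the design is that states reached during real use are, by construction, feasible and realistic, which is precisely what randomly assembled method sequences struggle to produce.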

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features of the kind routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance; it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry leads to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that faulty units are discarded. This raises the question of the optimum size of a unit. A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of such parameters as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the frequency of periodic testing. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
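    The abstract does not reproduce the yield and reliability model itself, so the sketch below substitutes a generic textbook formulation purely to illustrate the trade-off the thesis studies: a Poisson per-unit yield combined with k-out-of-n binomial survival of a redundant block, with a multiplicative area overhead standing in for the test and reconfiguration circuitry. Every function and parameter value here is an assumption for illustration, not the thesis's actual model.

```python
from math import comb, exp

def unit_yield(area_mm2: float, defect_density_per_mm2: float) -> float:
    """Poisson yield model (assumed): probability a single unit is defect-free."""
    return exp(-area_mm2 * defect_density_per_mm2)

def block_yield(n: int, k: int, unit_area: float, defects: float,
                overhead: float = 0.15) -> float:
    """Yield of a block that needs k working units out of n fabricated units.
    `overhead` inflates each unit's area to model the extra testing and
    reconfiguration logic; all parameter values are illustrative."""
    p = unit_yield(unit_area * (1 + overhead), defects)
    # Block works if at least k of the n independent units are fault-free.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Adding spare units raises block yield, but each unit pays the overhead tax,
# which is the unit-sizing trade-off the thesis's model is built to explore.
for n, k in [(8, 8), (9, 8), (10, 8)]:
    print(n, k, round(block_yield(n, k, unit_area=4.0, defects=0.05), 3))
```

    Running this shows yield climbing steeply as spares are added, which is why the size of the reconfiguration logic, the overhead term above, places such a stringent limit on how finely the circuit can be partitioned.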