
    Coevolution of Second-order-mutant

    One of the obstacles that hinder the use of mutation testing is its impracticality; two main contributors to this are the large number of mutants and the large number of test cases involved in the process. Researchers usually address this problem by optimizing the mutants and the test cases separately. In this research, we tackle mutant optimization and test-case optimization simultaneously using a coevolution optimization method. The coevolution method is chosen for the mutation testing problem because it works by optimizing multiple collections (populations) of solutions. This research found that coevolution is better suited to multi-problem optimization than single-population methods (e.g. the Genetic Algorithm), and we also propose a new indicator to determine the optimal coevolution cycle. Experiments are conducted on an artificial case, a laboratory case, and a real case.
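
    As a rough illustration of the coevolutionary setup described above, the sketch below evolves a mutant population and a test-case population against each other: mutants are rewarded for surviving tests, tests are rewarded for killing mutants, and each cycle selects and varies both populations. The `kills(mutant, test)` oracle, the fitness definitions, and the selection scheme are illustrative assumptions, not the authors' implementation.

import random

# Sketch of coevolving a mutant population against a test-case population.
# `kills(mutant, test)` is a hypothetical oracle that runs a test against a
# mutant; mutants and tests are assumed to be hashable identifiers.

def coevolve(mutants, tests, kills, cycles=20):
    for _ in range(cycles):
        # Mutant fitness: hard-to-kill mutants (killed by few tests) score high.
        m_fit = {m: 1.0 - sum(kills(m, t) for t in tests) / len(tests)
                 for m in mutants}
        # Test fitness: tests that kill many mutants score high.
        t_fit = {t: sum(kills(m, t) for m in mutants) / len(mutants)
                 for t in tests}
        mutants = select_and_vary(mutants, m_fit)
        tests = select_and_vary(tests, t_fit)
    return mutants, tests

def select_and_vary(pop, fitness, vary=lambda x: x):
    # Keep the fitter half and refill with (here: identity) variations of survivors;
    # real variation operators are omitted for brevity.
    survivors = sorted(pop, key=lambda x: fitness[x], reverse=True)[:len(pop) // 2]
    offspring = [vary(random.choice(survivors))
                 for _ in range(len(pop) - len(survivors))]
    return survivors + offspring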

    Evolutionary computing driven search based software testing and correction

    For a given program, testing, locating the identified errors, and correcting those errors is a critical, yet expensive process. The field of Search Based Software Engineering (SBSE) addresses these phases by formulating them as search problems. This dissertation addresses these challenging problems through the use of two complementary evolutionary computing based systems. The first is the Fitness Guided Fault Localization (FGFL) system, which novelly uses a specification-based fitness function to perform fault localization. The second is the Coevolutionary Automated Software Correction (CASC) system, which employs a variety of evolutionary computing techniques to perform testing, correction, and verification of software. In support of the real-world application of these systems, a practitioner's guide to fitness function design is provided. For the FGFL system, experimental results are presented that demonstrate the applicability of fitness guided fault localization to automate this important phase of software correction in general, and the potential of the FGFL system in particular. For the fitness function design guide, the performance of a guide-generated fitness function is compared to that of an expert-designed fitness function, demonstrating the competitiveness of the guide-generated fitness function. For the CASC system, results are presented that demonstrate the system's abilities on a series of problems of both increasing size and increasing number of bugs present. The system presented solutions more than 90% of the time for versions of the programs containing one or two bugs. Additionally, scalability results are presented for the CASC system that indicate that the success rate decreases linearly with problem size and that the estimated convergence rate scales at worst linearly with problem size.
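
    The specification-based, fitness-guided idea behind FGFL can be pictured with a small sketch: each test execution receives a fitness score from a specification check, and statements covered by low-fitness (near-failing) runs accumulate suspicion. The scoring scheme and data layout below are assumptions for illustration only, not the FGFL algorithm itself.

# Hedged sketch of specification-based, fitness-guided fault localization.
# Each execution is a (covered_statements, spec_fitness) pair, where
# spec_fitness is 1.0 for fully correct behaviour and lower otherwise.

def localize(executions):
    suspicion = {}
    for covered, fitness in executions:
        weight = 1.0 - fitness              # worse runs contribute more suspicion
        for stmt in covered:
            suspicion[stmt] = suspicion.get(stmt, 0.0) + weight
    # Rank statements from most to least suspicious.
    return sorted(suspicion, key=suspicion.get, reverse=True)

# Example: statement 42 is covered only by a badly failing run, so it ranks first.
ranking = localize([({42}, 0.2), ({10, 11}, 1.0)])
print(ranking[0])  # 42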

    Using Evolutionary Mutation Testing to improve the quality of test suites

    Mutation testing is a method used to assess and improve the fault detection capability of a test suite by creating faulty versions, called mutants, of the system under test. Evolutionary Mutation Testing (EMT), like selective mutation or mutant sampling, was proposed to reduce the computational cost, which is a major concern when applying mutation testing. This technique implements an evolutionary algorithm to produce a reduced subset of mutants with a high proportion of mutants that can help the tester derive new test cases (strong mutants). In this paper, we go a step further in estimating the ability of this technique to induce the generation of test cases. Instead of measuring the percentage of strong mutants within the subset of generated mutants, we compute how much the test suite is actually improved thanks to those mutants. In our experiments, we have compared the extent to which EMT and the random selection of mutants help to find missing test cases in C++ object-oriented systems. We conclude from our results that EMT requires generating a lower percentage of mutants than the random strategy to obtain a test suite of the same size, and that the technique scales better for complex programs.
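
    A minimal sketch of an EMT-style loop follows, under the assumption that mutants are encoded as (operator, location) pairs and that a hypothetical `run_suite` hook reports whether the current test suite kills a mutant; surviving mutants are collected as candidates for deriving new test cases. The encoding, fitness, and selection details are illustrative, not those of the tooling evaluated in the paper.

import random

# EMT-style sketch: evolve (operator, location) mutant encodings and reward
# mutants that the current test suite fails to kill. `run_suite(mutant)` is a
# hypothetical hook into a mutation tool that returns True if the mutant is killed.

def emt(operators, locations, run_suite, generations=30, pop_size=20):
    population = [(random.choice(operators), random.choice(locations))
                  for _ in range(pop_size)]
    strong = set()
    for _ in range(generations):
        scored = []
        for mutant in population:
            killed = run_suite(mutant)
            if not killed:
                strong.add(mutant)          # surviving mutants suggest missing tests
            scored.append((0.0 if killed else 1.0, mutant))
        # Tournament selection plus a simple mutation of the location gene.
        parents = [max(random.sample(scored, 2), key=lambda s: s[0])[1]
                   for _ in range(pop_size)]
        population = [(op, random.choice(locations)) if random.random() < 0.3
                      else (op, loc)
                      for op, loc in parents]
    return strong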

    GiGAn: evolutionary mutation testing for C++ object-oriented systems

    The reduction of the expenses of mutation testing should be based on well-studied cost reduction techniques to avoid biased results. Evolutionary Mutation Testing (EMT) aims at generating a reduced set of mutants by means of an evolutionary algorithm, which searches for potentially equivalent and difficult-to-kill mutants to help improve the test suite. However, there is little evidence of its applicability to contexts beyond WS-BPEL compositions. This study explores its performance when applied to C++ object-oriented programs thanks to a newly developed system, GiGAn. The conducted experiments reveal that EMT shows stable behavior in all the case studies, where the best results are obtained when a low percentage of the mutants is generated. They also support previous studies of EMT when compared to random mutant selection, reinforcing its use for the goal of improving the fault detection capability of the test suite.

    Bio-inspired FPGA Architecture for Self-Calibration of an Image Compression Core based on Wavelet Transforms in Embedded Systems

    A generic bio-inspired adaptive architecture for image compression suitable for implementation in embedded systems is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm and its typical genetic operators adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an unknown environment at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
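
    A rough software model of the kind of (mu + lambda) Evolution Strategy described above, tuning wavelet filter coefficients against a compression-quality fitness, is sketched below. The real-valued encoding, the fixed Gaussian mutation step, and the `compression_quality` fitness hook are assumptions; the paper's hardware-friendly operators are not reproduced here.

import random

# (mu + lambda) Evolution Strategy sketch for tuning wavelet filter coefficients.
# `compression_quality(coeffs)` is an assumed fitness hook, e.g. rate-distortion
# of the evolved transform measured on calibration images.

def evolve_coefficients(compression_quality, n_coeffs=8, mu=4, lam=12,
                        sigma=0.05, generations=50):
    parents = [[random.uniform(-1.0, 1.0) for _ in range(n_coeffs)]
               for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            base = random.choice(parents)
            # Gaussian mutation with a fixed step size.
            offspring.append([c + random.gauss(0.0, sigma) for c in base])
        # (mu + lambda) survivor selection: keep the best of parents and offspring.
        pool = parents + offspring
        pool.sort(key=compression_quality, reverse=True)
        parents = pool[:mu]
    return parents[0]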

    DIFFERENTIAL EVOLUTION-BASED METHODS FOR NUMERICAL OPTIMIZATION

    Ph.D. (Doctor of Philosophy) thesis.

    A Multi-Level Framework for the Detection, Prioritization and Testing of Software Design Defects

    Large-scale software systems exhibit high complexity and become difficult to maintain. In fact, it has been reported that the software cost dedicated to maintenance and evolution activities is more than 80% of total software costs. In particular, object-oriented software systems need to follow some traditional design principles such as data abstraction, encapsulation, and modularity. However, some of these non-functional requirements can be violated by developers for many reasons, such as inexperience with object-oriented design principles or deadline stress. This high cost of maintenance activities could potentially be greatly reduced by providing automatic or semi-automatic solutions to increase a system's comprehensibility, adaptability and extensibility and to avoid bad practices. The detection of refactoring opportunities focuses on the detection of bad smells, also called antipatterns, which have been recognized as design situations that may indirectly cause software failures. The correction of one bad smell may influence other bad smells; thus, the order in which bad smells are fixed is important to reduce the effort and maximize the refactoring benefits. However, very few studies have addressed the problem of finding the optimal sequence in which refactoring opportunities, such as bad smells, should be ordered. A few other studies have tried to prioritize refactoring opportunities based on the types of bad smells to determine their severity. However, the correction of severe bad smells may require a high effort, which should be optimized, and the relationships between the different bad smells are not considered during the prioritization process. The main goal of this research is to help software engineers refactor large-scale systems with minimum effort and few interactions, including the detection, management and testing of refactoring opportunities. We report the results of an empirical study with an implementation of our bi-level approach. The obtained results provide evidence to support the claim that our proposal is more efficient, on average, than existing techniques based on a benchmark of 9 open source systems and 1 industrial project. We have also evaluated the relevance and usefulness of the proposed bi-level framework for software engineers to improve the quality of their systems and support the detection of transformation errors by generating efficient test cases.
    Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136075/1/Dilan_Sahin_Final Dissertation.pdf
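
    In the spirit of the bi-level framework described above, a hedged sketch: an upper level proposes an ordering of refactorings for the detected bad smells, while a lower level searches for test cases that expose transformation errors introduced by that ordering. All function names and the scoring are illustrative assumptions, not the dissertation's actual algorithms.

import random

# Bi-level sketch: the upper level proposes an ordering of refactorings and the
# lower level adversarially looks for test cases that the ordering breaks.
# `quality_gain(order)` and `find_breaking_tests(order)` are assumed hooks.

def bi_level_refactor(refactorings, quality_gain, find_breaking_tests,
                      iterations=100):
    best_order, best_score = None, float('-inf')
    for _ in range(iterations):
        order = random.sample(refactorings, len(refactorings))   # upper level: candidate ordering
        breaking_tests = find_breaking_tests(order)              # lower level: adversarial tests
        # Reward quality improvement, penalise orderings that break behaviour.
        score = quality_gain(order) - len(breaking_tests)
        if score > best_score:
            best_order, best_score = order, score
    return best_order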