    Developing New Multidimensional Knapsack Heuristics Based on Empirical Analysis of Legacy Heuristics

    The multidimensional knapsack problem (MKP) has been used to model a variety of practical optimization and decision-making applications. Due to its combinatorial nature, heuristics are often employed to quickly find good solutions to MKPs. While a variety of heuristics have been proposed for the MKP, and a plethora of empirical studies have compared their performance, little has been done to gain a deeper understanding of heuristic performance as a function of problem structure. This dissertation presents a research methodology, together with empirical and theoretical results, explicitly aimed at understanding heuristic performance as a function of test problem characteristics. The work first employs an available, robust set of two-dimensional knapsack problems in an empirical study to garner performance insights. These insights are then tested against a larger set of five-dimensional knapsack problems generated specifically for empirical testing, and are found to hold in the higher dimensions. The insights are used to formulate and test a suite of three new greedy heuristics for the MKP, each improving upon its predecessor; these heuristics outperform available legacy heuristics across a complete spectrum of test problems. Problem reduction heuristics are also examined, and the resulting performance insights are used to derive a new problem reduction heuristic, which is then extended with a local improvement phase; these problem reduction heuristics likewise outperform currently available approaches. Available problem test sets are shown to be lacking along multiple dimensions of importance for viable empirical testing. A new problem generation methodology is developed, shown to overcome these limitations, and used to generate a new set of test problems specifically designed for competitive computational tests. This new test set is shown to stress existing heuristics: not only does the computational time required by legacy heuristics increase with problem size, but their solution quality decreases with problem size. In contrast, the solution quality obtained by the suite of heuristics developed in this dissertation is shown to be unaffected by problem size, providing a level of robust solution quality not previously seen in heuristic development for the MKP. This research demonstrates that test problems can have a profound, and sometimes misleading, impact on the general insights gained via empirical testing; it provides six new high-quality heuristics and two new robust sets of test problems, one focused on empirical testing and the other on competitive testing.
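    The dissertation's new heuristics are not spelled out in the abstract, but they build on the classic greedy idea of ranking items by profit per unit of capacity-normalised resource consumption. Below is a minimal sketch of that baseline ratio-based greedy procedure for the MKP; the ranking rule and tie-breaking are illustrative assumptions, not the dissertation's exact procedures.

```python
def greedy_mkp(profits, weights, capacities):
    """Baseline ratio-based greedy heuristic for the MKP (illustrative,
    not the dissertation's heuristics).

    profits[j]    : profit of item j
    weights[i][j] : consumption of resource i by item j
    capacities[i] : capacity of resource i
    """
    n, m = len(profits), len(capacities)

    def ratio(j):
        # Profit per unit of capacity-normalised resource use.
        load = sum(weights[i][j] / capacities[i] for i in range(m))
        return profits[j] / load if load > 0 else float("inf")

    remaining = list(capacities)
    chosen, total = [], 0
    for j in sorted(range(n), key=ratio, reverse=True):
        # Add the next-best item only if it still fits in every dimension.
        if all(weights[i][j] <= remaining[i] for i in range(m)):
            for i in range(m):
                remaining[i] -= weights[i][j]
            chosen.append(j)
            total += profits[j]
    return total, chosen
```

    For instance, greedy_mkp([10, 7, 4], [[8, 5, 4]], [10]) packs items 1 and 2 for a total profit of 11, skipping item 0 once its weight no longer fits.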

    Evaluating Random Mutant Selection at Class-Level in Projects with Non-Adequate Test Suites

    Mutation testing is a standard technique to evaluate the quality of a test suite. Due to its computationally intensive nature, many approaches have been proposed to make this technique feasible in real-world scenarios. Among these, uniform random mutant selection has been shown to be simple and promising. However, prior work in this area analyzes mutant samples at the project level, mainly on projects with adequate test suites. In this paper, we fill this gap in empirical validation by analyzing random mutant selection at the class level on projects with non-adequate test suites. First, we show that uniform random mutant selection falls short of the expected results. Then, we propose a new approach, named weighted random mutant selection, which generates more representative mutant samples. Finally, we show that representative mutant samples are larger for projects with high test adequacy.
    Comment: EASE 2016, Article 11, 10 pages.
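    A minimal sketch of the weighted sampling step, assuming a caller-supplied weight function (for instance, weighting each mutant by the size of the class it mutates); the paper's exact weighting scheme is not reproduced here.

```python
import random

def weighted_mutant_sample(mutants, weight, k, seed=None):
    """Sample up to k mutants without replacement, with selection
    probability proportional to weight(mutant) at each draw."""
    rng = random.Random(seed)
    pool = list(mutants)
    sample = []
    while pool and len(sample) < k:
        weights = [weight(m) for m in pool]
        idx = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        sample.append(pool.pop(idx))
    return sample
```

    Passing a constant weight function recovers uniform random selection, which makes the two strategies easy to compare on the same mutant pool.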

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically high-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions for improvement. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
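    A test similarity map rests on a matrix of pair-wise diversity values. Below is a minimal sketch, assuming each test is represented as a set of features (e.g., covered requirements or test-script tokens) and using Jaccard distance; the paper's feature extraction and the projection onto a visual map are not shown.

```python
def jaccard_distance(a, b):
    """Pair-wise diversity: 1.0 means the tests share no features."""
    a, b = set(a), set(b)
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def diversity_matrix(tests):
    """Pair-wise diversity for a dict mapping test name -> features."""
    names = list(tests)
    return {(x, y): jaccard_distance(tests[x], tests[y])
            for x in names for y in names}
```

    Pairs with distance near zero point at redundant tests, which is the kind of repository issue the similarity maps helped practitioners spot.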

    LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems

    Mutation testing is a well-studied method for increasing the quality of a test suite. We designed LittleDarwin as a mutation testing framework able to cope with large and complex Java software systems, while still being easily extensible with new experimental components. LittleDarwin addresses two existing problems in the domain of mutation testing: providing a tool able to work in an industrial setting, and yet remaining open to extension with cutting-edge techniques from academia. LittleDarwin already offers higher-order mutation, null type mutants, mutant sampling, manual mutation, and mutant subsumption analysis. No tool available today offers all these features while being able to work with typical industrial software systems.
    Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering.
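    The sketch below is not LittleDarwin's implementation or API; it is only a toy illustration of first-order mutant generation (one mutant per mutated location), here via arithmetic-operator replacement on raw source text.

```python
import re

# Swap each arithmetic operator for a counterpart to create one mutant
# per location. Naive on purpose: it would also mutate operators inside
# comments and string literals, which a real tool must avoid.
OPERATOR_SWAPS = {"+": "-", "-": "+", "*": "/", "/": "*"}

def generate_mutants(source):
    """Yield (line_number, mutated_source) pairs for a source string."""
    for match in re.finditer(r"[+\-*/]", source):
        mutated = (source[:match.start()]
                   + OPERATOR_SWAPS[match.group()]
                   + source[match.end():])
        line_number = source.count("\n", 0, match.start()) + 1
        yield line_number, mutated
```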

    Time-Space Efficient Regression Testing for Configurable Systems

    Configurable systems are those that can be adapted through a set of options. They are prevalent, and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously developed technique, SPLat, which explores all dynamically reachable configurations from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes, whereas EvoSPLat for RTS prunes tests (not configurations) that are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising. We observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced running time by 35%. Comparing EvoSPLat with sampling techniques, 2-wise sampling was the most efficient technique, but it missed two bugs, whereas EvoSPLat detected all bugs four times faster than 6-wise sampling, on average.
    Comment: 14 pages.
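    EvoSPLat's algorithm is not given in the abstract; the sketch below only illustrates the shared change-impact idea behind both scenarios: pruning tests (RTS), or symmetrically configurations (RCS), whose reachable code does not intersect the change set. All names here are hypothetical.

```python
def select_impacted(entities, changed_files):
    """Keep only entities (tests for RTS, configurations for RCS) whose
    reachable files overlap the change set. `entities` maps each name to
    the files it can reach; computing that reachability dynamically is
    where the real technique does its work."""
    changed = set(changed_files)
    return [name for name, reach in entities.items()
            if changed & set(reach)]
```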