
    Construction of Prioritized T-Way Test Suite Using Bi-Objective Dragonfly Algorithm

    Software testing is important for ensuring the reliability of software systems. In software testing, effective test case generation is essential as an alternative to exhaustive testing. To advance software testing technology, the t-way testing technique combined with metaheuristic algorithms has proven effective at analysing large numbers of value combinations to find optimal solutions. However, most existing t-way strategies consider only test case weights when generating test suites. Test case priority has not been fully considered in previous work, yet in practice it is frequently necessary to distinguish between high-priority and low-priority test cases, so test case prioritization is highly significant. For this reason, this paper proposes a t-way strategy that implements an adaptive Dragonfly Algorithm (DA) to construct prioritized t-way test suites. Both test case weight and test case priority carry equal significance during test suite generation in this strategy. We have designed and implemented a Bi-objective Dragonfly Algorithm (BDA) for prioritized t-way test suite generation, with test case weight and test case priority as the two objectives. The test results demonstrate that BDA performs competitively against existing t-way strategies in terms of test suite size and, in addition, generates prioritized test suites. ©2022 Authors. Published by IEEE. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ Peer reviewed.
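To make the t-way coverage goal concrete (this is an illustrative sketch of the underlying coverage criterion, not the authors' BDA implementation), the toy Python below enumerates all 2-way (pairwise) interactions for a hypothetical three-parameter system and checks how many of them a candidate test suite covers:

```python
from itertools import combinations, product

def pairwise_interactions(domains):
    """All 2-way (parameter-pair, value-pair) interactions to be covered."""
    pairs = set()
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        for vi, vj in product(di, dj):
            pairs.add((i, vi, j, vj))
    return pairs

def covered(test_suite):
    """Interactions actually exercised by a list of concrete test cases."""
    got = set()
    for test in test_suite:
        for (i, vi), (j, vj) in combinations(enumerate(test), 2):
            got.add((i, vi, j, vj))
    return got

# Hypothetical system: 3 parameters with 2 values each -> 8 exhaustive tests,
# but only 3 parameter pairs x 4 value pairs = 12 pairwise interactions.
domains = [[0, 1], [0, 1], [0, 1]]
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # only 4 tests
target = pairwise_interactions(domains)
print(len(target), len(covered(suite) & target))  # 12 12 -> full pairwise coverage
```

The 4-test suite above achieves full pairwise coverage with half the exhaustive test count; a t-way strategy such as BDA searches for small suites like this (while additionally optimising weight and priority).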

    An adaptive opposition-based learning selection: the case for Jaya algorithm

    Over the years, the Opposition-Based Learning (OBL) technique has been proven to effectively enhance the convergence of meta-heuristic algorithms. Because OBL generates alternative candidate solutions in one or more opposite directions, it ensures good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi-Reflection-OBL, Centre-OBL and Optimal-OBL. Although proven useful, most existing adoptions of OBL into meta-heuristic algorithms have been based on a single technique. If the search space contains many peaks with potentially many local optima, relying on a single OBL technique may not be sufficiently effective. In fact, if the peaks are close together, a single OBL technique may not be able to prevent entrapment in local optima. To address this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can usefully enhance the overall search performance. This paper presents a new adaptive approach that integrates more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs and selects among them based on individual performance: following a simple penalty-and-reward mechanism, the best-performing OBL is rewarded with continued execution in the next cycle, whilst a poorly performing one misses its turn. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA shows very competitive results against existing works in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on current performance feedback from the search process.
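As a minimal sketch of the ideas involved (using the standard OBL formulas from the literature, not the paper's OBL-JA code, and a deliberately simplified selection rule), the snippet below computes a Standard-OBL opposite and a quasi-reflected point on a bounded variable, plus a toy reward-based pick among OBL variants:

```python
import random

def standard_opposite(x, a, b):
    """Standard OBL: reflect x through the bounds [a, b] (opposite = a + b - x)."""
    return a + b - x

def quasi_reflected(x, a, b):
    """Quasi-reflection OBL: a random point between the interval centre and x."""
    c = (a + b) / 2.0
    return random.uniform(min(c, x), max(c, x))

def pick_obl(scores):
    """Toy adaptive selection: the OBL variant with the best recent improvement
    keeps executing; poorly performing variants miss the next cycle."""
    return max(scores, key=scores.get)

# Example on a search variable bounded by [0, 10]:
print(standard_opposite(2.0, 0.0, 10.0))          # 8.0
print(pick_obl({"standard": 0.7, "quasi": 0.3}))  # standard
```

In a full algorithm the scores would be updated each cycle from fitness improvements, so the selection adapts as the search landscape changes.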


    Applications of Hyper-parameter Optimisations for Static Malware Detection

    Malware detection is a major security concern, and a great deal of academic and commercial research and development is directed at it. Machine Learning (ML) is a natural technology to harness for malware detection, and many researchers have investigated its use. However, drawing comparisons between different techniques is a fraught affair. For example, the performance of ML algorithms often depends significantly on parametric choices, so the question arises as to which parameter choices are optimal. In this thesis, we investigate the use of a variety of ML algorithms for building malware classifiers and also how best to tune the parameters of those algorithms – a process generally known as hyper-parameter optimisation (HPO). Firstly, we examine the effects of some simple (model-free) ways of parameter tuning together with a state-of-the-art Bayesian model-building approach. We demonstrate that optimal parameter choices may differ significantly from default choices and argue that hyper-parameter optimisation should be adopted as a ‘formal outer loop’ in the research and development of malware detection systems. Secondly, we investigate the use of covering arrays (combinatorial testing) as a way to combat the curse of dimensionality in Grid Search. Four ML techniques were used: Random Forests, XGBoost, LightGBM and Decision Trees. cAgen (a tool used for combinatorial testing) is shown to be capable of generating high-performing subsets of the full parameter grid of Grid Search, and so provides a rigorous but highly efficient means of performing HPO. This may be regarded as a ‘design of experiments’ approach. Thirdly, evolutionary algorithms (EAs) were used to enhance machine learning classifier accuracy. The baseline accuracy of six traditional machine learning techniques is recorded, and two evolutionary algorithm frameworks, the Tree-Based Pipeline Optimization Tool (TPOT) and Distributed Evolutionary Algorithms in Python (DEAP), are compared. DEAP shows very promising results for our malware detection problem. Fourthly, we compare the use of Grid Search and covering arrays for tuning the hyper-parameters of Neural Networks. Several major hyper-parameters were studied with various values and results, and we achieve significant improvements over the benchmark model. Our work is carried out using EMBER, a major published malware benchmark dataset of Windows Portable Executable (PE) metadata samples, and a smaller dataset from kaggle.com (also comprising Windows Portable Executable metadata). Overall, we conclude that HPO is an essential part of credible evaluations of ML-based malware detection models. We also demonstrate that high-performing hyper-parameter values can be found by HPO, and that these can be found efficiently.
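To illustrate how covering arrays shrink a Grid Search (a toy greedy 2-way reducer, not the cAgen tool, with hypothetical hyper-parameter names and values not taken from the thesis), the sketch below picks configurations until every pair of parameter values is exercised at least once:

```python
from itertools import combinations, product

# Hypothetical hyper-parameter grid (names and values are illustrative only):
grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [4, 8, 16],
    "learning_rate": [0.01, 0.1, 0.3],
}

def greedy_pairwise(grid):
    """Greedily choose configurations until every pair of parameter values
    appears in at least one chosen configuration (a 2-way covering set)."""
    keys = list(grid)
    def pairs(cfg):
        return {((keys[i], cfg[i]), (keys[j], cfg[j]))
                for i, j in combinations(range(len(keys)), 2)}
    all_cfgs = list(product(*grid.values()))
    uncovered = set().union(*map(pairs, all_cfgs))
    chosen = []
    while uncovered:
        # Pick the configuration covering the most still-uncovered pairs.
        best = max(all_cfgs, key=lambda c: len(pairs(c) & uncovered))
        chosen.append(dict(zip(keys, best)))
        uncovered -= pairs(best)
    return chosen

subset = greedy_pairwise(grid)
print(len(list(product(*grid.values()))), len(subset))  # 27 full configs vs far fewer
```

Each chosen configuration would then be trained and scored as usual; the point is that pairwise coverage needs only a small fraction of the 27-point full grid, which is what makes the covering-array approach to HPO efficient.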