1,877 research outputs found

    Improved Annealing-Genetic Algorithm for Test Case Prioritization

    Get PDF
    Regression testing, which can improve the quality of software systems, is useful but time consuming. Many techniques have been introduced to reduce its time cost. Among these, test case prioritization is effective because it reduces the time cost by executing the relatively more important test cases at an earlier stage. Previous work has demonstrated that some greedy algorithms are effective for regression test case prioritization; those algorithms, however, suffer from limited stability and scalability. For this reason, this paper proposes a new regression test case prioritization approach based on an improved Annealing-Genetic algorithm, which combines Simulated Annealing with a Genetic Algorithm to explore a larger portion of the potential solution space in search of the global optimum. Three Java programs and five C programs were used to compare the performance of the new approach against five existing approaches, including Greedy, Additional Greedy, and GA. The experimental results showed that the proposed approach achieves better performance as well as higher stability and scalability than the existing approaches.
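
    The abstract does not include pseudocode, so the following is only a minimal sketch of how a simulated-annealing acceptance step can be combined with genetic operators for test case prioritization. The fitness function (average position at which each coverage item is first hit), the operators, and all parameter values are illustrative assumptions, not the paper's actual design.

```python
# Sketch of an annealing-genetic hybrid for test case prioritization (illustrative only).
import math
import random

def first_hit_cost(order, coverage):
    """Average index at which each coverage item is first covered (lower = better)."""
    items = set().union(*coverage.values())
    cost = 0.0
    for item in items:
        for pos, tc in enumerate(order):
            if item in coverage[tc]:
                cost += pos
                break
        else:
            cost += len(order)  # item never covered
    return cost / len(items)

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [tc for tc in p2 if tc not in child[a:b]]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(order):
    """Swap mutation."""
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def annealing_genetic(coverage, pop_size=20, generations=200, t0=1.0, cooling=0.98):
    tests = list(coverage)
    population = [random.sample(tests, len(tests)) for _ in range(pop_size)]
    temperature = t0
    best = min(population, key=lambda o: first_hit_cost(o, coverage))
    for _ in range(generations):
        parents = sorted(population, key=lambda o: first_hit_cost(o, coverage))[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = crossover(p1, p2)
            mutate(child)
            # Simulated-annealing acceptance: a worse child survives with
            # probability exp(-delta / T), which shrinks as T cools.
            delta = first_hit_cost(child, coverage) - first_hit_cost(p1, coverage)
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                children.append(child)
            else:
                children.append(p1)
        population = children
        temperature *= cooling
        best = min(population + [best], key=lambda o: first_hit_cost(o, coverage))
    return best

# Toy example: three tests covering branches of a small program.
coverage = {"t1": {"b1", "b2"}, "t2": {"b2", "b3"}, "t3": {"b1", "b3", "b4"}}
print(annealing_genetic(coverage))
```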

    A Survey on Software Testing Techniques using Genetic Algorithm

    Full text link
    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. Ensuring high quality requires testing, which verifies that the software meets user specifications and requirements. However, the field of software testing has a number of underlying issues that need to be tackled, such as effective generation of test cases and prioritisation of test cases. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed to address them. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA-based approaches for addressing the various issues encountered during software testing.
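
    As a rough illustration of the kind of GA-based test data generation surveyed above (not an example taken from the paper), the sketch below evolves an integer input toward a hard-to-reach branch of a hypothetical function; the fitness is a simple branch distance, and all parameters are illustrative.

```python
# GA sketch: evolve an input x so the hypothetical branch x*x - 200*x + 9975 == 0 is taken.
import random

def branch_distance(x):
    return abs(x * x - 200 * x + 9975)   # zero exactly when the target branch is taken

def genetic_search(pop_size=50, generations=200, lo=0, hi=1000):
    population = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=branch_distance)
        if branch_distance(population[0]) == 0:
            return population[0]                      # covering input found
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                      # arithmetic crossover
            if random.random() < 0.2:                 # mutation: small perturbation
                child += random.randint(-10, 10)
            children.append(min(max(child, lo), hi))
        population = children
    return min(population, key=branch_distance)       # best effort

print(genetic_search())   # 95 or 105 satisfy x^2 - 200x + 9975 = 0
```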

    Practical Combinatorial Interaction Testing: Empirical Findings on Efficiency and Early Fault Detection

    Get PDF
    Combinatorial interaction testing (CIT) is important because it tests the interactions between the many features and parameters that make up the configuration space of software systems. Simulated Annealing (SA) and greedy algorithms have been widely used to find CIT test suites. In the literature, there is a widely held belief that SA is slower but produces more effective test suites than greedy algorithms, and that SA cannot scale to higher-strength coverage. We evaluated both algorithms on seven real-world subjects for the well-studied two-way up to the rarely studied six-way interaction strengths. Our findings present evidence to challenge this current orthodoxy: real-world constraints allow SA to achieve higher strengths. Furthermore, there was no evidence that the greedy algorithm was less effective (in terms of time to fault revelation) than SA; its results are actually slightly superior. However, the results are critically dependent on the approach adopted to constraint handling. Moreover, we have also evaluated a genetic algorithm for constrained CIT test suite generation. This is the first time that strengths higher than 3 and constraint handling have been used to evaluate a GA. Our results show that the GA is competitive only for pairwise testing on subjects with a small number of constraints.
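
    For readers unfamiliar with CIT suite construction, the sketch below shows the greedy flavour discussed above for the pairwise (two-way) case: repeatedly add the configuration that covers the most uncovered parameter-value pairs. The parameter model is invented for illustration, and real tools (greedy, SA-based, or GA-based) layer constraint handling on top of this idea.

```python
# Greedy pairwise covering-array sketch (illustrative parameter model).
import itertools
import random

parameters = {
    "os": ["linux", "windows", "mac"],
    "browser": ["firefox", "chrome"],
    "db": ["mysql", "postgres", "sqlite"],
}

def all_pairs(params):
    """Every value pair for every pair of distinct parameters."""
    pairs = set()
    for p1, p2 in itertools.combinations(params, 2):
        for v1 in params[p1]:
            for v2 in params[p2]:
                pairs.add(tuple(sorted([(p1, v1), (p2, v2)])))
    return pairs

def pairs_of(config):
    return {tuple(sorted(pair)) for pair in itertools.combinations(config.items(), 2)}

def greedy_pairwise(params, candidates_per_step=50):
    uncovered = all_pairs(params)
    suite = []
    while uncovered:
        # Sample random candidate configurations and keep the best-covering one.
        best, best_gain = None, -1
        for _ in range(candidates_per_step):
            config = {p: random.choice(vs) for p, vs in params.items()}
            gain = len(pairs_of(config) & uncovered)
            if gain > best_gain:
                best, best_gain = config, gain
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

suite = greedy_pairwise(parameters)
print(len(suite), "tests cover all pairwise interactions")
```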

    An Analysis on the Applicability of Meta-Heuristic Searching Techniques for Automated Test Data Generation in Automatic Programming Assessment

    Get PDF
    Automatic Programming Assessment (APA) has been gaining a lot of attention among researchers, mainly as a way to support automated grading and marking of students' programming assignments or exercises systematically. APA is commonly described as a method that can enhance accuracy, efficiency, and consistency, as well as provide instant feedback on students' programming solutions. In achieving APA, the test data generation process is very important for performing dynamic testing of students' assignments. In the software testing field, much research focused on test data generation has demonstrated the successful adoption of Meta-Heuristic Search Techniques (MHST) to enhance the procedure of deriving adequate test data for efficient testing. Nonetheless, research on APA has thus far not exploited these techniques to achieve better program testing coverage. Therefore, this study conducted a comparative evaluation to identify which MHST are applicable for supporting efficient Automated Test Data Generation (ATDG) when executing dynamic-functional testing in APA. The comparative evaluation includes several recent MHST, combining both local and global search algorithms published between 2000 and 2018. The results of this study suggest the hybridization of Cuckoo Search with Tabu Search and Lévy flight as one of the most promising MHST to apply, as it outperforms the other MHST with regard to the number of iterations and the range of inputs.
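
    The Cuckoo Search / Lévy flight idea mentioned above can be sketched as follows for test data generation: candidate inputs ("nests") are perturbed with heavy-tailed Lévy steps, and a fraction of the worst nests is abandoned each generation. The fitness function and all parameters here are illustrative, not the study's own, and the Tabu Search hybridization is omitted for brevity.

```python
# Cuckoo Search with Lévy flights (minimal sketch, illustrative fitness).
import math
import random

def fitness(x):
    """Branch-distance-style objective: smaller is better, 0 when the target branch holds."""
    return abs(x * x - 200 * x + 9975)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Lévy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, generations=300, lo=0, hi=1000, abandon=0.25):
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(generations):
        # Perturb a random nest with a Lévy step scaled by its distance to the best nest.
        i = random.randrange(n_nests)
        candidate = min(max(nests[i] + levy_step() * (nests[i] - best), lo), hi)
        if fitness(candidate) < fitness(nests[i]):
            nests[i] = candidate
        # Abandon a fraction of the worst nests and replace them with random inputs.
        nests.sort(key=fitness)
        for j in range(int(n_nests * (1 - abandon)), n_nests):
            nests[j] = random.uniform(lo, hi)
        best = min(nests + [best], key=fitness)
    return round(best)

print(cuckoo_search())   # inputs near 95 or 105 drive the fitness toward 0
```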

    A Systematic Review of the Application and Empirical Investigation of Search-Based Test Case Generation

    Get PDF
    Search-based software testing uses metaheuristic algorithms to automate the generation of test cases. This thesis partially replicates a literature study published in 2010 by Ali et al. to determine how studies published in 2008-2015 use metaheuristic algorithms to automate the generation of test cases. The thesis analyses how these studies were conducted and how their cost-effectiveness is assessed. The trends detected in the new publications are compared to those presented by Ali et al.

    Search based software engineering: Trends, techniques and applications

    Get PDF
    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.

    A hierarchical, fuzzy inference approach to data filtration and feature prioritization in the connected manufacturing enterprise

    Get PDF
    In manufacturing, the technology to capture and store large volumes of data developed earlier and faster than the corresponding capabilities to analyze, interpret, and apply it. The result for many manufacturers is a collection of unanalyzed data and uncertainty about where to begin. This paper examines big data as both an enabler and a challenge for the connected manufacturing enterprise and presents a framework that sequentially tests and selects independent variables for training applied machine learning models. Unsuitable features are discarded, and each remaining feature receives a crisp numeric output and a linguistic label, both of which are measures of the feature's suitability. The framework is tested using three datasets employing time series, binary, and continuous input data. Results of the filtered models are compared to results obtained with the base, unfiltered sets of features using a proposed performance-size ratio metric. The framework outperforms the base feature sets in all tested cases, and future research will implement it in a case study in electronic assembly manufacturing.
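
    The filter-then-label idea described above can be illustrated with a minimal sketch; it is not the paper's actual hierarchical fuzzy inference system. Here each candidate feature gets a crisp suitability score (absolute Pearson correlation with the target, an assumption made for illustration) and a linguistic label derived from simple triangular membership functions, and features below a cutoff are discarded before model training.

```python
# Crisp score + linguistic label per feature, then filtration (illustrative sketch).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def linguistic_label(score):
    memberships = {
        "unsuitable": triangular(score, -0.01, 0.0, 0.3),
        "marginal": triangular(score, 0.2, 0.45, 0.7),
        "suitable": triangular(score, 0.6, 1.0, 1.01),
    }
    return max(memberships, key=memberships.get)

def filter_features(features, target, cutoff=0.3):
    """Return {name: (crisp score, linguistic label)} for the features kept."""
    kept = {}
    for name, values in features.items():
        score = abs(pearson(values, target))
        if score >= cutoff:
            kept[name] = (round(score, 3), linguistic_label(score))
    return kept

# Toy example: spindle temperature tracks the defect rate, ambient humidity does not.
features = {
    "spindle_temp": [61, 64, 70, 75, 83, 90],
    "humidity": [40, 55, 38, 52, 47, 41],
}
defect_rate = [1.0, 1.2, 1.9, 2.4, 3.1, 3.8]
print(filter_features(features, defect_rate))
```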

    Generalized Clusterwise Regression for Simultaneous Estimation of Optimal Pavement Clusters and Performance Models

    Full text link
    The existing state-of-the-art approach of Clusterwise Regression (CR) for estimating pavement performance models (PPMs) pre-specifies explanatory variables without testing their significance and requires the number of clusters for a given data set as an input. Time-consuming trial-and-error methods are then needed to determine the optimal number of clusters. A common objective function is the minimization of the total sum of squared errors (SSE). Given that SSE decreases monotonically with the number of clusters, the number of clusters that minimizes SSE is always the total number of data points. Hence, minimizing SSE is not an appropriate objective for seeking an optimal number of clusters. In previous studies, the PPMs were restricted to be either linear or nonlinear, irrespective of which functional form provided the best results. The existing mathematical programming formulations did not include constraints ensuring the minimum number of observations required in each cluster to achieve statistical significance. In addition, a pavement sample could be associated with multiple performance models, so additional modeling was required to combine the results from multiple models. To address all these limitations, this research proposes a generalized CR that simultaneously 1) finds the optimal number of pavement clusters, 2) assigns pavement samples to clusters, 3) estimates the coefficients of cluster-specific explanatory variables, and 4) determines the best functional form between linear and nonlinear models. Linear and nonlinear functional forms were investigated to select the best model specification. A mixed-integer nonlinear mathematical program was formulated with the Bayesian Information Criterion (BIC) as the objective function. The advantage of using BIC is that it penalizes the inclusion of additional parameters (i.e., number of clusters and/or explanatory variables); hence, the optimal CR models provide a balance between goodness of fit and model complexity. In addition, model selection using BIC has the property of consistency, asymptotically selecting the correct model specification with probability one. Comprehensive solution algorithms – Simulated Annealing coupled with Ordinary Least Squares for linear models and All Subsets Regression for nonlinear models – were implemented to solve the proposed mathematical program. The algorithms selected the best model specification for each cluster after exploring all possible combinations of potentially significant explanatory variables. Potential multicollinearity issues were investigated and addressed as required. The variables identified as significant were average daily traffic, pavement age, rut depth along the pavement, annual average precipitation and minimum temperature, road functional class, prioritization category, and the number of lanes; all of these are regarded in the literature as among the most critical factors in pavement deterioration. The predictive capability of the estimated models was also investigated. The results showed that the models were robust, exhibited no overfitting, and produced small prediction errors. The models developed using the proposed approach provided superior explanatory power compared to those developed using the existing state-of-the-art approach of clusterwise regression. In particular, for the data set used in this research, nonlinear models provided better explanatory power than the linear models. As expected, the results illustrated that different clusters may require different explanatory variables and associated coefficients. Similarly, determining the optimal number of clusters while estimating the corresponding PPMs contributed significantly to reducing the estimation error.
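
    The core mechanism can be sketched as follows; this is only a minimal illustration of Simulated Annealing coupled with Ordinary Least Squares under a BIC objective, not the dissertation's full mixed-integer formulation. The BIC form used here, n ln(SSE/n) + k ln(n), the toy data, and all parameters are assumptions for the sketch, which relies on NumPy.

```python
# SA-driven clusterwise OLS regression with BIC as the objective (illustrative sketch).
import math
import random
import numpy as np

def cluster_sse(X, y, idx):
    """Fit OLS with an intercept on the rows in idx and return the SSE."""
    A = np.column_stack([np.ones(len(idx)), X[idx]])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    residuals = y[idx] - A @ coef
    return float(residuals @ residuals)

def bic(X, y, labels, n_clusters):
    n, p = X.shape
    sse = sum(cluster_sse(X, y, np.where(labels == c)[0]) for c in range(n_clusters))
    k = n_clusters * (p + 1)                      # intercept + slopes per cluster
    return n * math.log(max(sse, 1e-12) / n) + k * math.log(n)

def sa_clusterwise(X, y, n_clusters=2, min_size=5, iters=3000, t0=5.0, cooling=0.999):
    n = len(y)
    labels = np.array([i % n_clusters for i in range(n)])
    current = bic(X, y, labels, n_clusters)
    best_labels, best = labels.copy(), current
    temperature = t0
    for _ in range(iters):
        i = random.randrange(n)
        old, new = labels[i], random.randrange(n_clusters)
        if new == old or np.sum(labels == old) <= min_size:
            continue                              # keep every cluster statistically usable
        labels[i] = new
        candidate = bic(X, y, labels, n_clusters)
        delta = candidate - current
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate                   # accept the reassignment
            if current < best:
                best_labels, best = labels.copy(), current
        else:
            labels[i] = old                       # reject the move
        temperature *= cooling
    return best_labels, best

# Toy data: two pavement "families" deteriorating at different rates with age.
rng = np.random.default_rng(0)
age = rng.uniform(0, 20, 80)
condition = np.where(np.arange(80) < 40, 95 - 1.5 * age, 90 - 3.0 * age)
condition += rng.normal(0, 1.0, 80)
labels, score = sa_clusterwise(age.reshape(-1, 1), condition)
print("best BIC:", round(score, 1))
```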

    Test Case Reduction Using Ant Colony Optimization for Object Oriented Program

    Get PDF
    Software testing is one of the vital stages of system development. In software development, developers continually depend on testing to reveal bugs. During the maintenance stage, the test suite grows as new functionalities are integrated: each new feature forces the creation of new test cases, which increases the cost of the test suite. In regression testing, new test cases may also be added to the test suite throughout the entire testing process. These additions create a risk of redundant test cases. Because of limited time and resources, reduction techniques should be used to identify and remove them. Analysis shows that a subset of the test cases in a suite that satisfies all the test objectives is called a representative set. Redundant test cases increase the execution cost of the test suite; despite the NP-completeness of the problem, a few practical reduction techniques are available. In this paper, a previously proposed GA-based technique is improved to find a cost-optimal representative set using ant colony optimization.
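
    As a rough illustration of ant colony optimization applied to test suite reduction in its set-cover formulation (the paper's actual pheromone update and cost model may differ), the sketch below lets each ant build a covering subset, preferring tests with high pheromone and high uncovered-requirement coverage per unit cost; cheaper covering subsets deposit more pheromone. The test/requirement data are invented for the example.

```python
# ACO sketch for test suite reduction as weighted set cover (illustrative only).
import random

# Which requirements each test case satisfies, and its execution cost.
tests = {
    "t1": ({"r1", "r2"}, 3.0),
    "t2": ({"r2", "r3", "r4"}, 4.0),
    "t3": ({"r1", "r5"}, 2.0),
    "t4": ({"r3", "r4", "r5"}, 5.0),
    "t5": ({"r4"}, 1.0),
}
requirements = set().union(*(req for req, _ in tests.values()))

def build_suite(pheromone, alpha=1.0, beta=2.0):
    """One ant assembles a subset of tests that covers all requirements."""
    uncovered, suite, cost = set(requirements), [], 0.0
    while uncovered:
        candidates = [t for t in tests if t not in suite and tests[t][0] & uncovered]
        weights = [
            pheromone[t] ** alpha * (len(tests[t][0] & uncovered) / tests[t][1]) ** beta
            for t in candidates
        ]
        chosen = random.choices(candidates, weights=weights, k=1)[0]
        suite.append(chosen)
        cost += tests[chosen][1]
        uncovered -= tests[chosen][0]
    return suite, cost

def aco_reduce(iterations=100, n_ants=10, evaporation=0.1):
    pheromone = {t: 1.0 for t in tests}
    best_suite, best_cost = None, float("inf")
    for _ in range(iterations):
        solutions = [build_suite(pheromone) for _ in range(n_ants)]
        for suite, cost in solutions:
            if cost < best_cost:
                best_suite, best_cost = suite, cost
        # Evaporate, then let the iteration-best ant reinforce its trail.
        for t in pheromone:
            pheromone[t] *= 1 - evaporation
        it_best, it_cost = min(solutions, key=lambda s: s[1])
        for t in it_best:
            pheromone[t] += 1.0 / it_cost
    return best_suite, best_cost

print(aco_reduce())   # e.g. a cheap representative set such as ['t2', 't3']
```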