
    Test case prioritization approaches in regression testing: A systematic literature review

    Context: Software quality can be assured through the software testing process. However, the testing phase is expensive, as it consumes considerable time. By scheduling the execution order of test cases through a prioritization approach, software testing efficiency can be improved, especially during regression testing. Objective: Prioritization is a notable step in constructing an effective software testing environment and can increase a system's commercial value. The main idea of this review is to examine and classify current test case prioritization (TCP) approaches based on the articulated research questions. Method: A set of search keywords applied to appropriate repositories was used to extract the studies that fulfilled all the defined criteria, classified into journal, conference, symposium, and workshop categories; 69 primary studies were selected through this review strategy. Results: The primary studies comprised 40 journal articles, 21 conference papers, three workshop articles, and five symposium articles. The results indicate that TCP approaches are still broadly open to improvement: each TCP approach has its own potential value, advantages, and limitations. Additionally, we found that variations in the starting point of the TCP process across approaches yield different timelines and benefits, allowing a project manager to choose the approach that suits the project schedule and available resources. Conclusion: Test case prioritization has already been discussed considerably in the software testing domain. However, a number of existing prioritization techniques can still be improved, especially in the data used and the execution process of each approach.

    How Time-Fault Ratio helps in Test Case Prioritization for Regression Testing

    Regression testing analyzes whether maintenance of the software has adversely affected its normal functioning. Regression testing is generally performed under strict time constraints; due to the limited time budget, it is not possible to exercise the software with all available test cases. Thus, reordering the test cases on the basis of their effectiveness is always needed. This paper proposes a prioritization technique that orders test cases by their Time-Fault Ratio (TFR). The technique tends to maximize fault detection, as faults are exposed in ascending order of their detection times. The proposed technique may be used at any stage of software development.
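    The paper itself does not publish code, but the ordering it describes can be sketched in a few lines. In this hypothetical sketch (names and data are invented), each test case carries a measured execution time and the set of faults it exposed in earlier runs; tests are ordered by ascending TFR, so cheap, fault-revealing tests run first under a tight time budget.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    exec_time: float                            # measured execution time (seconds)
    faults: set = field(default_factory=set)    # faults exposed in earlier runs

def tfr(tc: TestCase) -> float:
    """Time-Fault Ratio: execution time per fault detected.
    Tests that expose no known fault sort last."""
    return tc.exec_time / len(tc.faults) if tc.faults else float("inf")

def prioritize(tests):
    """Order tests by ascending TFR."""
    return sorted(tests, key=tfr)

suite = [
    TestCase("t1", 4.0, {"f1"}),
    TestCase("t2", 3.0, {"f1", "f2"}),
    TestCase("t3", 2.0, set()),
]
print([t.name for t in prioritize(suite)])  # ['t2', 't1', 't3']
```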

    Amortising the Cost of Mutation Based Fault Localisation using Statistical Inference

    Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must expend the high cost of mutation analysis after the observation of failures, which may present a challenge for practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance, against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires almost no, or only a very small, additional analysis cost, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top rank, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its top-rank localisation accuracy when using only 10% of the generated mutants, compared to results obtained without sampling.
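    As a rough illustration of the idea (not the authors' implementation; the matching rule, names, and data below are invented), the mutation analysis performed in advance yields, for each code location, the failure patterns its mutants induced; when a real failure is later observed, locations can be ranked by how closely their recorded patterns match the observed set of failing tests.

```python
# kill_matrix[location] -> list of frozensets, each the set of tests that
# failed while one mutant at that location was active (collected up front,
# before any real failure is observed).
kill_matrix = {
    "Foo.java:42": [frozenset({"t1", "t2"}), frozenset({"t1"})],
    "Bar.java:7":  [frozenset({"t3"})],
}

def localize(observed_failures: frozenset):
    """Rank locations by the fraction of their mutants whose historical
    failure pattern equals the observed one."""
    scores = {}
    for loc, patterns in kill_matrix.items():
        matches = sum(p == observed_failures for p in patterns)
        scores[loc] = matches / len(patterns)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(localize(frozenset({"t1", "t2"})))
# [('Foo.java:42', 0.5), ('Bar.java:7', 0.0)]
```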

    Simulation-based testing of highly configurable cyber-physical systems: automation, optimization and debugging

    Cyber-Physical Systems (CPSs) integrate digital cyber technologies with physical processes. The variability of these systems is increasing in order to meet the demands of different customers. As a result, CPSs are becoming configurable, or even product lines, which means they can be set into thousands or millions of configurations. Testing configurable CPSs is a time-consuming process, mainly due to the large number of configurations that need to be tested, which makes it infeasible to use a prototype of the system. Consequently, configurable CPSs are being tested using simulation. However, testing CPSs under simulation is still challenging. First, the simulation time is usually long, since, apart from the software, the physical layer needs to be simulated; this layer is typically modeled with complex mathematical models, which is computationally very costly. Second, CPSs involve different engineering domains, such as mechanics and electronics, and engineers from different domains typically employ different tools for modeling their subsystems, so co-simulation is employed to interconnect the different modeling and simulation tools. Although co-simulation is an advantage in terms of engineers' flexibility, the use of different simulation tools makes the simulation time longer. Lastly, when testing CPSs through simulation, different test levels exist (i.e., Model-, Software-, and Hardware-in-the-Loop), which increases the time needed to execute test cases.

    This thesis aims to advance the current practice of testing configurable CPSs by proposing methods for automation, optimization, and debugging. Regarding automation, we first propose a tool-supported methodology to automatically generate test system instances that permit automatically testing configurations of the configurable CPS (e.g., by employing test oracles); second, we propose a test case generation approach based on multi-objective search algorithms that generates cost-effective test suites. As for optimization, we propose a test case selection and a test case prioritization approach, both based on search algorithms, to cost-effectively test configurable CPSs at different test levels. Regarding debugging, we adapt a technique named Spectrum-Based Fault Localization to the product line engineering context and propose a fault isolation method. This permits localizing bugs not only in configurable CPSs but also in any product line where feature models are employed to model variability.
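    Spectrum-Based Fault Localization itself is a standard technique; as a reference point, the following sketch computes suspiciousness with the Ochiai formula, one common SBFL metric (the thesis adapts the technique to feature models, so the elements scored there are features rather than code lines; the data below is invented).

```python
import math

def ochiai(failed_cov: int, passed_cov: int, total_failed: int) -> float:
    """Ochiai suspiciousness of an element covered by failed_cov failing
    and passed_cov passing test cases, out of total_failed failing tests."""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# spectrum[element] = (times covered by failing tests, by passing tests)
spectrum = {"featureA": (3, 1), "featureB": (1, 5)}
total_failed = 3

ranking = sorted(spectrum, key=lambda e: -ochiai(*spectrum[e], total_failed))
print(ranking)  # ['featureA', 'featureB']
```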

    An empirical study on the use of defect prediction for test case prioritization

    Test case prioritization has been extensively researched as a means for reducing the time taken to discover regressions in software. While many different strategies have been developed and evaluated, prior experiments have shown them not to be effective at prioritizing test suites to find real faults. This paper presents a test case prioritization strategy based on defect prediction, a technique that analyzes code features, such as the number of revisions and authors, to estimate the likelihood that any given Java class will contain a bug. Intuitively, if defect prediction can accurately identify the class that is most likely to be buggy, a tool can prioritize tests to rapidly detect the defects in that class. We investigated how to configure a defect prediction tool, called Schwa, to maximize the likelihood of an accurate prediction, surfacing the link between perfect defect prediction and test case prioritization effectiveness. Using six real-world Java programs containing 395 real faults, we conducted an empirical evaluation comparing this paper's strategy, called G-clef, against eight existing test case prioritization strategies. The experiments reveal that using defect prediction to prioritize test cases reduces the number of test cases required to find a fault by, on average, 9.48% when compared with existing coverage-based strategies, and 10.4% when compared with existing history-based strategies.
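    The coupling between defect prediction and prioritization can be sketched as follows. This is a hypothetical illustration, not G-clef's actual implementation: given per-class defect probabilities (of the kind Schwa estimates from revision and authorship history) and test-to-class coverage, each test inherits the risk of the most defect-prone class it covers.

```python
# Invented inputs: per-class defect probabilities and test-to-class coverage.
defect_prob = {"Parser": 0.9, "Lexer": 0.4, "Utils": 0.1}
coverage = {
    "testParse":   ["Parser", "Utils"],
    "testTokens":  ["Lexer"],
    "testHelpers": ["Utils"],
}

def priority(test: str) -> float:
    """A test inherits the defect-proneness of the riskiest class it covers."""
    return max(defect_prob.get(c, 0.0) for c in coverage[test])

ordered = sorted(coverage, key=priority, reverse=True)
print(ordered)  # ['testParse', 'testTokens', 'testHelpers']
```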

    Variable-Based Fault Localization via Enhanced Decision Tree

    Fault localization, which aims to localize the root cause of the bug under repair, has been a longstanding research topic. Although many approaches have been proposed in recent decades, most existing studies work at the coarse-grained statement or method level, offering very limited insight into how to repair the bug (the granularity problem), and few studies target finer-grained fault localization. In this paper, we address the granularity problem and propose a novel finer-grained, variable-level fault localization technique. Specifically, we design a program-dependency-enhanced decision tree model to boost the identification of fault-relevant variables by discriminating failed and passed test cases based on variable values. To evaluate the effectiveness of our approach, we have implemented it in a tool called VARDT and conducted an extensive study over the Defects4J benchmark. The results show that VARDT outperforms state-of-the-art fault localization approaches, with at least 247.8% improvement in terms of bugs located at Top-1 and an average improvement of 330.5%. Besides, to investigate whether our finer-grained fault localization results can further improve the effectiveness of downstream automated program repair (APR) techniques, we adapted VARDT to the application of patch filtering, where VARDT outperforms the state-of-the-art PATCH-SIM by filtering 26.0% more incorrect patches. These results demonstrate the effectiveness of our approach, which also provides a new way of thinking about improving automatic program repair techniques.
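    The core idea can be illustrated with a plain decision tree (VARDT itself enhances the tree with program dependencies; this toy example uses scikit-learn and invented data): variable values recorded from passing and failing runs train a classifier, and the variables the tree splits on are flagged as fault-relevant.

```python
from sklearn.tree import DecisionTreeClassifier

variables = ["x", "y", "buf_len"]
# Each row: values of the monitored variables observed in one test run.
runs = [[1, 0, 8], [2, 0, 8], [3, 1, 0], [4, 1, 0]]
labels = [0, 0, 1, 1]  # 0 = run passed, 1 = run failed

tree = DecisionTreeClassifier(max_depth=2).fit(runs, labels)

# Variables the tree splits on discriminate failing from passing runs,
# so they are candidates for fault-relevant variables.
relevant = [v for v, imp in zip(variables, tree.feature_importances_) if imp > 0]
print(relevant)
```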

    Hybrid and dynamic static criteria models for test case prioritization of web application regression testing

    In the software testing domain, different techniques and approaches are used to support the process of regression testing effectively. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases in such a way that maximum fault detection is achieved in a shorter time. For web testing, however, the problems are the time needed to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier at a shorter execution time. This research proposes an approach comprising two models: a Hybrid Static Criteria Model (HSCM) and a Dynamic Weighting Static Criteria Model (DWSCM). Each model applies three criteria: the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM uses a clustering technique to group test cases, and a hybridized technique prioritizes test cases by relying on priorities assigned from the combination of the aforementioned criteria. A dynamic weighting scheme over the criteria is used to increase the fault detection rate. The findings revealed that the models improved the Average Percentage of Faults Detected (APFD), yielding an APFD of 98% for DWSCM and 87% for HSCM, and confirmed the ability of the proposed techniques to improve web application regression testing.
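    APFD, the metric used to evaluate both models, has a standard definition: for n test cases and m faults, APFD = 1 - (sum of TF_i)/(n*m) + 1/(2n), where TF_i is the position of the first test that reveals fault i. A small helper makes the computation concrete (test names and fault data below are invented):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.

    order        -- list of test names in execution order
    fault_matrix -- dict mapping each test name to the set of faults it detects
    APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n)
    """
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    first = {}
    for pos, test in enumerate(order, start=1):
        for f in fault_matrix[test]:
            first.setdefault(f, pos)  # record first position detecting f
    return 1 - sum(first.values()) / (n * m) + 1 / (2 * n)

matrix = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set()}
print(apfd(["t2", "t1", "t3"], matrix))  # 0.833...
```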

    SOFTWARE UNDER TEST IN SOFTWARE TESTING RESEARCH: A REVIEW

    The Software under Test (SUT) is an essential element of software testing research. Preparing an SUT is not simple: it requires accuracy and completeness, and it affects the quality of the research conducted. Currently, there are several ways to obtain an SUT for software testing research: building one's own SUT, building an SUT from open-source software, and using a ready-made SUT from a repository. This article discusses the results of identifying the SUTs used in many software testing studies. The research was conducted as a systematic literature review (SLR) using the Kitchenham protocol. The review covered 86 articles published in 2017-2020, selected in two stages: inclusion and exclusion criteria, followed by quality assessment. The results show that the use of open source is dominant: some researchers use open-source software as the basis for developing an SUT, while others use an SUT from a repository that provides ready-to-use SUTs. In this context, the Software-artifact Infrastructure Repository (SIR) and Defects4J are the most common choices among researchers.