
    Test-aware combinatorial interaction testing

    Combinatorial interaction testing (CIT) approaches systematically sample a given configuration space and select a set of configurations in which each valid t-way option setting combination appears at least once. A battery of test cases is then executed in the selected configurations. Existing CIT approaches, however, do not provide a systematic way of handling test-specific inter-option constraints. Improper handling of such constraints causes masking effects, which in turn cause testers to develop false confidence in their test processes, believing they have tested certain option setting combinations when they in fact have not. In this work, to avoid the harmful consequences of masking effects caused by improper handling of test-specific constraints, we compute t-way test-aware covering arrays. A t-way test-aware covering array is not just a set of configurations, as is the case with traditional covering arrays, but a set of configurations each of which is associated with a set of test cases. We furthermore present a set of empirical studies, conducted using two widely used, highly configurable software systems as our subject applications, demonstrating that test-specific constraints are likely to occur in practice and that the proposed approach is a promising and effective way of handling them.
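    To make the covering-array criterion concrete, the minimal sketch below checks whether a set of configurations covers every valid t-way option setting combination, skipping combinations excluded by a constraint. The option names, values, and the `valid` predicate are hypothetical illustrations; this checks coverage only and is not the paper's test-aware construction algorithm.

```python
from itertools import combinations, product

# Hypothetical configuration space: option name -> possible settings.
OPTIONS = {"cache": ["on", "off"], "compress": ["gzip", "none"], "tls": ["1.2", "1.3"]}

def valid(combo):
    # Hypothetical inter-option constraint: gzip compression requires the cache.
    d = dict(combo)
    return not (d.get("compress") == "gzip" and d.get("cache") == "off")

def uncovered_tway(configs, t=2):
    """Return the valid t-way option-setting combinations that no
    configuration in `configs` exhibits."""
    missing = []
    for opts in combinations(sorted(OPTIONS), t):
        for settings in product(*(OPTIONS[o] for o in opts)):
            combo = tuple(zip(opts, settings))
            if not valid(combo):
                continue  # constrained combinations need not be covered
            if not any(all(c[o] == s for o, s in combo) for c in configs):
                missing.append(combo)
    return missing

configs = [
    {"cache": "on", "compress": "gzip", "tls": "1.2"},
    {"cache": "on", "compress": "none", "tls": "1.3"},
    {"cache": "off", "compress": "none", "tls": "1.2"},
    {"cache": "on", "compress": "gzip", "tls": "1.3"},
    {"cache": "off", "compress": "none", "tls": "1.3"},
]
print(uncovered_tway(configs, t=2))  # [] means this is a 2-way covering array
```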

    Optimization Techniques for Automated Software Test Data Generation

    This thesis proposes a variety of contributions to the field of evolutionary testing. We have covered a wide range of aspects of program testing: procedural and object-oriented source code, structural and functional paradigms, single-objective and multi-objective problems, isolated test cases and test sequences, and theoretical and experimental work. Regarding the analyses carried out, we have emphasized the statistical analysis of the results to assess their practical significance. In summary, the main contributions of the thesis are:

    Definition of a new distance measure for the instanceof operator in object-oriented programs: In this work we focused on one aspect of object-oriented software, inheritance, to propose approaches that can help guide the search for test data in the context of evolutionary testing. In particular, we proposed a distance measure to compute the branch distance in the presence of the instanceof operator in Java programs. We also proposed two mutation operators that modify candidate solutions based on the defined distance measure.

    Definition of a new complexity measure called "Branch Coverage Expectation": In this work we approach testing complexity from an original point of view: a program is more complex if it is harder to test automatically. Accordingly, we define the "Branch Coverage Expectation" to provide insight into the difficulty of testing programs. The measure is founded on a Markov model of the program, which also provides its theoretical underpinnings. Our analysis indicates that this measure is more strongly correlated with branch coverage than the other static code measures, which means it is a good way to estimate how difficult a program is to test.

    Theoretical prediction of the number of test cases needed to cover a given percentage of a program: Our Markov model of the program can be used to estimate the number of test cases required to cover a given percentage of the program. We compared our theoretical prediction with the average of actual executions of a test data generator. This model can help predict the evolution of the testing phase, which can in turn save time and cost in the overall project. The theoretical prediction could also be very useful for determining the percentage of the program covered by a given number of test cases.

    Proposal of approaches to solve the multi-objective test data generation problem: In this chapter we study the multi-objective generation problem in order to compare the performance of a direct multi-objective approach against the application of a single-objective algorithm followed by test case selection. We evaluated four multi-objective algorithms (MOCell, NSGA-II, SPEA2, and PAES), two single-objective algorithms (GA and ES), and two random algorithms. In terms of convergence towards the optimal Pareto front, GA and MOCell were the best solvers in our comparison. We note that the single-objective approach, in which each branch is targeted separately, is more effective when the program has a high nesting degree.

    Comparison of different prioritization strategies for product lines and classification trees: In the context of functional testing we addressed test case prioritization with two different representations: feature models, which represent software product lines, and classification trees. We compared five approaches for the classification tree method and two for product lines, four of them proposed by us. The results indicate that, for both representations, the proposals based on a genetic algorithm outperform the rest in most experimental scenarios, and are the best option under time or cost constraints.

    Definition of an extension of the classification tree method for test sequence generation: We formally defined this extension for generating test sequences, which can be useful to both industry and the research community. Its benefits are clear: the cost of putting the artifact under test into the next state is no longer incurred, and at the same time we significantly reduce the size of the sequence by using metaheuristic techniques. In particular, our proposal based on ant colony optimization is the best algorithm in the comparison, being the only one that reaches maximum coverage for all models and coverage types.

    Exploration of the effect of different seeding strategies on the computation of optimal Pareto fronts in product lines: We studied the behavior of classical multi-objective evolutionary algorithms applied to pairwise testing of product lines. The group of algorithms was selected to cover a wide and diverse range of techniques. Our evaluation clearly indicates that seeding strategies decisively help the search process: the more information available to create the initial population, the better the results obtained. In addition, thanks to the use of multi-objective techniques, we can provide a larger or smaller test suite, in short, one that best fits the user's economic or technological constraints.

    Proposal of an exact technique for computing the optimal Pareto front in software product lines: We proposed an exact approach for this computation in the multi-objective case with pairwise coverage. We define a 0-1 linear program and an algorithm based on SAT solvers to obtain the true Pareto front. The evaluation of the results indicates that, despite being an excellent method for computing optimal solutions, it suffers from poor scalability, since for large models the execution time grows considerably. After a correlation study, we confirmed our suspicions: there is a high correlation between the execution time and the number of products denoted by the program's feature model.
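    As a hedged illustration of the Markov-based prediction mentioned above: if each branch i is exercised by a random test case with some probability p_i (in the thesis these would come from the program's Markov model; here they are made-up values), the expected coverage after n tests follows a coupon-collector-style formula, and the smallest n reaching a target coverage can be searched for directly. This is a sketch of the general idea, not the thesis' exact formulation.

```python
# Sketch: if branch i is exercised by a random test case with probability
# p_i (independently per test), the expected branch coverage after n tests is
#   E[cov(n)] = (1/|B|) * sum_i (1 - (1 - p_i)**n).
# The p_i values below are illustrative, not derived from a real program.
p = [0.9, 0.5, 0.2, 0.05, 0.01]  # hypothetical per-test branch probabilities

def expected_coverage(n, probs=p):
    return sum(1 - (1 - pi) ** n for pi in probs) / len(probs)

def tests_needed(target, probs=p, n_max=100_000):
    """Smallest n with E[cov(n)] >= target."""
    for n in range(1, n_max + 1):
        if expected_coverage(n, probs) >= target:
            return n
    return None

print(tests_needed(0.8))       # tests expected to reach 80% branch coverage
print(expected_coverage(50))   # expected coverage after 50 random tests
```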

    Moving forward with combinatorial interaction testing

    Combinatorial interaction testing (CIT) is an efficient and effective method of detecting failures that are caused by the interactions of various system input parameters. In this paper, we discuss CIT, point out some of the difficulties of applying it in practice, and highlight some recent advances that have improved CIT's applicability to modern systems. We also provide a roadmap for future research directions, one that we hope will lead to new CIT research and to higher-quality testing of industrial systems.

    Evolutionary algorithm for prioritized pairwise test data generation

    Ferrer, J., Kruse, P. M., Chicano, F., & Alba, E. (2012). Evolutionary algorithm for prioritized pairwise test data generation. In T. Soule & J. H. Moore (Eds.), Genetic and Evolutionary Computation Conference, GECCO '12, Philadelphia, PA, USA, July 7-11, 2012 (pp. 1213-1220).

    Combinatorial Interaction Testing (CIT) is a technique used to discover faults caused by parameter interactions in highly configurable systems. These systems tend to be large, and exhaustive testing is generally impractical. Indeed, when resources are limited, prioritization of test cases is a must: important test cases are assigned a high priority and should be executed earlier. On the one hand, the prioritization of test cases may reveal faults in early stages of the testing phase; on the other hand, the generation of minimal test suites that fulfill the demanded coverage criteria is an NP-hard problem, so search-based approaches are required to find (near) optimal test suites. In this work we present a novel evolutionary algorithm to deal with this problem. The experimental analysis compares five techniques on a set of benchmarks and reveals that the evolutionary approach is clearly the best in our comparison. The presented algorithm can be integrated into a professional tool for CIT.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
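    One way to picture what such an evolutionary algorithm optimizes is a fitness function rewarding weighted pairwise coverage, where more important parameter values carry larger weights. The sketch below is a minimal illustration under made-up weights; the paper's actual encoding and fitness may differ.

```python
from itertools import combinations

# Hypothetical (parameter, value) weights: higher means more important.
WEIGHTS = {
    ("os", "linux"): 3, ("os", "win"): 1,
    ("db", "pg"): 2, ("db", "sqlite"): 1,
    ("net", "ipv4"): 1, ("net", "ipv6"): 2,
}

def pair_weight(a, b):
    # Weight of a covered pair: product of its two value weights (an assumption).
    return WEIGHTS[a] * WEIGHTS[b]

def fitness(suite):
    """Sum of weights of distinct parameter-value pairs covered by `suite`.
    An evolutionary algorithm would maximize this while keeping suites small."""
    covered = set()
    for tc in suite:  # tc: dict parameter -> value
        for a, b in combinations(sorted(tc.items()), 2):
            covered.add((a, b))
    return sum(pair_weight(a, b) for a, b in covered)

suite = [{"os": "linux", "db": "pg", "net": "ipv6"},
         {"os": "win", "db": "sqlite", "net": "ipv4"}]
print(fitness(suite))
```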

    Prioritization of combinatorial test cases by incremental interaction coverage

    Combinatorial testing is a well-recognized testing method that has been widely applied in practice. To facilitate analysis, a common approach is to assume that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, the order of executing the test cases is critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter-value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strengths, we propose a new strategy that prioritizes combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than random prioritization and prioritization by test case generation order, and outperforms the interaction-coverage-based test prioritization technique in most cases.
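    A minimal sketch of the incremental idea follows: greedily pick the test covering the most uncovered combinations at the current strength, and move to the next strength once nothing new can be covered. The bump-and-reset behavior is a simplification of the paper's strategy, not its exact algorithm.

```python
from itertools import combinations

def combos(tc, strength):
    """All parameter-value combinations of the given strength in test case tc."""
    return set(combinations(sorted(tc.items()), strength))

def prioritize_incremental(tests, max_strength=3):
    """Greedy prioritization with incrementally adjusted strength (simplified)."""
    remaining, ordered = list(tests), []
    strength, covered = 1, set()
    while remaining:
        best = max(remaining, key=lambda tc: len(combos(tc, strength) - covered))
        if not (combos(best, strength) - covered) and strength < max_strength:
            strength, covered = strength + 1, set()  # bump strength, restart coverage
            continue
        ordered.append(best)
        remaining.remove(best)
        covered |= combos(best, strength)
    return ordered

tests = [{"a": 0, "b": 0, "c": 0}, {"a": 0, "b": 1, "c": 1}, {"a": 1, "b": 0, "c": 1}]
print(prioritize_incremental(tests))
```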

    New metrics for prioritized interaction test suites

    Combinatorial interaction testing has been well studied in recent years and widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults that are caused by parameter interactions. Due to some constraints in practical applications (e.g. limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed. Consequently, many strategies have been proposed to guide interaction test suite prioritization. It is, therefore, important to be able to evaluate the different interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (abbreviated APCCλ), which assesses the rate of interaction coverage at a strength λ (level of interaction among parameters) achieved by a given interaction test sequence S. However, APCCλ has two drawbacks: firstly, it has two requirements (that all test cases in S be executed, and that all possible λ-wise parameter-value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, which means that it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. Additionally, to overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can be used to distinguish different interaction test sequences, and hence to compare different test prioritization strategies.
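    An APCC-style rate of coverage can be sketched as the normalized area under the cumulative interaction-coverage curve of a test sequence. The formulation below is one common way to express such a metric and may differ in detail from the paper's definition.

```python
from itertools import combinations

def coverage_curve(sequence, strength):
    """Cumulative count of distinct strength-wise parameter-value
    combinations after each test case in the sequence."""
    covered, curve = set(), []
    for tc in sequence:
        covered |= set(combinations(sorted(tc.items()), strength))
        curve.append(len(covered))
    return curve

def apcc(sequence, strength, total_combinations):
    """Normalized area under the coverage curve: a common APCC-style
    formulation, not necessarily the paper's exact definition."""
    curve = coverage_curve(sequence, strength)
    return sum(c / total_combinations for c in curve) / len(curve)

seq = [{"a": 0, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
# 2-way combinations over a, b with two values each: 4 in total.
print(apcc(seq, strength=2, total_combinations=4))  # 0.5
```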

    Multi-objective test case prioritization in highly configurable systems: A case study

    Test case prioritization schedules test cases for execution in an order that attempts to accelerate the detection of faults. The order of test cases is determined by prioritization objectives such as covering code or critical components as rapidly as possible. The importance of this technique has been recognized in the context of Highly-Configurable Systems (HCSs), where the potentially huge number of configurations makes testing extremely challenging. However, current approaches for test case prioritization in HCSs suffer from two main limitations. First, the prioritization is usually driven by a single objective, which neglects the potential benefits of combining multiple criteria to guide the detection of faults. Second, instead of using industry-strength case studies, evaluations are conducted using synthetic data, which provides no information about the effectiveness of different prioritization objectives. In this paper, we address both limitations by studying 63 combinations of up to three prioritization objectives in accelerating the detection of faults in the Drupal framework. Results show that non-functional properties such as the number of changes in the features are more effective than functional metrics extracted from the configuration model. Results also suggest that multi-objective prioritization typically results in faster fault detection than mono-objective prioritization.

    CICYT TIN2012-32273; CICYT TIN2015-70560-R; Junta de Andalucía P12-TIC-186.

    Abstract test case prioritization using repeated small-strength level-combination coverage

    Abstract Test Cases (ATCs) have been widely used in practice, including in combinatorial testing and in software product line testing. When constructing a set of ATCs, due to limited testing resources in practice (for example in regression testing), Test Case Prioritization (TCP) has been proposed to improve testing quality, aiming at ordering test cases to increase the speed with which faults are detected. One intuitive and extensively studied TCP technique for ATCs is λ-wise Level-combination Coverage-based Prioritization (λLCP), a static, black-box prioritization technique that only uses the ATC information to guide the prioritization process. A challenge facing λLCP, however, is the need to select an appropriate fixed prioritization strength λ before testing begins. Choosing higher λ values may improve the testing effectiveness of λLCP (for example, by finding faults faster), but may reduce the testing efficiency (by incurring additional prioritization costs). Conversely, choosing lower λ values may improve the efficiency, but may also reduce the effectiveness. In this paper, we propose a new family of λLCP techniques, Repeated Small-strength Level-combination Coverage-based Prioritization (RSLCP), which repeatedly achieves full combination coverage at lower strengths. RSLCP maintains λLCP's advantages of being static and black-box, but avoids the challenge of prioritization strength selection. We performed an empirical study involving five different versions of each of five C programs. Compared with λLCP and Incremental strength LCP (ILCP), our results show that RSLCP provides a good trade-off between testing effectiveness and efficiency. Our results also show that RSLCP is more effective and efficient than two popular techniques of Similarity-based Prioritization (SP). In addition, the empirical results show that RSLCP remains robust over multiple system releases.
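    A rough sketch of the "repeated small-strength" idea: prioritize greedily by coverage at a small fixed strength and, whenever no remaining test adds new coverage, reset the covered set and start another pass. Tie-breaking and the handling of already-ordered tests are simplifying assumptions here, not the paper's exact algorithm.

```python
from itertools import combinations

def combos(tc, strength=2):
    return set(combinations(sorted(tc.items()), strength))

def rslcp(tests, strength=2):
    """Repeatedly achieve full combination coverage at a small strength
    (simplified): greedy selection with a coverage reset between passes."""
    remaining, ordered, covered = list(tests), [], set()
    while remaining:
        gains = {i: len(combos(tc, strength) - covered)
                 for i, tc in enumerate(remaining)}
        best = max(gains, key=gains.get)
        if gains[best] == 0 and covered:
            covered = set()  # full coverage for this pass; start another
            continue
        ordered.append(remaining.pop(best))
        covered |= combos(ordered[-1], strength)
    return ordered

tests = [{"p": 0, "q": 0, "r": 1}, {"p": 1, "q": 0, "r": 0}, {"p": 1, "q": 1, "r": 1}]
print(rslcp(tests))
```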

    Aggregate-strength interaction test suite prioritization

    Combinatorial interaction testing is a widely used approach. In testing, it is often assumed that all combinatorial test cases have equal fault detection capability; however, it has been shown that the execution order of an interaction test suite's test cases may be critical, especially when testing resources are limited. To improve testing cost-effectiveness, test cases in the interaction test suite can be prioritized. One of the best-known categories of prioritization approaches is "fixed-strength prioritization", which prioritizes an interaction test suite by choosing new test cases with the highest uncovered interaction coverage at a fixed strength (level of interaction among parameters). A drawback of these approaches, however, is that when selecting each test case they only consider a single fixed strength, not multiple strengths. To overcome this, we propose a new "aggregate-strength prioritization" that combines interaction coverage at different strengths. Experimental results show that in most cases our method performs better than the test-case-generation, reverse test-case-generation, and random prioritization techniques. The method also usually outperforms "fixed-strength prioritization" while maintaining a similar time cost.
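    The aggregate-strength idea can be sketched as scoring each candidate test case by combining its uncovered-combination contributions at several strengths at once. The equal-weight average below is an assumption for illustration; the paper's weighting scheme may differ.

```python
from itertools import combinations

def combos(tc, strength):
    return set(combinations(sorted(tc.items()), strength))

def aggregate_score(tc, covered_by_strength, strengths=(1, 2, 3)):
    """Average, over several strengths, of the fraction of a test case's
    combinations that are still uncovered (equal weights are an assumption)."""
    scores = []
    for s in strengths:
        cs = combos(tc, s)
        scores.append(len(cs - covered_by_strength[s]) / len(cs) if cs else 0.0)
    return sum(scores) / len(scores)

def prioritize_aggregate(tests, strengths=(1, 2, 3)):
    covered = {s: set() for s in strengths}
    remaining, ordered = list(tests), []
    while remaining:
        best = max(remaining, key=lambda tc: aggregate_score(tc, covered, strengths))
        remaining.remove(best)
        ordered.append(best)
        for s in strengths:
            covered[s] |= combos(best, s)
    return ordered

tests = [{"x": 0, "y": 0, "z": 0}, {"x": 1, "y": 1, "z": 0}, {"x": 1, "y": 0, "z": 1}]
print(prioritize_aggregate(tests))
```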