
    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure delivery of high-quality software to the end user. To ensure high quality, software must be tested. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues place demands on the effort, time and cost of testing. Different techniques and methodologies have been proposed to address them. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this research paper, we present a survey of GA-based approaches for addressing the various issues encountered during software testing.
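
    To make the survey's subject concrete, the following minimal sketch shows the core idea behind GA-based test data generation: candidate inputs evolve under a fitness that rewards getting closer to taking a target branch. The function under test, the branch condition, and all parameter values are hypothetical choices for illustration, not taken from the surveyed work.

```python
# Minimal sketch of GA-based test data generation (illustrative only).
# The branch condition and all parameter choices are hypothetical; real
# search-based testing tools work on instrumented code.
import random

def branch_distance(x, y):
    """Distance to satisfying the hypothetical branch `x == 2 * y`:
    zero when the branch is taken, larger the further the inputs are."""
    return abs(x - 2 * y)

def evolve(pop_size=50, generations=100, mutation_rate=0.2):
    # Individuals are candidate inputs (x, y) drawn from an assumed range.
    pop = [(random.randint(-1000, 1000), random.randint(-1000, 1000))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))   # lower distance = fitter
        if branch_distance(*pop[0]) == 0:
            return pop[0]                                  # branch covered
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                           # one-point crossover
            if random.random() < mutation_rate:
                child = (child[0] + random.randint(-5, 5), child[1])
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: branch_distance(*ind))

print(evolve())
```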

    Improving Multi-Objective Test Case Selection by Injecting Diversity in Genetic Algorithms

    A way to reduce the cost of regression testing consists of selecting or prioritizing subsets of test cases from a test suite according to some criteria. Besides greedy algorithms, cost-cognizant additional greedy algorithms, multi-objective optimization algorithms, and Multi-Objective Genetic Algorithms (MOGAs) have also been proposed to tackle this problem. However, previous studies have shown that there is no clear winner between greedy algorithms and MOGAs, and that their combination does not necessarily produce better results. In this paper we show that the optimality of MOGAs can be significantly improved by diversifying the solutions (subsets of the test suite) generated during the search process. Specifically, we introduce a new MOGA, coined DIV-GA (DIversity based Genetic Algorithm), based on the mechanisms of orthogonal design and orthogonal evolution, which increase diversity by injecting new orthogonal individuals during the search process. Results of an empirical study conducted on eleven programs show that DIV-GA outperforms both greedy algorithms and traditional MOGAs from the optimality point of view. Moreover, the solutions (subsets of the test suite) provided by DIV-GA are able to detect more faults than those of the other algorithms, while keeping the same test execution cost.
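
    The bi-objective view behind this kind of test case selection can be illustrated with a small sketch that scores candidate subsets on (requirements covered, execution cost) and keeps the non-dominated ones. The toy test suite and the brute-force enumeration below are assumptions for illustration and do not reproduce DIV-GA or its orthogonal-design mechanisms.

```python
# Illustrative sketch of multi-objective test case selection: each candidate
# subset is scored on (coverage, cost), and non-dominated subsets form the
# Pareto front. The data below are made up.
from itertools import combinations

tests = {                       # test -> (covered requirements, execution cost)
    "t1": ({"r1", "r2"}, 3.0),
    "t2": ({"r2", "r3"}, 1.0),
    "t3": ({"r1", "r3", "r4"}, 4.0),
    "t4": ({"r4"}, 0.5),
}

def evaluate(subset):
    covered = set().union(*(tests[t][0] for t in subset)) if subset else set()
    cost = sum(tests[t][1] for t in subset)
    return len(covered), cost            # maximize coverage, minimize cost

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and differs somewhere.
    return a[0] >= b[0] and a[1] <= b[1] and a != b

candidates = [frozenset(c) for r in range(len(tests) + 1)
              for c in combinations(tests, r)]
scores = {c: evaluate(c) for c in candidates}
front = [c for c in candidates
         if not any(dominates(scores[o], scores[c]) for o in candidates)]
for c in sorted(front, key=lambda c: scores[c]):
    print(sorted(c), scores[c])
```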

    Comprehensive review on the state-of-the-arts and solutions to the test redundancy reduction problem with taxonomy

    The process of software testing is of utmost importance and requires a major allocation of resources. It has a substantial influence on the quality and dependability of software products. Nevertheless, as the number of test cases escalates, the feasibility of executing all of them diminishes, and the accompanying costs of preparation, execution time, and maintenance become prohibitively high. The objective of Test Redundancy Reduction (TRR) is to mitigate this issue by determining a minimal subset of the test suite that satisfies all the requirements of the primary test suite while reducing the number of test cases. To attain this objective, multiple methodologies have been suggested, encompassing heuristics, meta-heuristics, exact algorithms, hybrid approaches, and machine-learning techniques. This work provides a thorough examination of prior research on TRR, addressing deficiencies and making a valuable contribution to the current scholarly understanding. The literature study encompasses the complete chronology of TRR, incorporating all pertinent scholarly articles and practitioner-authored research papers published in English. This study aims to provide managers with valuable insights into the strengths and shortcomings of different TRR methodologies, enabling them to make well-informed decisions regarding the most appropriate approach for their specific needs. The primary objective of this study is to offer a comprehensive analysis of TRR and its consequential impact on mitigating expenses related to software testing. This study contributes to the extant literature by elucidating the present state of the art and delineating potential avenues for future research.
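
    As a minimal illustration of the TRR problem itself, the sketch below applies the classic greedy heuristic: repeatedly pick the test that covers the most still-uncovered requirements. The test suite and requirement sets are invented, and the greedy heuristic is only one of the method families surveyed.

```python
# Minimal sketch of a greedy heuristic for Test Redundancy Reduction:
# repeatedly pick the test covering the most still-uncovered requirements.
suite = {
    "t1": {"r1", "r2", "r3"},
    "t2": {"r2", "r3"},        # redundant given t1
    "t3": {"r4"},
    "t4": {"r3", "r4", "r5"},
}

def greedy_reduce(suite):
    remaining = set().union(*suite.values())
    reduced = []
    while remaining:
        best = max(suite, key=lambda t: len(suite[t] & remaining))
        if not suite[best] & remaining:
            break                      # remaining requirements are uncoverable
        reduced.append(best)
        remaining -= suite[best]
    return reduced

print(greedy_reduce(suite))            # e.g. ['t1', 't4'] covers r1..r5
```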

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML version 2.x, it has new Petri-net-like semantics. It has a wide application scope including embedded, workflow and web-service systems. For this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, CPN being a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of AD suits modeling both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques based on structural coverage and mutation are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is to improve test suite efficiency without compromising its effectiveness. One way of achieving this is to identify and remove redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary computation based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, a statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to include multi-objective minimization.
As redundancy is contextual, different criteria and their combinations can significantly change the resulting test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
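
    The mutation-based adequacy analysis mentioned above can be illustrated at the code level with a small sketch that seeds faults into a function and measures how many mutants the test suite distinguishes from the original. The thesis applies mutation to AD/CPN models rather than source code, so this is only a loose code-level analogy with made-up operators and tests.

```python
# Illustrative sketch of mutation-based adequacy analysis: a test suite is
# considered stronger the more seeded faults (mutants) it distinguishes
# from the original program. Program, mutants and tests are toy examples.
def original(a, b):
    return a + b

mutants = [
    lambda a, b: a - b,     # arithmetic operator replacement
    lambda a, b: a + b + 1, # constant perturbation
    lambda a, b: a,         # operand deletion
]

tests = [(1, 2), (0, 0), (5, -5)]

def mutation_score(program, mutants, tests):
    killed = 0
    for mutant in mutants:
        # A mutant is "killed" if at least one test observes a different output.
        if any(mutant(*t) != program(*t) for t in tests):
            killed += 1
    return killed / len(mutants)

print(f"mutation score: {mutation_score(original, mutants, tests):.2f}")
```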

    Online Domain Adaptation for Multi-Object Tracking

    Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality, large-scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution consists in re-using object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT.
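
    A heavily simplified sketch of the general idea of online adaptation of linear detectors is given below: each target-specific model is updated on incoming detections while being regularized toward a shared pre-trained model, which itself drifts slowly toward the targets. The features, loss, and update rules are illustrative assumptions, not the paper's algorithm.

```python
# Simplified sketch of online category-to-instance adaptation for linear
# detectors. Dimensions, learning rates, and the logistic loss are
# illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
w_shared = rng.normal(size=dim) * 0.1        # stand-in for a pre-trained detector

def sgd_step(w, x, y, w_prior, lr=0.1, lam=0.5):
    """One step of regularized logistic regression:
    loss = log(1 + exp(-y * w.x)) + (lam/2) * ||w - w_prior||^2."""
    margin = y * w.dot(x)
    grad = -y * x / (1.0 + np.exp(margin)) + lam * (w - w_prior)
    return w - lr * grad

# Per-target models start from the shared detector and adapt online.
w_targets = [w_shared.copy() for _ in range(3)]
for frame in range(100):
    for i, w in enumerate(w_targets):
        x = rng.normal(size=dim)             # feature of a tracked detection
        y = 1.0 if rng.random() < 0.7 else -1.0
        w_targets[i] = sgd_step(w, x, y, w_shared)
    # The shared model is also adapted slowly toward the average target model.
    w_shared = 0.99 * w_shared + 0.01 * np.mean(w_targets, axis=0)

print(np.round(w_shared[:4], 3))
```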

    Liner Service Network Design


    Evolutionary computation for software testing

    A variety of products have undergone a transformation from purely mechanical designs to ones built around more and more software and electronic components. Watches are a striking example: several decades ago they were purely mechanical, whereas modern smart watches are almost completely electronic devices that rely heavily on software and offer far more features than just telling the time. This change has had a crucial impact on how software is developed. A first attempt to control the rising complexity was the move to agile development practices such as extreme programming or scrum. The rise in complexity affects not only the development process but also quality assurance and software testing. As a product gains more and more features, more tests are needed to ensure quality standards. Furthermore, agile development practices work iteratively, which leads to repetitive testing and puts additional effort on the testing team. Within this thesis we aimed to ease the burden of testing and examined a series of subproblems that arise. A key source of complexity is the number of test cases. We intended to reduce the number of test cases before they are executed manually or implemented as automated tests. To this end we examined the test specification and, based on the requirements coverage of the individual tests, identified redundant tests. We relied on a novel metaheuristic called GCAIS, which we improved iteratively. Another task is to control the remaining complexity. Testing is often time-critical, and an appropriate subset of the available tests must be chosen in order to get a quick insight into the status of the device under test. We examined this challenge in two different testing scenarios. The first is semi-automated testing, where engineers execute a set of automated tests locally and closely observe the behaviour of the system under test. We extended GCAIS to compute test suites that satisfy different criteria when provided with sufficient search time. The second use case is fully automated testing in a continuous integration (CI) setting. CI focuses on frequent software build cycles that also include testing. These builds contain a testing stage that greatly emphasizes speed, so there too we have to select the crucial tests. However, due to the nature of the process, we have to continuously recompute a test suite for each build, as the software and perhaps even the test cases at hand have changed. Hence it is hard to compute the test suite ahead of time, and these tests have to be determined as part of the CI execution. We therefore switched to a computationally lightweight learning classifier system (LCS) to prioritize and select test cases. We integrated a series of innovations into an LCS known as XCSF, such as continuous priorities, experience replay and transfer learning. This enabled us to outperform a state-of-the-art artificial neural network which is used by companies such as Netflix. We further investigated how LCSs can be made faster using parallelism, developing generic approaches that may run on any multicore computing device. This is of interest for our CI use case, as the build server's architecture is unknown. However, the methods are also independent of the concrete LCS and are not tied to our testing problem.
We identified that many of the challenges faced in the CI use case have been tackled by Organic Computing (OC), for example the need to adapt to an ever-changing environment. Hence we relied on OC design principles to create a system architecture that wraps the developed LCS and integrates it into existing CI processes. The final system is robust and highly autonomous. A side effect of the high degree of autonomy is a high level of automation, which fits CI well. We also gave insight into the usability and delivery of the full system to our industrial partner. Test engineers can easily integrate it with a few lines of code and need no knowledge of LCS or OC in order to use it. Another implication of the developed system is that OC's ideas and design principles can also be employed outside the field of embedded systems, which shows that OC has a greater level of generality. The process of testing and correcting the errors found is still only partially automated. We make a first step towards automating the entire process and thereby draw an analogy to OC's concept of self-healing. As a first proof of concept of this school of thought we look at touch interfaces, where we can automatically manipulate the software to fulfil the specified behaviour, so that only a minimal amount of manual work is required.
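
    As a rough illustration of the CI prioritization idea described above, the following sketch keeps a simple reward-driven value estimate per test and selects the highest-valued tests for each build within a budget. It is a minimal stand-in under assumed names and parameters, not the XCSF-based learning classifier system developed in the thesis.

```python
# Simplified reward-driven test prioritization in a CI loop: tests that
# reveal failures gain value, others decay, and each build runs the
# highest-valued tests within a time budget.
import random

class Prioritizer:
    def __init__(self, tests, alpha=0.3):
        self.value = {t: 0.5 for t in tests}   # optimistic initial estimates
        self.alpha = alpha                      # learning rate

    def select(self, budget):
        ranked = sorted(self.value, key=self.value.get, reverse=True)
        return ranked[:budget]

    def update(self, results):
        for test, failed in results.items():
            reward = 1.0 if failed else 0.0
            self.value[test] += self.alpha * (reward - self.value[test])

tests = [f"t{i}" for i in range(20)]
flaky_region = set(tests[:5])                  # assume these fail more often
p = Prioritizer(tests)
for build in range(50):
    chosen = p.select(budget=8)
    results = {t: (random.random() < (0.6 if t in flaky_region else 0.05))
               for t in chosen}
    p.update(results)

print(p.select(budget=8))
```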

    Large Language Model for Multi-objective Evolutionary Optimization

    Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, whose search operators need a carefully handcrafted design based on domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well to new problems. To tackle these challenges, this work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for a decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM behavior, we further design an explicit white-box operator with randomness and propose a new version of the decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that our proposed method can achieve competitive performance with widely used MOEAs. It is also promising that an operator learned from only a few instances can generalize robustly to unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
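
    The zero-shot, black-box use of an LLM as a search operator can be sketched as follows: parent solutions and their objective values are serialized into a prompt, and the reply is parsed back into a new candidate. The prompt wording, the call_llm stub (which merely perturbs a parent so the example runs offline), and the parsing are assumptions for illustration, not the paper's actual prompts or interface.

```python
# Hedged sketch of an LLM-as-search-operator loop for a decomposition-based
# MOEA. call_llm is a placeholder for a real LLM API call.
import random

def call_llm(prompt: str) -> str:
    # Stub: perturb the first parent so the example runs without network access.
    first = prompt.split("\n")[1].split(" objectives:")[0].split("solution:")[1]
    values = [float(v) + random.uniform(-0.05, 0.05) for v in first.split(",")]
    return ",".join(f"{v:.4f}" for v in values)

def llm_offspring(parents, objectives):
    lines = ["Given these solutions and objective values, propose a better one."]
    for x, f in zip(parents, objectives):
        lines.append("solution:" + ",".join(f"{v:.4f}" for v in x)
                     + " objectives:" + ",".join(f"{v:.4f}" for v in f))
    lines.append("Reply with comma-separated decision variables only.")
    reply = call_llm("\n".join(lines))
    # Clamp the parsed reply back into the assumed [0, 1] decision space.
    return [min(1.0, max(0.0, float(v))) for v in reply.split(",")]

parents = [[random.random() for _ in range(4)] for _ in range(2)]
objectives = [[sum(x), max(x)] for x in parents]    # toy bi-objective values
print(llm_offspring(parents, objectives))
```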

    Optimization Techniques for Automated Software Test Data Generation

    This thesis proposes a variety of contributions to the field of evolutionary testing. We have covered a wide range of aspects of program testing: procedural and object-oriented source code, structural and functional paradigms, single-objective and multi-objective problems, isolated test cases and test sequences, and theoretical and experimental work. Regarding the analyses carried out, we have placed emphasis on the statistical analysis of the results to assess their practical significance. In summary, the main contributions of the thesis are the following. Definition of a new distance measure for the instanceof operator in object-oriented programs: in this work we focused on one aspect of object-oriented software, inheritance, and proposed approaches that can help guide the search for test data in the context of evolutionary testing. In particular, we proposed a distance measure for computing the branch distance in the presence of the instanceof operator in Java programs. We also proposed two mutation operators that modify candidate solutions based on the defined distance measure. Definition of a new complexity measure called "Branch Coverage Expectation": in this work we approach testing complexity from an original point of view: a program is more complex if it is more difficult to test automatically. Consequently, we define the "Branch Coverage Expectation" to provide insight into the difficulty of testing programs. This measure is founded on a Markov model of the program, which provides its theoretical basis. The analysis of this measure indicates that it is more highly correlated with branch coverage than the other static code measures, which means that it is a good way to estimate the difficulty of testing a program. Theoretical prediction of the number of test cases needed to cover a given percentage of a program: our Markov model of the program can be used to estimate the number of test cases needed to cover a given percentage of the program. We compared our theoretical prediction with the average of actual runs of a test data generator. This model can help predict the progress of the testing phase, which can consequently save time and cost for the whole project. This theoretical prediction could also be very useful for determining the percentage of the program covered given a number of test cases. Proposal of approaches to solve the multi-objective test data generation problem: in this chapter we study the multi-objective generation problem in order to analyse the performance of a direct multi-objective approach against the application of a single-objective algorithm followed by test case selection. We evaluated four multi-objective algorithms (MOCell, NSGA-II, SPEA2, and PAES), two single-objective algorithms (GA and ES), and two random algorithms. In terms of convergence towards the optimal Pareto front, GA and MOCell were the best solvers in our comparison. We would like to highlight that the single-objective approach, in which each branch is targeted separately, is more effective when the program has a high nesting degree.
Comparison of different prioritization strategies in product lines and classification trees: in the context of functional testing we addressed test case prioritization with two different representations, feature models representing software product lines and classification trees. We compared five approaches related to the classification tree method and two related to product lines, four of them proposed by us. The results indicate that, for both representations, the proposals based on a genetic algorithm outperform the rest in most experimental scenarios and are the best option under time or cost constraints. Definition of an extension of the classification tree method for the generation of test sequences: we formally defined this extension for the generation of test sequences, which can be useful for industry and for the research community. Its benefits are clear, since the cost of putting the artifact under test into the next state is no longer needed, while the size of the sequence is significantly reduced using metaheuristic techniques. In particular, our ant-colony-based proposal is the best algorithm in the comparison, being the only one that reaches maximum coverage for all models and coverage types. Exploration of the effect of different seeding strategies on the computation of optimal Pareto fronts in product lines: we studied the behaviour of classic multi-objective evolutionary algorithms applied to pairwise testing of product lines. The group of algorithms was selected to cover a wide and diverse range of techniques. Our evaluation clearly indicates that seeding strategies decisively help the search process: the more information available to create the initial population, the better the results obtained. Moreover, thanks to the use of multi-objective techniques we can provide a larger or smaller adequate test suite, in short, one that best fits the economic or technological constraints at hand. Proposal of an exact technique for computing the optimal Pareto front in software product lines: we proposed an exact approach for this computation in the multi-objective case with pairwise coverage. We define a 0-1 linear program and an algorithm based on SAT solvers to obtain the true Pareto front. The evaluation of the results indicates that, despite being an excellent method for computing optimal solutions, it suffers from poor scalability, since for large models the execution time increases considerably. After performing a correlation study, we confirmed our suspicions: there is a high correlation between the execution time and the number of products denoted by the program's feature model.
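
    One plausible way to read the instanceof distance contribution is as a measure over the inheritance hierarchy: an object whose class is "closer" to the required type gets a smaller branch distance. The toy hierarchy and the hop-counting definition below are illustrative assumptions, not the thesis's exact measure.

```python
# Illustrative sketch of a branch distance for an instanceof-style check:
# count inheritance edges between the object's class and the target type,
# so "almost matching" types score lower than unrelated ones.
parents = {                      # child class -> direct superclass
    "ArrayList": "AbstractList",
    "AbstractList": "AbstractCollection",
    "AbstractCollection": "Object",
    "HashMap": "AbstractMap",
    "AbstractMap": "Object",
}

def ancestors(cls):
    chain = [cls]
    while cls in parents:
        cls = parents[cls]
        chain.append(cls)
    return chain

def instanceof_distance(actual_cls, target_cls):
    chain = ancestors(actual_cls)
    if target_cls in chain:
        return 0                              # branch would be taken
    # Otherwise: hops up to the nearest common ancestor plus hops down to
    # the target, a crude proxy for how far the object is from the type.
    target_chain = ancestors(target_cls)
    common = next(c for c in chain if c in target_chain)
    return chain.index(common) + target_chain.index(common)

print(instanceof_distance("ArrayList", "AbstractCollection"))  # 0: satisfied
print(instanceof_distance("HashMap", "AbstractList"))          # positive
```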