
    SOFTWARE FAULT DETECTION VIA GRAMMAR-BASED TEST CASE GENERATION

    Fault detection helps to reduce failure causes by systematically locating and eliminating defects. In this thesis, we present a novel fault detection technique for structured input data that can be represented by a grammar. We take a set of well-distributed test cases as input, each of which carries a set of test requirements. We show that test requirements derived from structured data can be used effectively as coverage criteria to reduce test suites. We then propose an automatic fault detection approach to locate the software bugs revealed by failed test cases. This method can be applied to testing data-input-critical software such as compilers, translators, and reactive systems. A preliminary experimental study shows that our fault detection approach can precisely locate faults in the software under test from failed test cases.
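    To illustrate how grammar-derived test requirements can act as a reduction criterion, the following sketch uses an invented toy expression grammar and helper names (none of them from the thesis): each structured input is mapped to the set of grammar productions it exercises, and inputs whose productions are already covered are treated as redundant.

        # Minimal sketch (assumed toy grammar, not the thesis's tool): derive
        # grammar-based test requirements by recording which productions a
        # recursive-descent parser exercises for each structured test input.
        import re

        TOKEN = re.compile(r"\d+|[()+\-*/]")

        def parse(text):
            """Parse an arithmetic expression and return the productions it exercises."""
            tokens = TOKEN.findall(text)
            pos = 0
            used = set()  # test requirements: productions covered by this input

            def peek():
                return tokens[pos] if pos < len(tokens) else None

            def expr():
                nonlocal pos
                used.add("expr -> term (('+'|'-') term)*")
                term()
                while peek() in ("+", "-"):
                    pos += 1
                    term()

            def term():
                nonlocal pos
                used.add("term -> factor (('*'|'/') factor)*")
                factor()
                while peek() in ("*", "/"):
                    pos += 1
                    factor()

            def factor():
                nonlocal pos
                if peek() == "(":
                    used.add("factor -> '(' expr ')'")
                    pos += 1
                    expr()
                    pos += 1  # consume ')'
                else:
                    used.add("factor -> NUMBER")
                    pos += 1

            expr()
            return used

        # A test case whose productions are all covered by previously kept
        # cases adds no new requirement and is dropped from the suite.
        suite = ["1+2", "3*4+5", "(1+2)*3", "7"]
        covered, kept = set(), []
        for case in suite:
            reqs = parse(case)
            if not reqs <= covered:
                kept.append(case)
                covered |= reqs
        print("kept:", kept, "requirements:", len(covered))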

    REDUNET: reducing test suites by integrating set cover and network-based optimization

    The availability of effective test suites is critical for the development and maintenance of reliable software systems. To increase test effectiveness, software developers tend to employ larger and larger test suites. The recent availability of software tools for automatic test generation makes building large test suites affordable, therefore contributing to accelerating this trend. However, large test suites, though more effective, are resource- and time-consuming and therefore cannot be executed frequently. Reducing them without decreasing code coverage is a necessary compromise between the efficiency and effectiveness of testing, enabling a more regular check of the software under development. We propose a novel approach, namely REDUNET, to reduce a test suite while keeping the same code coverage. We integrate this approach into a complete framework for the automatic generation of efficient and effective test suites, which includes test suite generation, code coverage analysis, and test suite reduction. Our approach formulates test suite reduction as a set cover problem and applies integer linear programming together with a network-based optimisation that takes advantage of the properties of the control flow graph. We find the optimal set of test cases that keeps the same code coverage in fractions of seconds on real software projects and test suites generated automatically by Randoop. The results on ten real software systems show that the proposed approach finds the optimal minimisation, achieving up to 90% reduction and more than 50% reduction on all systems under analysis. On the largest project, our reduction algorithm runs more than three times faster than both integer linear programming alone and the state-of-the-art Harrold-Gupta-Soffa heuristic.
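    The set-cover formulation at the core of the approach can be sketched as a small integer program. The snippet below uses the PuLP library and an invented coverage map rather than REDUNET's own implementation, and it omits the network-based pruning based on the control flow graph:

        # Hedged sketch of test-suite reduction as set cover via ILP (PuLP,
        # not REDUNET itself); the coverage map is invented for illustration.
        import pulp

        # which code elements (e.g. branches) each test case covers
        coverage = {
            "t1": {"b1", "b2"},
            "t2": {"b2", "b3"},
            "t3": {"b1", "b3", "b4"},
            "t4": {"b4"},
        }
        elements = set().union(*coverage.values())

        prob = pulp.LpProblem("test_suite_reduction", pulp.LpMinimize)
        keep = {t: pulp.LpVariable(f"keep_{t}", cat="Binary") for t in coverage}

        # objective: keep as few test cases as possible
        prob += pulp.lpSum(keep.values())

        # constraint: every element covered by the original suite stays covered
        for e in elements:
            prob += pulp.lpSum(keep[t] for t, cov in coverage.items() if e in cov) >= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        reduced = sorted(t for t, v in keep.items() if v.value() and v.value() > 0.5)
        print("reduced suite:", reduced)  # e.g. ['t1', 't3'] keeps b1..b4 covered

    Every constraint requires that each code element covered by the original suite remains covered by at least one selected test, while the objective minimises the number of selected tests.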

    Comprehensive review on the state-of-the-arts and solutions to the test redundancy reduction problem with taxonomy

    The process of software testing is of utmost importance and requires a major allocation of resources. It has a substantial influence on the quality and dependability of software products. Nevertheless, as the number of test cases grows, executing all of them becomes infeasible, and the accompanying costs of preparation, execution time, and maintenance become prohibitively high. The objective of Test Redundancy Reduction (TRR) is to mitigate this issue by determining a minimal subset of the test suite that satisfies all the requirements of the primary test suite while lowering the number of test cases. To attain this objective, multiple methodologies have been suggested, encompassing heuristics, meta-heuristics, exact algorithms, hybrid approaches, and machine-learning techniques. This work provides a thorough examination of prior research on TRR, addressing deficiencies and contributing to the current scholarly understanding. The literature study covers the complete chronology of TRR, incorporating all pertinent scholarly articles and practitioner-authored research papers published in English. It aims to provide managers with valuable insights into the strengths and shortcomings of different TRR methodologies, enabling them to make well-informed decisions about the most appropriate approach for their specific needs. The primary objective of this study is to offer a comprehensive analysis of Test Redundancy Reduction and its impact on mitigating the costs of software testing. This study contributes to the extant literature by elucidating the present state of the art and delineating potential avenues for future research.
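    As a concrete example of the heuristic family surveyed above, the sketch below implements the classic greedy reduction loop (repeatedly keep the test case that satisfies the most uncovered requirements); the requirement data is invented and the snippet does not reproduce any specific algorithm from the review:

        # Hedged illustration of a classic greedy TRR heuristic (not a specific
        # algorithm from the review); requirements per test case are invented.
        def greedy_reduce(requirements):
            """Keep test cases until every requirement of the full suite is covered."""
            uncovered = set().union(*requirements.values())
            kept = []
            while uncovered:
                # pick the test satisfying the most still-uncovered requirements
                best = max(requirements, key=lambda t: len(requirements[t] & uncovered))
                if not requirements[best] & uncovered:
                    break  # remaining requirements are unsatisfiable (defensive)
                kept.append(best)
                uncovered -= requirements[best]
            return kept

        suite = {
            "tc1": {"r1", "r2", "r3"},
            "tc2": {"r3", "r4"},
            "tc3": {"r4", "r5"},
            "tc4": {"r1", "r5"},
        }
        print(greedy_reduce(suite))  # e.g. ['tc1', 'tc3'] covers r1..r5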

    Automated Software Testing of Relational Database Schemas

    Relational databases are critical for many software systems, holding the most valuable data for organisations. Data engineers build relational databases using schemas to specify the structure of the data within a database and to define integrity constraints. These constraints protect the data's consistency and coherency, leading industry experts to recommend testing them. Since manual schema testing is labour-intensive and error-prone, automated techniques enable the generation of test data. Although these generators are well-established and effective, they use default values and often produce many long, similar tests, which decreases fault detection and increases regression testing time and testers' inspection effort. This raises the following questions: How effective is the optimised random generator at generating tests, and how does its fault detection compare to prior methods? What factors make tests understandable for testers? How can tests be reduced while maintaining effectiveness? How effectively do testers inspect differently reduced tests? To answer these questions, the first contribution of this thesis is an empirical evaluation of a new optimised random generator against well-established methods. The second is a human study identifying understandability factors of schema tests. The third is an evaluation of a novel approach that reduces and merges tests against traditional reduction methods. Finally, the fourth is a human study of testers' inspection effort on differently reduced tests. The results show that the optimised random method efficiently generates effective tests compared to well-established methods. Testers reported that many NULLs and negative numbers are confusing, and that they prefer simple repetition of unimportant values and readable strings. The reduction technique with merging is the most effective at minimising tests and producing efficient tests while maintaining effectiveness compared to traditional methods. The merged tests showed an increase in inspection efficiency with a slight decrease in accuracy compared to only-reduced tests. These techniques and investigations can therefore help practitioners adopt such generators in practice.
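    The kind of test such generators emit can be sketched with plain sqlite3; the schema, values, and pass/fail messages below are invented for illustration and are not produced by the thesis's tooling. One INSERT exercises constraint satisfaction and one deliberately violates the NOT NULL constraint:

        # Hedged sketch of an automatically generated schema test (plain sqlite3,
        # not the thesis's tool); schema and data values are illustrative only.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (
                id    INTEGER PRIMARY KEY,
                email TEXT NOT NULL UNIQUE
            );
        """)

        # test case 1: data that should be accepted (constraint satisfaction)
        conn.execute("INSERT INTO customers (id, email) VALUES (?, ?)",
                     (1, "a@example.com"))

        # test case 2: data that should be rejected (NOT NULL violation)
        try:
            conn.execute("INSERT INTO customers (id, email) VALUES (?, ?)",
                         (2, None))
            print("FAIL: NOT NULL constraint was not enforced")
        except sqlite3.IntegrityError as exc:
            print("PASS: rejected as expected:", exc)

    The readable e-mail string stands in for the understandability findings mentioned above: testers preferred simple, readable values over arbitrary defaults.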

    On the influence of test adequacy criteria on test suite reduction for model-based testing of real-time systems.

    Model-based testing is a testing approach that relies on abstract models of an application to generate, execute and evaluate tests. Test case generation plays an important role in model-based testing. Since it consists of a systematic search for test cases that can be extracted from models, model-based testing usually generates large test suites that are too expensive to execute in full. Test suite reduction techniques have been proposed to address this problem. Their goal is to obtain reduced test suites that are both cheaper to execute and as effective at detecting faults as the original suite, given that the reduced suites maintain the same coverage level of the complete test suite as required by a test adequacy criterion. These criteria define which parts of the system are going to be tested, how often and under what circumstances. Nevertheless, little attention has been paid to the impact of the criterion choice in test suite reduction research. Real-time systems, in turn, are reactive systems whose behaviour is constrained by time; consequently, time-related faults are specific to these systems. To cope with this, models for real-time systems must deal with time and, consequently, there are specific test adequacy criteria for them. However, test suite reduction research has not focused on real-time systems, so the impact of test adequacy criteria for models of real-time systems on test suite reduction is unknown. In this doctoral research, we investigate the influence of test adequacy criteria on the outcomes of test suite reduction techniques in the context of model-based testing of real-time systems. In particular, we are interested in the Timed Input-Output Symbolic Transition Systems (TIOSTS) model because it is an expressive transition system in which data and time are symbolically defined, and transition systems are the basis for conformance testing of real-time systems. To achieve the research objective, we first defined 19 test adequacy criteria for TIOSTS models, including transition-based, data-flow-oriented and real-time criteria, and formalised a hierarchy in which these criteria are partially ordered by the strict inclusion relation. Second, we empirically evaluated the cost-effectiveness of twelve of the defined criteria and five test suite reduction techniques. We evaluated the size, execution time and fault detection of the reduced test suites obtained from each of the 60 combinations of criterion and technique. In the experiment, we used TIOSTS specification models of a subway-card refilling machine, a burglar alarm system, and an automated car speed limiter; simulations of the implementations, which generate correct traces for the models; and mutation testing to generate mutants of the specification models, which were in turn translated into simulations of faulty implementations. Empirical evidence suggests that test adequacy criteria closer to the top of the hierarchy produced reduced test suites with better cost-effectiveness regarding fault detection and execution time. With respect to the test suite reduction techniques, the Random technique obtained the best cost-effectiveness among the evaluated techniques. The results also suggest that the criteria explain more of the variation in fault detection and execution time of reduced test suites than the techniques.
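    To make the notion of a test adequacy criterion concrete, the sketch below computes transition coverage over a toy (untimed, non-symbolic) transition system and reduces a suite while preserving that coverage; the model, traces, and names are invented and far simpler than TIOSTS, which additionally handles symbolic data and time:

        # Hedged sketch: transition-coverage criterion on a toy transition system
        # (much simpler than TIOSTS; states, transitions, and tests are invented).
        transitions = {
            ("idle", "coin?", "ready"),
            ("ready", "ticket!", "idle"),
            ("ready", "cancel?", "refunding"),
            ("refunding", "refund!", "idle"),
        }

        def covered_transitions(trace):
            """Map a test trace (sequence of actions) to the transitions it exercises."""
            state, used = "idle", set()
            for action in trace:
                for (src, act, dst) in transitions:
                    if src == state and act == action:
                        used.add((src, act, dst))
                        state = dst
                        break
            return used

        suite = {
            "t1": ["coin?", "ticket!"],
            "t2": ["coin?", "cancel?", "refund!"],
            "t3": ["coin?", "ticket!", "coin?", "ticket!"],
        }

        # reduce while preserving transition coverage (the adequacy criterion)
        goal = set().union(*(covered_transitions(tr) for tr in suite.values()))
        kept, covered = [], set()
        for name, trace in suite.items():
            new = covered_transitions(trace) - covered
            if new:
                kept.append(name)
                covered |= new
        print("criterion met:", covered == goal, "reduced suite:", kept)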

    Improving regression testing efficiency and reliability via test-suite transformations

    As software becomes more important and ubiquitous, high-quality software also becomes crucial. Developers constantly make changes to improve software, and they rely on regression testing, the process of running tests after every change, to ensure that changes do not break existing functionality. Regression testing is widely used both in industry and in open source, but it suffers from two main challenges. (1) Regression testing is costly. Developers run a large number of tests in the test suite after every change, and changes happen very frequently. The cost is both in the time developers spend waiting for the tests to finish running so that they know whether the changes break existing functionality, and in the monetary cost of running the tests on machines. (2) Regression test suites contain flaky tests, which nondeterministically pass or fail when run on the same version of code, regardless of any changes. Flaky test failures can mislead developers into believing that their changes break existing functionality, even though those tests can fail without any changes. Developers will therefore waste time trying to debug nonexistent faults in their changes. This dissertation proposes three lines of work that address these challenges of regression testing through test-suite transformations that modify test suites to make them more efficient or more reliable. Specifically, two lines of work explore how to reduce the cost of regression testing and one line of work explores how to fix existing flaky tests. First, this dissertation investigates the effectiveness of test-suite reduction (TSR), a traditional test-suite transformation that removes tests deemed redundant with respect to other tests in the test suite based on heuristics. TSR outputs a smaller, reduced test suite to be run in the future. However, TSR risks removing tests that can potentially detect faults in future changes. While TSR was proposed over two decades ago, it was always evaluated using program versions with seeded faults. Such evaluations do not precisely predict the effectiveness of the reduced test suite on future changes. This dissertation evaluates TSR in a real-world setting using real software evolution with real test failures. The results show that TSR techniques proposed in the past are not as effective as suggested by traditional TSR metrics, and those same metrics do not predict how effective a reduced test suite is in the future. Researchers need to either propose new TSR techniques that produce more effective reduced test suites or develop better metrics for predicting the effectiveness of reduced test suites. Second, this dissertation proposes a new transformation to improve regression testing cost when using a modern build system by optimizing the placement of tests, implemented in a technique called TestOptimizer. Modern build systems treat a software project as a group of inter-dependent modules, including test modules that contain only tests. As such, when developers make a change, the build system can use a developer-specified dependency graph among modules to determine which test modules are affected by any changed modules and to run only tests in the affected test modules. However, wasteful test executions are a problem when using build systems this way. Suboptimal placements of tests, where developers may place some tests in a module that has more dependencies than the test actually needs, lead to running more tests than necessary after a change.
    TestOptimizer analyzes a project and proposes moving tests to reduce the number of test executions triggered over time by developer changes. Evaluation of TestOptimizer on five large proprietary projects at Microsoft shows that the suggested test movements can reduce test executions by 21.7 million (17.1%) across all evaluation projects. Developers accepted and intend to implement 84.4% of the reported suggestions. Third, to make regression testing more reliable, this dissertation proposes iFixFlakies, a framework for fixing a prominent kind of flaky tests: order-dependent tests. Order-dependent tests pass or fail depending on the order in which the tests are run. Intuitively, order-dependent tests fail either because they need another test to set up the state for them to pass, or because some other test pollutes the state before they are run, and the polluted state makes them fail. The key insight behind iFixFlakies is that test suites often already have tests, which we call helpers, that contain the logic for setting/resetting the state needed for order-dependent tests to pass. iFixFlakies searches a test suite for these helpers and then recommends patches for order-dependent tests using code from the helpers. Evaluation of iFixFlakies on 137 truly order-dependent tests from a public dataset shows that 81 of them have helpers, and iFixFlakies can fix all 81. Furthermore, among our GitHub pull requests for 78 of these order-dependent tests (3 of the 81 had already been fixed), developers accepted 38; the remaining ones are still pending, and none have been rejected so far.
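    The order-dependent pattern targeted by iFixFlakies can be illustrated with a self-contained toy example (the tests, shared state, and helper below are invented; iFixFlakies itself operates on Java/JUnit suites): a polluter leaves shared state dirty, a victim fails when run after it, and a helper containing state-resetting logic is exactly the ingredient mined to patch the victim.

        # Hedged toy illustration of an order-dependent test and a helper that
        # fixes it (invented example; iFixFlakies works on Java/JUnit suites).
        cache = {}  # shared state that tests implicitly depend on

        def test_polluter():
            cache["user"] = "admin"           # pollutes shared state
            assert cache["user"] == "admin"

        def test_victim():
            assert "user" not in cache        # fails if run after test_polluter

        def helper_reset_cache():
            cache.clear()                     # state-resetting logic a helper provides

        def run(tests):
            results = []
            for t in tests:
                try:
                    t()
                    results.append((t.__name__, "pass"))
                except AssertionError:
                    results.append((t.__name__, "FAIL"))
            return results

        print(run([test_victim, test_polluter]))                      # both pass
        cache.clear()
        print(run([test_polluter, test_victim]))                      # victim FAILS
        cache.clear()
        print(run([test_polluter, helper_reset_cache, test_victim]))  # reset fixes it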

    Remediation simulation/optimization demonstrations

    Applications of simulation/optimization (S/O) software to develop contamination remediation strategies include formal remediation optimization using heads and gradients (hydraulics-based) and concentrations (risk-based) as constraints. The six reported cases involve pump-and-treat systems, or pump, treat and re-inject systems, together termed PAT systems. We used S/O modeling to perform hydraulic optimization for two of the sites and transport optimization for four. For four of the six sites, other parties used normal simulation (S) modeling alone to develop pumping strategies. Comparing the S/O model-developed strategies with the S model-developed strategies showed S/O modeling benefits ranging up to: (a) 25 percent reduction in construction cost and 20 percent reduction in O&M costs; (b) 160 percent increase in mass removal; (c) 62.5 percent reduction in the number of extraction wells; and (d) 25 percent reduction in new wells with a six percent increase in mass removal. The earlier that S/O modeling was used in the design process, and the more freedom given to the S/O model, the greater the benefit.

    Empirical Mathematical Model of Microprocessor Sensitivity and Early Prediction to Proton and Neutron Radiation-Induced Soft Errors

    A mathematical model is described to predict microprocessor fault tolerance under radiation. The model is empirically trained by combining data from simulated fault-injection campaigns and radiation experiments, both with protons (at the National Center of Accelerators (CNA) facilities, Seville, Spain) and with neutrons (at the Los Alamos Neutron Science Center (LANSCE) Weapons Neutron Research Facility, Los Alamos, USA). The sensitivity to soft errors of different blocks of commercial processors is identified to estimate the reliability of a set of programs that had previously been optimized, hardened, or both. The results showed a standard error under 0.1 for the Advanced RISC Machines (ARM) processor and 0.12 for the MSP430 microcontroller. This work was supported in part by the Spanish MINECO under Project ESP-2015-68245-C4-3-P and Project ESP-2015-68245-C4-4-P.
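    The abstract does not give the model's form, so the snippet below is only a generic, assumed way of combining per-block fault-injection sensitivity with measured cross-sections to estimate a program-level failure rate; all block names, numbers, and the combination rule are illustrative assumptions, not the paper's empirical model.

        # Hedged, generic sketch of combining per-block fault-injection
        # sensitivity with device cross-sections to predict a program's
        # soft-error rate. All values and the combination rule are assumed.
        blocks = {
            # block: (cross-section per bit [cm^2], bits,
            #         fraction of injected faults that caused program failure)
            "register_file": (1.0e-14, 1024, 0.35),
            "sram":          (1.2e-14, 65536, 0.08),
            "flip_flops":    (0.8e-14, 4096, 0.20),
        }
        fluence = 1.0e11  # particles/cm^2, illustrative proton/neutron fluence

        program_cross_section = sum(
            sigma_bit * bits * p_fail for sigma_bit, bits, p_fail in blocks.values()
        )
        expected_failures = program_cross_section * fluence
        print(f"predicted failure cross-section: {program_cross_section:.3e} cm^2")
        print(f"expected failures at given fluence: {expected_failures:.2f}")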

    Systolic genetic search, a parallel metaheuristic for GPUs

    The use of graphics processing units (GPUs) for solving general-purpose problems has grown rapidly in recent years, driven by their wide availability, their low cost and their inherently parallel architecture, as well as by the emergence of general-purpose programming languages that have eased application development on these platforms. In this context, the design of new parallel algorithms that can benefit from GPUs is a promising and interesting line of research. Metaheuristics are stochastic algorithms capable of finding very accurate (often optimal) solutions to optimization problems in reasonable time. However, since many optimization problems involve tasks that demand large computational resources and the instances currently being tackled are becoming very large, even metaheuristics can be computationally very expensive. In this scenario, parallelism emerges as a successful alternative to accelerate the search performed by this kind of algorithm. Besides reducing execution time, parallel metaheuristics are often able to improve the quality of the results obtained by traditional sequential algorithms. Although GPUs have also been an inspiring domain for research on parallel metaheuristics, most previous work aimed at porting an existing family of algorithms to this new kind of hardware. As a consequence, many publications focus on showing the execution-time savings achieved by running the existing parallel variants of metaheuristics on GPUs. In other words, despite the considerable volume of work on this topic, few novel ideas have been proposed that design new algorithms and/or parallel models explicitly exploiting the high degree of parallelism available in GPU architectures. This thesis addresses the design of an innovative parallel optimization algorithm called Systolic Genetic Search (SGS), which combines ideas from the fields of metaheuristics and systolic computing. SGS, like systolic computing, is inspired by the same biological phenomenon: the systolic contraction of the heart that makes blood circulation possible. In SGS, solutions circulate synchronously through a grid of cells. When two solutions meet in a cell, adapted evolutionary operators are applied to generate new solutions that continue moving through the grid. The implementation of this new proposal takes particular advantage of the specific characteristics of GPUs. An extensive experimental analysis, considering several classical benchmark problems and two real-world problems from the Software Engineering area, shows that the proposed algorithm is highly effective, finding optimal or near-optimal solutions in short execution times. Moreover, the numerical results obtained by SGS are competitive with state-of-the-art results for the two real-world problems considered.
    In addition, the parallel GPU implementation of SGS achieves high performance, obtaining large reductions in execution time with respect to the sequential implementation and scaling well as instances of increasing size are considered. A theoretical analysis of the search capabilities of SGS has also been carried out to understand how certain design aspects of the algorithm affect its numerical results. This analysis sheds light on aspects of the behaviour of SGS that can be used to improve the design of the algorithm in future variants.
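    A CPU-only sketch of the systolic idea described above is given below, applied to the toy OneMax problem; the grid size, operators, and parameters are invented, and the real SGS implementation exploits GPU parallelism rather than nested Python loops:

        # Hedged CPU sketch of Systolic Genetic Search on a toy OneMax problem
        # (grid size, operators and parameters invented; real SGS runs on GPUs).
        import random

        random.seed(1)
        N, GRID, STEPS = 32, 4, 40   # bitstring length, grid side, synchronous steps

        def fitness(s):              # OneMax: count of 1 bits
            return sum(s)

        def crossover(a, b):
            cut = random.randrange(1, N)
            return a[:cut] + b[cut:]

        def mutate(s):
            i = random.randrange(N)
            return s[:i] + [1 - s[i]] + s[i + 1:]

        # two streams of solutions flowing through the grid: one per row, one per column
        rows = [[random.randint(0, 1) for _ in range(N)] for _ in range(GRID)]
        cols = [[random.randint(0, 1) for _ in range(N)] for _ in range(GRID)]

        for step in range(STEPS):
            for i in range(GRID):    # every cell (i, j) works in the same step
                for j in range(GRID):
                    child = mutate(crossover(rows[i], cols[j]))
                    if fitness(child) > fitness(rows[i]):
                        rows[i] = child   # offspring replaces the weaker stream member
                    elif fitness(child) > fitness(cols[j]):
                        cols[j] = child
            rows = rows[1:] + rows[:1]    # synchronous circulation through the grid
            cols = cols[1:] + cols[:1]

        best = max(rows + cols, key=fitness)
        print("best fitness:", fitness(best), "of", N)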

    Multifactor Algorithm for Test Case Selection and Ordering

    Regression testing is expensive and therefore calls for optimization. Typically, test case optimization means selecting a reduced subset of test cases or prioritizing test cases so that potential faults are detected earlier. Many earlier studies relied on heuristic-dependent mechanisms to attain optimality while reducing or prioritizing test cases; however, those studies lacked systematic procedures for handling tied test cases. Moreover, evolutionary algorithms such as genetic algorithms often help to reduce test cases while simultaneously decreasing computational runtime, but they fall short when fault detection capability must be examined along with other parameters. Motivated by this, the current research proposes a multifactor algorithm that incorporates genetic operators and additional powerful features. A factor-based prioritizer is introduced to properly handle tied test cases that emerge during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the approach to identify stable test cases for processing. The effectiveness of the outcome of the proposed minimization approach is analysed and compared with a specific rule-based heuristic method and a standard genetic methodology. Intra-validation of the results of the reduction procedure is performed graphically. For the proposed prioritization scheme, the study contrasts randomly generated sequences with the procured re-ordered test sequences over 10 benchmark programs. Experimental analysis revealed that the proposed system achieved a reduction of 35-40% in testing effort by identifying and executing stable and coverage-effective test cases at an earlier phase.
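    The factor-based tie handling can be sketched as a composite sort key; the factors, their order, and the test data below are invented for illustration and do not reproduce the paper's algorithm:

        # Hedged sketch of multifactor test prioritization with tie-breaking
        # (factors and data invented for illustration; not the paper's algorithm).
        tests = [
            # name, statements covered, execution cost (s), historical faults found
            ("t1", 120, 4.0, 2),
            ("t2", 120, 1.5, 1),   # tied with t1 on coverage -> broken by fault history
            ("t3", 200, 9.0, 0),
            ("t4", 80,  0.5, 3),
        ]

        def priority_key(t):
            name, coverage, cost, faults = t
            # primary factor: coverage (descending); tie-breakers: past faults
            # (descending), then execution cost (ascending)
            return (-coverage, -faults, cost)

        ordered = sorted(tests, key=priority_key)
        print([name for name, *_ in ordered])  # ['t3', 't1', 't2', 't4']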