17 research outputs found

    A survey on test suite reduction frameworks and tools

    No full text
    Software testing is a widely accepted practice that ensures the quality of a System under Test (SUT). However, the gradual growth of test suite size demands a large portion of the testing budget and time. Test Suite Reduction (TSR) is considered a promising approach to deal with the test-suite size problem. Moreover, full automation support is highly recommended for software testing to adequately meet the challenges of a resource-constrained testing environment. Several TSR frameworks and tools have been proposed to address the test-suite size problem efficiently. The main objective of this paper is to comprehensively review state-of-the-art TSR frameworks and highlight their strengths and weaknesses. Furthermore, the paper devises a detailed thematic taxonomy to classify the existing literature, which helps in understanding the underlying issues and proofs of concept. Moreover, the paper investigates critical aspects and related features of TSR frameworks and tools based on a set of defined parameters. We also elaborate on the various testing domains and approaches followed by the extant TSR frameworks. The results reveal that the majority of TSR frameworks focus on randomized unit testing, and a considerable number of frameworks lack support for multi-objective optimization. Moreover, there is no generalized framework effective for testing applications developed in any programming domain. Conversely, Integer Linear Programming (ILP) based TSR frameworks provide an optimal solution for multi-objective optimization problems and improve execution time by running multiple ILP instances in parallel. The study concludes with new insights and provides an unbiased view of the state-of-the-art TSR frameworks. Finally, we present open research issues for further investigation toward more efficient TSR frameworks.
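The coverage-based reduction problem the survey describes can be sketched as follows. An ILP solver would find the minimal subset exactly; the snippet below uses the standard greedy set-cover approximation instead. All test names and coverage sets are hypothetical.

```python
# Greedy test suite reduction: keep picking the test that covers the
# most still-uncovered requirements until everything is covered.

def reduce_suite(coverage):
    """coverage: dict mapping test name -> set of covered requirements.
    Returns a reduced list of tests preserving total coverage."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Break ties by name so the result is deterministic.
        best = max(sorted(coverage), key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

suite = {
    "testA": {1, 2, 3},
    "testB": {2, 3},      # redundant: subsumed by testA
    "testC": {4},
}
print(reduce_suite(suite))  # → ['testA', 'testC']
```

An ILP formulation would instead minimise the number of selected tests subject to one covering constraint per requirement, which is what makes multi-objective variants and parallel solving attractive.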

    Applying TSR techniques over large test suites

    Master's thesis in Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2020. With the growing need to assure the quality of software created today, it has become increasingly necessary to devote part of development time (sometimes more than half) to testing the developed code. Despite its high importance, this task is sometimes ignored or neglected because it can be monotonous and tiring work. This is an area thoroughly explored by researchers who try, in various ways, to automate parts of this process and to reduce the time and resources it requires. In particular, test suite reduction techniques, and tools implementing them, have been proposed to streamline this task by eliminating unnecessary test cases without compromising the coverage achieved by the original test suite, along with test case prioritization techniques, which reorder test suites so that the most relevant tests (according to a given criterion) run first, increasing their ability to meet some objective early. In this context, regression tests stand out: they verify that existing functionality still works after changes to the software. They are highlighted here because they are considered the most tiresome kind of software testing, carrying high costs without clear short-term gains, and consequently they benefit particularly from the techniques described above. With the emergence of these techniques, comparisons inevitably arise in the attempt to choose those best suited to different needs.
The goal of this work is to answer two questions that are particularly interesting given the current state of the art: "What is the best test suite reduction tool?" (according to predefined parameters) and "If a test case prioritization tool is modified, can it efficiently replace a test suite reduction tool?". To carry out this dissertation, a group of test suite reduction tools was studied in order to gain a better sense of the current state of the art. Although eleven tools were initially found that might serve this purpose, the experiments performed, as well as the properties of some tools, ruled out the use of most of them. Three tools were therefore considered: EvoSuite, Testler and Randoop. To enrich the scope of the work, a test case prioritization tool, Kanonizo, was also studied. To answer the questions above, we developed a framework that integrates the selected set of test suite reduction tools and that, given a set of open-source projects, applies each tool's techniques to each project, thereby reducing its tests. The framework then uses the prioritization tool Kanonizo to obtain a list of the tests ordered by importance; finally, the least important tests are removed, according to a predefined threshold. Next, using a code analysis tool, in this case OpenClover, metrics are collected for each test suite, both for the original projects and for each of the reduced ones. These metrics are then used to evaluate the effectiveness of the reduction performed by each tool. That effectiveness can be influenced by several factors.
In the case of our framework, the factors considered were the size of the test file, the number of tests, their execution time, the percentage of total code coverage, and the percentage of branch coverage. For this reason, we adopted a Multi-Criteria Decision-Making methodology, specifically the Analytic Hierarchy Process, which, given several criteria and the relative importance assigned between them, derives the priority each criterion should have in pursuing a common goal, that is, the weight each criterion carries in computing each tool's score. Once the framework was finished and refined, experiments were run that allowed us to analyse the effectiveness of each tool. Given how easy the framework is to configure, several analyses of the reductions performed by the tools were carried out, varying the importance of each criterion in order to see how these weights influenced the choice of the best tool. The scores of each reduction tool were computed for six different scenarios: one replicating "real-world" needs, giving more importance to test execution time and achieved coverage than to the size of the tests themselves; and one placing all the importance on each of the defined sub-criteria in turn. Each scenario was run twice, once with tests generated by EvoSuite and once with the pre-existing tests, in order to check whether the reduction tools would behave similarly on automatically generated and human-written tests. Setting aside the prioritization tool, and given that EvoSuite could only be used with tests it generated itself, the reduction tool with the best score was Testler, which answers our first research question.
As for the second question, although the results showed unambiguously that Kanonizo obtained better scores than the other tools analysed, this question is susceptible to the influence of several factors. In the context of the experiments performed, we can say that Kanonizo can replace any of the reduction tools and, judging by the scores, do a more effective job. In a broader context, however, it becomes difficult to answer this question without considering a larger number of test suite reduction tools. Moreover, depending on the algorithm used by the prioritization tool, the reduction we perform may follow no criterion at all (in the case of random algorithms), which would essentially mean deleting methods at random, potentially causing large losses in code coverage. When we speak of "replacing" a reduction tool with a prioritization tool using an approach similar to ours, it is better to use prioritization algorithms that target some criterion, such as giving priority to tests that cover lines of code not yet covered, thereby increasing the probability that the least important tests are redundant and, consequently, deleted. The framework was built to be easily modifiable and extensible, offering a simple way to integrate new test suite reduction tools and enabling new comparisons as such tools emerge. In addition, the criteria taken into account when computing each tool's score, and their relative importance, are simple to configure, making it easy to compute scores for a variety of scenarios. Throughout this work, the biggest problem encountered was the study of the reduction tools.
Besides some of them not being open-source, many of those that were did not meet our requirements for integration into the framework, whether because of programming language or approach, and the documentation found was often outdated or wrong, which hindered the whole process. With the growing need to assure the quality of the software created nowadays, it has become increasingly necessary to devote part of the development time (sometimes as much as half) to testing the developed code. Despite the high importance of this task, it is an activity that is sometimes ignored or neglected, due to it being, occasionally, a monotonous and tiring job. This is an area thoroughly investigated by researchers who try to automate some parts of this process, while also reducing the time and resources required for the task. In particular, we highlight the proposal of test reduction techniques, and tools that implement them, with the goal of detecting unnecessary test cases without compromising the coverage given by the original test suite. The main objective of this work consists in answering two questions: "What is the best Test Suite Reduction tool?" (according to some criterion) and "Can a Test Case Prioritization tool be adapted to effectively replace a Test Suite Reduction tool?". To answer these questions, we developed a framework that integrates a set of Test Suite Reduction and Test Case Prioritization tools, with the possibility of integrating more, and that applies each of their techniques to a given set of open-source Java projects. We then gather test execution data about these projects and compare them to compute a score for each Test Suite Reduction tool, which is used to rank them. This score is obtained by applying a Multi-Criteria Decision-Making methodology, the Analytic Hierarchy Process, to weigh several chosen code metrics in the final score.
To answer the second question, we integrated a Test Case Prioritization tool into our framework, which gives us a list of the least important test cases; we then remove those tests, simulating a reduction.
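The AHP scoring step described above can be sketched as follows. This uses the common geometric-mean approximation of the principal eigenvector to turn a pairwise comparison matrix into criterion weights; the criteria and judgement values are hypothetical, not the dissertation's actual configuration.

```python
# Analytic Hierarchy Process weighting via the geometric-mean method.
import math

def ahp_weights(pairwise):
    """pairwise[i][j] = how much more important criterion i is than j.
    Returns normalised priority weights, one per criterion."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical judgements: execution time is 3x as important as test
# count and 2x as important as coverage (reciprocals fill the rest).
criteria = ["exec_time", "test_count", "coverage"]
matrix = [
    [1,     3,   2],
    [1 / 3, 1,   1 / 2],
    [1 / 2, 2,   1],
]
for name, w in zip(criteria, ahp_weights(matrix)):
    print(f"{name}: {w:.3f}")
```

Each tool's final score is then a weighted sum of its normalised metric values, so changing the comparison matrix reproduces the different scenarios the dissertation evaluates.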

    Doctor of Philosophy

    Compilers are indispensable tools for developers, and we expect them to be correct. However, compiler correctness is very hard to reason about, partly because of the daunting complexity of compilers. In this dissertation, I explain how we constructed a random program generator, Csmith, and used it to find hundreds of bugs in widely used open-source compilers such as the GNU Compiler Collection (GCC) and the LLVM Compiler Infrastructure (LLVM). The success of Csmith depends on its ability to be expressive and unambiguous at the same time. Csmith is composed of a code generator and a GTAV (Generation-Time Analysis and Validation) engine, which work together to produce expressive yet unambiguous random programs. The expressiveness of Csmith is attributed to the code generator, while the unambiguity is ensured by GTAV, which efficiently performs program analyses, such as points-to analysis and effect analysis, to avoid ambiguities caused by undefined or unspecified behaviors. During our 4.25 years of testing, Csmith found over 450 bugs in GCC and LLVM. We analyzed the bugs by putting them into categories, studying their root causes, locating them in the compilers' source code, and evaluating their importance. We believe these analysis results are useful to future random testers as well as to compiler writers and users.
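The workflow behind this kind of random differential testing can be sketched as follows. The voting logic is a plausible reconstruction, not Csmith's actual harness; the compiler names and the driver function are illustrative, and a real harness would feed Csmith-generated programs through it.

```python
# Differential compiler testing: compile the same (unambiguous) program
# with several compilers, run each binary, and flag disagreements.
import os
import subprocess
import tempfile

def find_disagreements(outputs):
    """outputs: dict of compiler name -> program output. Returns the
    set of compilers whose output differs from the majority vote."""
    votes = {}
    for out in outputs.values():
        votes[out] = votes.get(out, 0) + 1
    majority = max(votes, key=votes.get)
    return {cc for cc, out in outputs.items() if out != majority}

def run_through(compiler, source_path):
    """Compile one test program and return its runtime stdout."""
    with tempfile.TemporaryDirectory() as d:
        exe = os.path.join(d, "a.out")
        subprocess.run([compiler, source_path, "-O2", "-o", exe], check=True)
        return subprocess.run([exe], capture_output=True, text=True).stdout

# Three hypothetical compilers disagreeing on a checksum: the odd one
# out is the bug suspect (assuming the program has no undefined
# behavior, which is exactly what Csmith's GTAV engine guarantees).
print(find_disagreements({"cc1": "5", "cc2": "5", "cc3": "7"}))  # → {'cc3'}
```

Majority voting only localises the suspect; confirming the bug still requires reducing the test case and inspecting the miscompiled code.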

    XARK: an extensible framework for automatic recognition of computational kernels

    This is a post-peer-review, pre-copyedit version of an article published in ACM Transactions on Programming Languages and Systems. The final authenticated version is available online at: http://dx.doi.org/10.1145/1391956.1391959 [Abstract] The recognition of program constructs that are frequently used by software developers is a powerful mechanism for optimizing and parallelizing compilers to improve the performance of the object code. The development of techniques for automatic recognition of computational kernels such as inductions, reductions and array recurrences was an intensive research area in compiler technology during the 1990s. This article presents a new compiler framework that, unlike previous techniques that focus on specific and isolated kernels, recognizes a comprehensive collection of computational kernels that appear frequently in full-scale real applications. The XARK compiler operates on top of the Gated Single Assignment (GSA) form of a high-level intermediate representation (IR) of the source code. Recognition is carried out through a demand-driven analysis of this high-level IR at two different levels. First, the dependences between the statements that compose the strongly connected components (SCCs) of the data-dependence graph of the GSA form are analyzed. As a result of this intra-SCC analysis, the computational kernels corresponding to the execution of the statements of the SCCs are recognized. Second, the dependences between statements of different SCCs are examined in order to recognize more complex kernels that result from combining simpler kernels in the same code. Overall, the XARK compiler builds a hierarchical representation of the source code as kernels and dependence relationships between those kernels. This article describes in detail the collection of computational kernels recognized by the XARK compiler, and presents the internals of the recognition algorithms.
The design of the algorithms enables the recognition capabilities of XARK to be extended to cope with new kernels, and provides an advanced symbolic analysis framework on which other compiler techniques can run on demand. Finally, extensive experiments showing the effectiveness of XARK on a collection of benchmarks from different application domains are presented. In particular, the SparsKit-II library for the manipulation of sparse matrices, the Perfect benchmarks, the SPEC CPU2000 collection and the PLTMG package for solving elliptic partial differential equations are analyzed in detail.
Ministerio de Educación y Ciencia; TIN2004-07797-C02
Ministerio de Educación y Ciencia; TIN2007-67537-C03
Xunta de Galicia; PGIDIT05PXIC10504PN
Xunta de Galicia; PGIDIT06PXIB105228P
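The self-dependence idea behind kernel recognition can be illustrated with a deliberately tiny sketch: a loop-body assignment is a recurrence candidate when the assigned variable recurs on its own right-hand side. XARK does this properly on the GSA form via SCC analysis; this string-level classifier only illustrates the distinction between inductions and reductions.

```python
# Toy kernel classifier: induction (x = x + constant) vs. reduction
# (x = x op expr) vs. anything else. Purely illustrative.
import re

ASSIGN = re.compile(r"^(\w+)\s*=\s*(\w+)\s*([+*])\s*(.+)$")

def classify(stmt):
    m = ASSIGN.match(stmt.strip())
    if not m:
        return "other"
    lhs, first, op, rest = m.groups()
    if lhs != first:
        return "other"          # no self-dependence: not a recurrence
    if op == "+" and rest.strip().lstrip("-").isdigit():
        return "induction"      # i = i + c with a constant step
    return "reduction"          # s = s + a[i], p = p * x, ...

print(classify("i = i + 1"))     # → induction
print(classify("s = s + a[i]"))  # → reduction
print(classify("t = a[i] * 2"))  # → other
```

Real recognition must also combine kernels across SCCs, for example an induction feeding an array subscript inside a reduction, which is where the hierarchical intra- and inter-SCC analysis pays off.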

    Increasing the efficacy of automated instruction set extension

    The use of Instruction Set Extension (ISE) in customising embedded processors for a specific application has been studied extensively in recent years. The addition of a set of complex arithmetic instructions to a baseline core has proven to be a cost-effective means of meeting design performance requirements. This thesis proposes and evaluates a reconfigurable ISE implementation called "Configurable Flow Accelerators" (CFAs), a number of refinements to an existing Automated ISE (AISE) algorithm called "ISEGEN", and the effects of source form on AISE. The CFA is demonstrated repeatedly to be a cost-effective design for ISE implementation. A temporal partitioning algorithm called "staggering" is proposed and demonstrated to reduce the area of a CFA implementation by 37% on average for only an 8% reduction in acceleration. The thesis then turns to concerns within the ISEGEN AISE algorithm. A methodology for finding a good static heuristic weighting vector for ISEGEN is proposed and demonstrated; up to 100% of merit is shown to be lost or gained through the choice of vector. ISEGEN early termination is introduced and shown to improve the runtime of the algorithm by up to 7.26x, and 5.82x on average. An extension to the ISEGEN heuristic to account for pipelining is proposed and evaluated, increasing acceleration by up to an additional 1.5x. An energy-aware heuristic is added to ISEGEN, which reduces the energy used by a CFA implementation of a set of ISEs by an average of 1.6x, and up to 3.6x. This result directly contradicts the frequently espoused notion that "bigger is better" in ISE. The last stretch of work in this thesis is concerned with source-level transformation: the effect of changing the representation of the application on the quality of the combined hardware-software solution. A methodology for combined exploration of source transformation and ISE is presented, and demonstrated to improve the acceleration of the result by an average of 35% versus ISE alone. Floating point is demonstrated to perform worse than fixed point, for all design concerns and applications studied here, regardless of the ISEs employed.

    Efficient cross-architecture hardware virtualisation

    Hardware virtualisation is the provision of an isolated virtual environment that represents real physical hardware. It enables operating systems, or other system-level software (the guest), to run unmodified in a "container" (the virtual machine) that is isolated from the real machine (the host). There are many use-cases for hardware virtualisation that span a wide range of end-users. For example, home users wanting to run multiple operating systems side-by-side (such as running a Windows® operating system inside an OS X environment) will use virtualisation to accomplish this. In research and development environments, developers building experimental software and hardware want to prototype their designs quickly, and so will virtualise the platform they are targeting to isolate it from their development workstation. Large-scale computing environments employ virtualisation to consolidate hardware, enforce application isolation, and migrate existing servers or provision new ones. However, the majority of these use-cases call for same-architecture virtualisation, where the architectures of the guest and the host machines match—a situation that can be accelerated by the hardware-assisted virtualisation extensions present on modern processors. But there is significant interest in virtualising the hardware of different architectures on a host machine, especially in the architectural research and development worlds. Typically, the instruction set architecture of a guest platform will be different from the host machine's: an ARM guest on an x86 host will use an ARM instruction set, whereas the host uses the x86 instruction set. Therefore, to enable this cross-architecture virtualisation, each guest instruction must be emulated by the host CPU—a potentially costly operation. This thesis presents a range of techniques for accelerating this instruction emulation, improving over a state-of-the-art instruction set simulator by 2.64x.
But emulation of the guest platform's instruction set is not enough for full hardware virtualisation; in fact, it is just one challenge in a range of issues that must be considered. Specifically, another challenge is efficiently handling the way external interrupts are managed by the virtualisation system. This thesis shows that when employing efficient instruction emulation techniques, it is not feasible to arbitrarily divert control flow without consideration being given to the state of the emulated processor. Furthermore, it is shown that the virtualisation environment can behave incorrectly if particular care is not given to the point at which control flow is allowed to diverge. To solve this, a technique is developed that maintains efficient instruction emulation while correctly handling external interrupt sources. Finally, modern processors have built-in support for hardware virtualisation in the form of instruction set extensions that enable the creation of an abstract computing environment, indistinguishable from real hardware. These extensions enable guest operating systems to run directly on the physical processor, with minimal supervision from a hypervisor. However, they are geared towards same-architecture virtualisation, and as such are not immediately well suited for cross-architecture virtualisation. This thesis presents a technique for exploiting these existing extensions in a cross-architecture virtualisation setting, improving the performance of a novel cross-architecture virtualisation hypervisor over the state of the art by 2.5x.
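The baseline that these acceleration techniques improve on is the classic decode-and-dispatch interpreter: fetch a guest instruction, decode it, and execute its handler on the host. The three-instruction guest ISA below is hypothetical; a real emulator would also handle branches, memory, and the interrupt checks the thesis discusses, and would accelerate the loop with dynamic binary translation.

```python
# Decode-and-dispatch emulation of a toy guest instruction set.

def execute(program):
    """Interpret a toy guest program; regs is the guest register file."""
    regs = [0] * 4
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "movi":          # movi rd, imm
            rd, imm = args
            regs[rd] = imm
        elif op == "add":         # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "halt":
            return regs
        pc += 1                   # a real emulator must also pick safe
                                  # points here to check for interrupts

guest = [
    ("movi", 0, 2),
    ("movi", 1, 3),
    ("add", 2, 0, 1),
    ("halt",),
]
print(execute(guest))  # → [2, 3, 5, 0]
```

The per-instruction dispatch overhead of this loop is exactly what makes arbitrary control-flow diversion (for interrupt delivery, say) delicate once the loop is replaced by translated host code.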