2,586 research outputs found

    Time-Space Efficient Regression Testing for Configurable Systems

    Configurable systems are systems that can be adapted by selecting from a set of options. They are prevalent, and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously developed technique, SPLat, which explores all configurations dynamically reachable from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes, whereas EvoSPLat for RTS prunes tests (not configurations) that are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising: we observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced the running time by 35%. Comparing EvoSPLat with sampling techniques, 2-wise was the most efficient, but it missed two bugs, whereas EvoSPLat detected all bugs and was, on average, four times faster than 6-wise.
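
    As a rough illustration of the pruning idea shared by both scenarios, the sketch below discards tests or configurations whose reachable code entities do not intersect the changed entities. It is not EvoSPLat's implementation; the entity names and mapping data are hypothetical.

```python
# Illustrative sketch of change-impact pruning for regression testing.
# This is NOT EvoSPLat itself; it only shows the generic idea of discarding
# tests or configurations whose reachable code entities do not overlap with
# the changed entities. All names and data below are hypothetical.

def prune_unimpacted(items_to_entities, changed_entities):
    """Keep only items (tests or configurations) that reach a changed entity."""
    changed = set(changed_entities)
    return {
        item: entities
        for item, entities in items_to_entities.items()
        if changed & set(entities)
    }

if __name__ == "__main__":
    # RTS-style view: map each test to the methods it executed in the last run.
    tests = {
        "testCheckout": {"Cart.add", "Cart.total"},
        "testLogin":    {"Auth.login", "Session.create"},
        "testSearch":   {"Index.query"},
    }
    # RCS-style view: map each configuration to the feature code it enables.
    configs = {
        "A+B": {"FeatureA.run", "FeatureB.run"},
        "A":   {"FeatureA.run"},
        "C":   {"FeatureC.run"},
    }
    changed = {"Cart.total", "FeatureC.run"}

    print(sorted(prune_unimpacted(tests, changed)))    # ['testCheckout']
    print(sorted(prune_unimpacted(configs, changed)))  # ['C']
```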

    Input Modeling Prioritization Using Statistically User Profile for Pairwise Test Case Generation with Constraints Handling

    Pairwise testing is a widely used software testing technique that reduces the size of the test suite while still detecting interactions that trigger the system's faults. In addition, pairwise test suites must be able to take constraints between input parameters and parameter values into account. In current practice, identifying and selecting input parameters and parameter values usually depends on tester skills, which might not be sufficient; modeling of input parameters and parameter values, and tools that guide and prioritize the selection of optimal input parameters and parameter values for the SUT, are also required. In this work, we present an approach for prioritizing the modeling of input parameters and parameter values using a statistical user profile. Our approach is implemented in a tool called UPPTCT, which provides the ability to handle constraints on input parameters and parameter values for pairwise testing in order to generate test cases. We conduct experiments to evaluate test case effectiveness and compare our tool with other renowned pairwise test generation and constraint handling tools. The experimental results show that our approach is significantly more efficient and effective than random testing, as a large portion of the reported defects related to the statistical user profile were caught by our approach. Furthermore, our tool performs better in some cases and produces comparable results when generating test cases from input parameters and parameter values, both with and without constraint handling.
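
    To make the pairwise-with-constraints idea concrete, here is a minimal greedy sketch that covers all valid parameter-value pairs while honoring a constraint predicate. It is not UPPTCT's algorithm; the parameters, values, and constraint are hypothetical.

```python
# Minimal, illustrative greedy pairwise test generation with constraint
# handling. Not the algorithm of any particular tool; suitable only for
# small input models, since it enumerates all valid full combinations.
from itertools import combinations, product

def pairs_of(test):
    """All (parameter, value) pairs covered by one full test case."""
    return set(combinations(sorted(test.items()), 2))

def generate_pairwise(parameters, is_valid):
    # Enumerate the valid full combinations (cartesian product + constraint).
    candidates = [dict(zip(parameters, values))
                  for values in product(*parameters.values())
                  if is_valid(dict(zip(parameters, values)))]
    # Every pair occurring in at least one valid combination must be covered.
    uncovered = set().union(*(pairs_of(t) for t in candidates))
    suite = []
    while uncovered:
        # Greedily pick the candidate covering the most uncovered pairs.
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

if __name__ == "__main__":
    params = {"os": ["linux", "windows"],
              "browser": ["firefox", "edge"],
              "db": ["mysql", "sqlite"]}
    # Hypothetical constraint: edge only runs on windows.
    constraint = lambda t: not (t["browser"] == "edge" and t["os"] == "linux")
    for test in generate_pairwise(params, constraint):
        print(test)
```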

    Selection of heterogeneous test environments for the execution of automated tests

    As software complexity grows, so does the size of the automated test suites that enable us to validate the expected behavior of the system under test. When that occurs, problems emerge for developers in the form of increased effort to manage the test process and longer execution time of test suites. Manually managing automated tests is especially problematic, as the recurring cost of guaranteeing that the automated tests (e.g., thousands) are correctly configured to execute on the available test environments (e.g., dozens or hundreds), on a regular basis and throughout the product's lifetime, may become huge, with unbearable human effort involved. This problem increases substantially when the system under test is a highly configurable product that needs to be validated in heterogeneous environments, especially when these target test environments also evolve frequently (e.g., new operating systems, new browsers, new mobile devices, ...). The execution time of these test suites also becomes a major problem, since it is not feasible to run every test suite on every possible configuration. Being an integral part of software development, testing needs to evolve and break free from conventional methods. This dissertation presents a technique that extends an existing algorithm for reducing the number of test executions, enabling test cases to be distributed over multiple heterogeneous test environments. The development, implementation, and validation of the technique presented in this dissertation were conducted in the industrial context of an international software house. Real development scenarios were used to conduct experiments and validations, and the results demonstrated that the proposed technique is effective in eliminating the human effort involved in test distribution.
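
    A rough sketch of the distribution step described above, assuming each test declares required capabilities and each environment advertises what it supports. This is not the dissertation's algorithm; the capability names and durations are hypothetical.

```python
# Illustrative sketch of distributing automated tests over heterogeneous test
# environments: each test goes to the least-loaded environment that satisfies
# its requirements. Environments, requirements, and timings are hypothetical.

def distribute(tests, environments):
    load = {env: 0 for env in environments}          # seconds of work per env
    assignment = {}
    for name, (duration, requires) in tests.items():
        compatible = [env for env, caps in environments.items()
                      if requires <= caps]            # subset check
        if not compatible:
            assignment[name] = None                  # no environment can run it
            continue
        target = min(compatible, key=lambda env: load[env])
        assignment[name] = target
        load[target] += duration
    return assignment, load

if __name__ == "__main__":
    environments = {
        "linux-chrome": {"linux", "chrome"},
        "win-edge":     {"windows", "edge"},
        "win-chrome":   {"windows", "chrome"},
    }
    # test name -> (estimated duration in seconds, required capabilities)
    tests = {
        "test_upload": (120, {"chrome"}),
        "test_print":  (60,  {"windows", "edge"}),
        "test_export": (90,  {"windows"}),
        "test_sync":   (30,  {"linux"}),
    }
    assignment, load = distribute(tests, environments)
    print(assignment)
    print(load)
```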

    Tools and Techniques Used for Prioritizing Test Cases in Regression Testing

    Testing is a very expensive task in terms of cost, effort, and time, and it is a necessary step of software development, because without testing the software cannot be completed. Regression testing is a type of software testing that is widely used in the software development and maintenance phases, and it occupies a large portion of the software maintenance budget. There are many software testing tools and techniques used to test software programs. This research paper describes testing approaches that reduce human effort and time in regression testing. Software systems change regularly during the development and maintenance phases. After software is modified, regression testing is applied to ensure that it behaves as intended and that the modifications do not negatively impact its original functionality. This paper focuses on improving the performance of regression testing by computing coverage data for evolving software using dataflow analysis and execution tracing.
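
    As a concrete example of using coverage data to prioritize regression tests, the sketch below applies the generic "additional greedy" ordering; it is not a tool from the paper, and the coverage data is hypothetical.

```python
# Illustrative sketch of coverage-based test case prioritization for regression
# testing ("additional greedy" ordering): the next test is always the one that
# adds the most not-yet-covered statements. Coverage data is hypothetical.

def prioritize(coverage):
    """Order tests so each next test adds the most uncovered statements."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

if __name__ == "__main__":
    # test -> set of statement (or branch) ids it exercises, e.g. from tracing
    coverage = {
        "t1": {1, 2, 3},
        "t2": {3, 4},
        "t3": {5, 6, 7, 8},
        "t4": {1, 5},
    }
    print(prioritize(coverage))   # ['t3', 't1', 't2', 't4']
```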

    Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, referred to as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites increases the need for a general-purpose harvester that can be applied to different settings and situations. To develop such a harvester, a number of issues should be considered. Among these issues, business domain features, the targeted websites' features, and the harvesting goals are the most influential ones. To consider all these elements in one big picture, a new concept, called the harvestability factor (HF), is introduced in this paper. The HF is defined as an attribute of a website (HF_w) or a harvester (HF_h) representing the extent to which the website can be harvested or the harvester can harvest. The comprising elements of these factors are the features of websites (for HF_w) or harvesters (for HF_h). These features are presented in this paper by gathering a number of them from the literature and introducing new ones through the authors' experiments. In addition to enabling designers of websites or harvesters to evaluate where their products stand from the harvesting perspective, the HF can act as a framework for designing general-purpose deep web harvesters. This framework fills a gap in the design of general-purpose harvesters by focusing on the detailed features of deep websites that affect harvesting processes. The features presented in this paper provide a thorough list of requirements for designing deep web harvesters, which, to the best of our knowledge, has not been done in the literature to this extent. To validate the effectiveness of the HF in practice, it is shown how the HF's elements can be applied to categorizing deep websites and how this is useful in designing a harvester. The harvester developed by the authors to run the experiments is also discussed in this paper.
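
    To illustrate how a harvestability factor might be operationalized, the sketch below aggregates per-feature scores into a single HF_w value used to categorize websites. The features, weights, and threshold are hypothetical illustrations, not the paper's definition.

```python
# A minimal sketch of computing a harvestability factor (HF_w) for a website
# as a weighted aggregation of feature scores. The paper defines HF
# conceptually; every feature, weight, and threshold here is hypothetical.

def harvestability(feature_scores, weights):
    """Weighted average of per-feature scores, each in [0, 1]."""
    total_weight = sum(weights[f] for f in feature_scores)
    return sum(feature_scores[f] * weights[f] for f in feature_scores) / total_weight

if __name__ == "__main__":
    weights = {"stable_urls": 3, "simple_forms": 2,
               "no_captcha": 2, "paginated_results": 1}
    site_a = {"stable_urls": 1.0, "simple_forms": 0.8,
              "no_captcha": 1.0, "paginated_results": 0.5}
    site_b = {"stable_urls": 0.2, "simple_forms": 0.4,
              "no_captcha": 0.0, "paginated_results": 1.0}
    for name, scores in [("site_a", site_a), ("site_b", site_b)]:
        hf = harvestability(scores, weights)
        category = "easy to harvest" if hf >= 0.7 else "needs a specialised harvester"
        print(f"{name}: HF_w = {hf:.2f} -> {category}")
```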

    Polyglot Programming in Applications Used for Genetic Data Analysis


    A Study of Efficiency Improvement in Test Automation for Electronic Invoicing Software Solutions

    Testing is the process of experimenting with or evaluating a system, either manually or automatically, to verify that it meets specified requirements or to identify differences between expected and observed results. Software testing, in particular, covers the dynamically performed verification activities that check the expected behavior of software, drawn from an effectively infinite space of usage scenarios, with a limited number of appropriately selected tests. Test automation is an important step of continuous deployment. Automation reduces bug-fixing costs through the test-early and test-often principles, and automation projects also increase quality and reliability. However, test automation itself also needs to run fast enough to keep up with testing. In this project, I aim to improve testing efficiency and methodology. During my internship, my supervisor and my team leader expected me to make improvements to the tests, find ways to make them faster, and identify what might be making the tests slower and how to eliminate those parts.
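
    As a small example of locating the slow parts of a test suite, the sketch below flags tests whose duration is far above the median; it is not the project's actual tooling, and the timings are hypothetical.

```python
# Illustrative sketch of finding slow tests from timing data so they can be
# optimised or split. Not the project's actual tooling; timings are hypothetical.
import statistics

def flag_slow_tests(durations, factor=2.0):
    """Flag tests whose duration exceeds `factor` times the median duration."""
    median = statistics.median(durations.values())
    return {name: secs for name, secs in durations.items()
            if secs > factor * median}

if __name__ == "__main__":
    durations = {           # test name -> last run duration in seconds
        "test_create_invoice": 4.2,
        "test_validate_vat":   1.1,
        "test_send_invoice":  19.8,   # likely waiting on an external service
        "test_archive":        3.9,
    }
    print(flag_slow_tests(durations))  # {'test_send_invoice': 19.8}
```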