
    A survey on test suite reduction frameworks and tools

    Software testing is a widely accepted practice that ensures the quality of a System under Test (SUT). However, the gradual increase in test suite size demands a large portion of the testing budget and time. Test Suite Reduction (TSR) is considered a promising approach to deal with the test suite size problem. Moreover, full automation support is highly recommended for software testing to adequately meet the challenges of a resource-constrained testing environment. Several TSR frameworks and tools have been proposed to address the test-suite size problem efficiently. The main objective of this paper is to comprehensively review state-of-the-art TSR frameworks and highlight their strengths and weaknesses. Furthermore, the paper devises a detailed thematic taxonomy to classify the existing literature, which helps in understanding the underlying issues and proofs of concept. The paper also investigates critical aspects and related features of TSR frameworks and tools against a set of defined parameters, and elaborates the testing domains and approaches followed by the extant TSR frameworks. The results reveal that the majority of TSR frameworks focus on randomized unit testing, and that a considerable number lack support for multi-objective optimization. Moreover, no generalized framework exists that is effective for testing applications from any programming domain. Conversely, Integer Linear Programming (ILP) based TSR frameworks provide optimal solutions to multi-objective optimization problems and improve execution time by running multiple ILP instances in parallel. The study concludes with new insights and provides an unbiased view of the state-of-the-art TSR frameworks. Finally, we identify open research issues for further investigation toward more efficient TSR frameworks.
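
    The abstract does not spell out the ILP formulation it refers to; the following is a minimal sketch of how a single-objective test-suite reduction is commonly encoded as an ILP (minimum-cost set cover), using the PuLP modelling library and a hypothetical coverage matrix and per-test costs. Additional objectives (e.g. fault-detection ability) are often folded into the same objective function as weighted terms.

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # Hypothetical coverage data: which tests satisfy which requirements.
    coverage = {
        "r1": ["t1", "t3"],
        "r2": ["t2"],
        "r3": ["t1", "t2", "t4"],
    }
    cost = {"t1": 3.0, "t2": 1.0, "t3": 2.0, "t4": 1.5}  # e.g. execution time per test

    prob = LpProblem("test_suite_reduction", LpMinimize)
    keep = {t: LpVariable(f"keep_{t}", cat=LpBinary) for t in cost}

    # Objective: minimise the total cost of the retained tests.
    prob += lpSum(cost[t] * keep[t] for t in cost)

    # Constraint: every requirement must remain covered by at least one retained test.
    for req, tests in coverage.items():
        prob += lpSum(keep[t] for t in tests) >= 1

    prob.solve()
    reduced_suite = sorted(t for t in cost if keep[t].value() == 1)
    print("Reduced suite:", reduced_suite)  # ['t2', 't3'] for this toy data
    ```

    Running several such models in parallel, as the abstract suggests, is possible because each ILP instance (for example, one per subsystem or per objective weighting) is independent of the others.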

    A model to compare cloud and non-cloud storage of Big Data

    When comparing Cloud and non-Cloud storage it can be difficult to ensure that the comparison is fair. In this paper we examine the process of setting up such a comparison and the metric used. Performance comparisons of Cloud and non-Cloud systems, deployed for biomedical scientists, were conducted to identify improvements in efficiency and performance. Prior to the experiments, network latency, file size and job failures were identified as factors that degrade performance, and experiments were conducted to understand their impact. Organizational Sustainability Modeling (OSM) is used before, during and after the experiments to ensure fair comparisons. OSM defines the actual and expected execution time and risk control rates, and is used to understand key outputs of both the Cloud and non-Cloud experiments. Forty experiments on both Cloud and non-Cloud systems were undertaken across two case studies. The first case study focused on transferring and backing up 10,000 files of 1 GB each, and the second on transferring and backing up 1,000 files of 10 GB each. The results showed that, first, the actual and expected execution times on the Cloud were lower than on the non-Cloud system. Second, there was more than 99% consistency between the actual and expected execution time on the Cloud, while no comparable consistency was found on the non-Cloud system. Third, the improvement in efficiency was higher on the Cloud than on the non-Cloud system. OSM is the metric used to analyze the collected data, providing synthesis and insight for the data analysis and visualization of the two case studies.
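
    The abstract does not give the OSM formulas, so the following is a purely illustrative sketch (not the OSM definition) of comparing actual against expected execution times for the two systems and reporting a simple percentage consistency of the kind referred to above; all figures are hypothetical.

    ```python
    # Hypothetical timings (seconds) for the same backup workload on each system;
    # the numbers below are illustrative only, not results from the paper.
    actual   = {"Cloud": 1250.0, "non-Cloud": 2100.0}
    expected = {"Cloud": 1238.0, "non-Cloud": 1700.0}

    for system in actual:
        a, e = actual[system], expected[system]
        # Percentage agreement of the measured time with the expected time.
        consistency = 100.0 * (1.0 - abs(a - e) / e)
        print(f"{system}: actual={a:.0f}s expected={e:.0f}s consistency={consistency:.1f}%")
    ```

    With these illustrative figures the Cloud system shows roughly 99% agreement between actual and expected times, whereas the non-Cloud system falls well short, mirroring the pattern reported in the abstract.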