8 research outputs found

    History Based Multi Objective Test Suite Prioritization in Regression Testing Using Genetic Algorithm

    Regression testing is an essential but expensive testing activity that occurs throughout the software development life cycle. Because regression testing requires the execution of many test cases, a test case prioritization process is needed to cope with resource constraints. Test case prioritization techniques schedule test cases in an order that increases the chance of early fault detection. In this paper we propose a genetic algorithm based prioritization technique that uses the historical information of system-level test cases to prioritize test cases so that the most severe faults are detected early. In addition, the proposed approach calculates a weight factor for each requirement to achieve customer satisfaction and to improve the rate of severe fault detection. To validate the proposed approach we performed controlled experiments on industrial projects, which demonstrated the effectiveness of the approach in terms of the average percentage of faults detected.
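
    As a rough illustration of the idea above, the sketch below orders test cases with a simple genetic algorithm whose fitness rewards placing historically severe, highly weighted tests early. The history data, requirement weights, fitness function, and GA operators are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch only: GA-based ordering of test cases using hypothetical history data.
import random

# Hypothetical history: test case id -> (severity of faults it exposed,
# weight of the requirement it covers). Example values, not real data.
HISTORY = {
    "tc1": (5, 0.9), "tc2": (2, 0.4), "tc3": (8, 1.0),
    "tc4": (1, 0.2), "tc5": (6, 0.7), "tc6": (3, 0.5),
}

def fitness(order):
    """Reward orderings that place high-severity, high-weight tests early."""
    n = len(order)
    return sum((n - pos) * HISTORY[tc][0] * HISTORY[tc][1]
               for pos, tc in enumerate(order))

def order_crossover(p1, p2):
    """OX crossover: keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [tc for tc in p2 if tc not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    """Occasionally swap two positions to keep the population diverse."""
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def prioritize(generations=100, pop_size=20):
    tests = list(HISTORY)
    pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = [mutate(order_crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(prioritize())  # e.g. ['tc3', 'tc1', 'tc5', ...]
```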

    CORFOOS: Cost Reduction Framework for Object Oriented System

    There are many constraints in developing software for an organization, such as time and budget. Due to these constraints and the intricacies of advanced software development technology, it has become very challenging to complete such projects. To make these projects cost-effective, this paper presents a cost reduction framework (CORFOOS) that works at three levels. At the first level, the Intermediate Requirement Dependency Value (IRDV) of each requirement is determined by creating the intermediate requirements dependency graph (IRDG). At the second level, the requirements are categorized, and at the third level the testing parameters are determined by analyzing the requirements. To analyze the requirements, the dependency model, interaction model, language specification model and fault model are used.
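
    The sketch below illustrates one way a dependency value could be derived for each requirement from an intermediate requirements dependency graph, in the spirit of the IRDG/IRDV step. The example graph and the scoring rule (counting transitive dependents) are assumptions for illustration; the paper's actual IRDV computation may differ.

```python
# Sketch only: derive a dependency value per requirement from a small graph.

# Hypothetical IRDG: requirement -> requirements that depend on it.
IRDG = {
    "R1": ["R2", "R3"],
    "R2": ["R4"],
    "R3": ["R4", "R5"],
    "R4": [],
    "R5": [],
}

def dependency_value(req, graph):
    """Count the requirements reachable from req (its transitive dependents)."""
    seen, stack = set(), list(graph.get(req, []))
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(graph.get(r, []))
    return len(seen)

if __name__ == "__main__":
    for req in IRDG:
        print(req, dependency_value(req, IRDG))
    # R1 -> 4, R2 -> 1, R3 -> 2, R4 -> 0, R5 -> 0
```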

    Test case generation optimization using combination prediction model and prioritization identification for complex system retesting

    Nowadays, the retesting process has become crucial in assessing the functionality and correctness of a system in order to ensure high reliability. Although many techniques and approaches have been introduced by researchers, some issues still need to be addressed to ensure test case adequacy. To determine test case adequacy, it is crucial to first determine the test set size, in terms of the number of test cases, to prevent the system from failing to execute. It is also crucial to identify the requirement specification factors that would solve the problems of insufficiency and scenario redundancy. To overcome these drawbacks, this study proposes an approach for test case generation in the retesting process that combines two models, with the aim of revealing more severe faults and improving software quality. The first model determines the test case set size by constructing a predictive model based on failure rate using seed fault validation. This model was then extended to requirement prioritisation and used to schedule test cases according to the Prioritisation Factor Value of the requirement specifications. Test Point Analysis was used to evaluate test effort by measuring the level of estimation complexity and by considering the relationships among test cases, fault response time, and fault resolution time. The approach was evaluated on a complex system, a Plantation Management System, as a project case study; data from its Payroll and Labour Management module, applied in 138 estates, were collected. As a result, the test case generation approach was able to measure test effort with high accuracy based on the two combined models, and it achieved a complexity level within 90% confidence bounds of relative error. These results show that the approach can forecast test effort rank based on the complexity level of requirements, which can be extracted early in the testing phase.
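
    The following sketch shows one plausible way to size a test set from an observed failure rate, loosely following the idea of a failure-rate-based predictive model validated with seeded faults. The geometric-detection formula and the example numbers are assumptions, not the study's actual model.

```python
# Sketch only: estimate test set size from a per-test failure detection rate.
import math

def estimated_test_set_size(failure_rate, target_confidence=0.95):
    """Smallest n with 1 - (1 - failure_rate)**n >= target_confidence."""
    if not 0 < failure_rate < 1:
        raise ValueError("failure_rate must be in (0, 1)")
    return math.ceil(math.log(1 - target_confidence) / math.log(1 - failure_rate))

if __name__ == "__main__":
    # A seeded-fault validation run suggesting each test exposes a fault 10%
    # of the time (hypothetical number) would call for about 29 test cases.
    print(estimated_test_set_size(0.10, 0.95))  # 29
```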

    Hybrid and dynamic static criteria models for test case prioritization of web application regression testing

    In the software testing domain, different techniques and approaches are used to support the regression testing process effectively. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases so that maximum fault detection is achieved in a shorter time. However, web application testing faces problems with the time needed to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier within a shorter execution time. This research proposes an approach comprising two models: a Hybrid Static Criteria Model (HSCM) and a Dynamic Weighting Static Criteria Model (DWSCM). Each model applies three criteria: the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM uses a clustering technique to group test cases, and a hybridized technique prioritizes test cases based on the priorities assigned from the combination of the aforementioned criteria. A dynamic weighting scheme over the criteria is used to increase the fault detection rate. The findings revealed that the models improved the Average Percentage of Fault Detection (APFD), yielding the highest APFD of 98% for DWSCM and 87% for HSCM, which improved the effectiveness of the prioritization models. The findings confirm the ability of the proposed techniques to improve web application regression testing.
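
    The sketch below shows how the three static criteria named above (frequency of common HTTP requests, request-chain length, and request dependencies) might be normalized and combined with weights into a single priority score, roughly in the spirit of a weighted static-criteria model. The measurements, normalization, and weight values are assumptions for illustration.

```python
# Sketch only: weighted combination of three static criteria per test case.

# Hypothetical per-test-case measurements.
TESTS = {
    "t1": {"common_requests": 12, "chain_length": 4, "dependencies": 3},
    "t2": {"common_requests": 5,  "chain_length": 9, "dependencies": 1},
    "t3": {"common_requests": 8,  "chain_length": 2, "dependencies": 6},
}

# Assumed criterion weights; a dynamic scheme could adjust these per run.
WEIGHTS = {"common_requests": 0.5, "chain_length": 0.3, "dependencies": 0.2}

def normalized(criterion):
    """Scale one criterion to [0, 1] across all test cases."""
    values = [m[criterion] for m in TESTS.values()]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return {t: (m[criterion] - lo) / span for t, m in TESTS.items()}

def prioritize():
    norms = {c: normalized(c) for c in WEIGHTS}
    score = {t: sum(WEIGHTS[c] * norms[c][t] for c in WEIGHTS) for t in TESTS}
    return sorted(TESTS, key=score.get, reverse=True)

if __name__ == "__main__":
    print(prioritize())  # ['t1', 't3', 't2'] with the example data
```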

    Test case prioritization technique based on string distance metrics

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in software testing, with the goal of maximizing the early average percentage of fault detection (APFD). The process for each approach varies, and these approaches are not well documented within a single TCP approach. Based on current studies, achieving an approach with a high coverage effectiveness (CE) and APFD rate remains a challenge in TCP. The string-based approach is known to use a single string distance based metric to differentiate test cases, which can improve CE results. However, to differentiate the test cases precisely, the string distances require enhancement. Therefore, a TCP technique based on a string distance metric was developed to improve the CE and APFD rates. In this research, to differentiate test cases precisely and counter the string distance problem, an enhanced string distance based metric combined with a string weight based metric was introduced. The metric was then executed under the process designed for the string-based approach for a complete evaluation. Experimental results showed that the enhanced string metric achieved the highest APFD of 98.56% and the highest CE of 69.82% on the Siemens dataset cstcas, and the technique yielded the highest APFD of 76.38% in the Robotic Wheelchair System (RWS) case study. In conclusion, the enhanced TCP technique with the weight based metric prioritized test cases based on their occurrences, which helped to differentiate the test cases precisely and improved the overall APFD and CE scores.
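
    As a sketch of the general idea, the code below combines a classic string distance (Levenshtein) with a simple occurrence-based weight to score and order test-case strings. The weighting rule and the example suite are assumptions; this is not the exact enhanced metric proposed in the work above.

```python
# Sketch only: string-distance-based TCP with an occurrence-based weight.
from collections import Counter

def levenshtein(a, b):
    """Classic edit distance between two test-case strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def occurrence_weight(s, corpus_counts):
    """Favor strings whose characters are rare across the whole test suite."""
    total = sum(corpus_counts.values())
    return sum(1 - corpus_counts[ch] / total for ch in s) / max(len(s), 1)

def prioritize(test_strings):
    counts = Counter("".join(test_strings))
    def score(s):
        distance_sum = sum(levenshtein(s, t) for t in test_strings if t is not s)
        return distance_sum * occurrence_weight(s, counts)
    return sorted(test_strings, key=score, reverse=True)

if __name__ == "__main__":
    suite = ["login admin", "login guest", "upload file large", "logout"]
    print(prioritize(suite))
```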

    Dispersity-Based Test Case Prioritization

    Existing test case prioritization (TCP) techniques have limitations when applied to real-world projects, because they require certain information to be available before they can be applied. For example, the family of input-based TCP techniques is based on test case values or test script strings; other techniques use test coverage, test history, program structure, or requirements information. Existing techniques also cannot guarantee to always be more effective than random prioritization (RP), which has no preconditions. As a result, RP remains the most applicable and most fundamental TCP technique. In this thesis, we propose a new TCP technique and mainly aim to study the Effectiveness, Actual execution time for failure detection, Efficiency, and Applicability of the new approach.
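
    Since the abstract does not spell out the dispersity computation, the sketch below shows one plausible reading: greedily selecting the test case whose input is farthest (by maximum-minimum distance) from those already chosen, using a token-set Jaccard distance. Both the distance and the example inputs are assumptions and may differ from the thesis's definition.

```python
# Sketch only: farthest-first (max-min) ordering over test input strings.

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over the token sets of two test inputs."""
    sa, sb = set(a.split()), set(b.split())
    return 1 - len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dispersity_order(tests):
    remaining = list(tests)
    ordered = [remaining.pop(0)]  # seed with the first test case
    while remaining:
        # Pick the candidate with the largest minimum distance to the chosen set.
        best = max(remaining,
                   key=lambda t: min(jaccard_distance(t, s) for s in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered

if __name__ == "__main__":
    suite = ["add item to cart", "remove item from cart",
             "search item by name", "checkout cart with coupon"]
    print(dispersity_order(suite))
```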