
    Test case generation optimization using combination prediction model and prioritization identification for complex system retesting

    Nowadays, the retesting process has become crucial for assessing the functionality and correctness of a system and ensuring high reliability. Although researchers have introduced many techniques and approaches, several issues still need to be addressed to ensure test case adequacy. To determine test case adequacy, it is crucial to first determine the test set size, in terms of the number of test cases, to prevent the system from failing to execute. It is also crucial to identify the requirement specification factors that address insufficiency and scenario redundancy. To overcome these drawbacks, this study proposed an approach for test case generation in the retesting process that combines two models to reveal more severe faults and improve software quality. The first model determines the test case set size by constructing a predictive model of failure rate using seed fault validation. This model was then extended to requirement prioritisation and used to schedule test cases according to the Prioritisation Factor Value of requirement specifications. Test Point Analysis was used to evaluate test effort by measuring the level of estimation complexity and by considering the relationships among test cases, fault response time, and fault resolution time. The approach was evaluated on a complex system, the Plantation Management System, as a project case study; data from the Payroll and Labour Management module, deployed in 138 estates, were collected for this study. As a result, the test case generation approach based on the two combined models was able to estimate test effort with high accuracy and achieved a complexity level within 90% confidence bounds of relative error. This result shows that the approach can forecast test effort rank based on the complexity level of requirements, which can be extracted early in the testing phase.
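    The abstract above combines a requirement-level Prioritisation Factor Value with a relative-error check on effort estimates. The following is a minimal Python sketch of those two ideas only; the field names, weights, and data are illustrative assumptions and do not reproduce the study's actual model.

    ```python
    # Sketch: rank requirements by an assumed Prioritisation Factor Value (PFV)
    # and check an effort estimate against an actual value via relative error.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        name: str
        failure_rate: float           # e.g. estimated via seed-fault validation (assumed input)
        fault_response_time: float    # normalised to [0, 1] for this sketch
        fault_resolution_time: float  # normalised to [0, 1] for this sketch

    def prioritisation_factor(req: Requirement,
                              w_fail: float = 0.5,
                              w_resp: float = 0.25,
                              w_resol: float = 0.25) -> float:
        """Weighted combination of the factors named in the abstract (weights are assumptions)."""
        return (w_fail * req.failure_rate
                + w_resp * req.fault_response_time
                + w_resol * req.fault_resolution_time)

    def relative_error(estimated_effort: float, actual_effort: float) -> float:
        """Relative error used to judge how accurate an effort estimate is."""
        return abs(estimated_effort - actual_effort) / actual_effort

    reqs = [
        Requirement("payroll_calc", failure_rate=0.30, fault_response_time=0.6, fault_resolution_time=0.8),
        Requirement("labour_roster", failure_rate=0.10, fault_response_time=0.3, fault_resolution_time=0.4),
    ]
    ranked = sorted(reqs, key=prioritisation_factor, reverse=True)
    print([r.name for r in ranked])
    print(f"relative error: {relative_error(42.0, 40.0):.1%}")
    ```

    The requirement names and the three weights are hypothetical; the point is only that higher-factor requirements are scheduled first and estimation accuracy is reported as a relative error.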

    Hybrid and dynamic static criteria models for test case prioritization of web application regression testing

    In the software testing domain, different techniques and approaches are used to support the regression testing process effectively. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases so that maximum fault detection can be achieved in a shorter time. However, the problems for web testing are the time needed to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier at a shorter execution time. This research proposed an approach comprising two models: a Hybrid Static Criteria Model (HSCM) and a Dynamic Weighting Static Criteria Model (DWSCM). Each model applies three criteria: the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM uses a clustering technique to group test cases, and a hybridized technique prioritizes test cases based on the priorities assigned from the combination of the aforementioned criteria. A dynamic weighting scheme over these criteria is used to increase the fault detection rate. The findings, measured by the Average Percentage of Faults Detected (APFD), show that DWSCM achieved the highest APFD of 98% and HSCM achieved 87%, improving the effectiveness of the prioritization models. The findings confirm the ability of the proposed techniques to improve web application regression testing.
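    Two building blocks in this abstract lend themselves to a short sketch: ordering test cases by a weighted sum of the three static criteria, and scoring an ordering with APFD. The weights and test data below are assumptions for illustration; the APFD formula itself is the standard one from the prioritization literature, not a detail specific to this paper.

    ```python
    # Sketch: weighted-criteria prioritization of web test cases plus APFD scoring.
    def weighted_priority(criteria: dict, weights: dict) -> float:
        """Combine normalized criterion scores with (assumed) weights."""
        return sum(weights[c] * criteria[c] for c in weights)

    def apfd(ordering: list, faults: dict, total_tests: int) -> float:
        """Standard APFD: 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
        positions = []
        for detecting_tests in faults.values():
            # 1-based position of the first test in the ordering that detects this fault
            positions.append(min(ordering.index(t) + 1 for t in detecting_tests))
        n, m = total_tests, len(faults)
        return 1 - sum(positions) / (n * m) + 1 / (2 * n)

    # Criterion names mirror the abstract; scores and weights are hypothetical.
    weights = {"common_requests": 0.4, "chain_length": 0.3, "dependencies": 0.3}
    tests = {
        "t1": {"common_requests": 0.9, "chain_length": 0.2, "dependencies": 0.5},
        "t2": {"common_requests": 0.3, "chain_length": 0.8, "dependencies": 0.1},
        "t3": {"common_requests": 0.6, "chain_length": 0.6, "dependencies": 0.9},
    }
    ordering = sorted(tests, key=lambda t: weighted_priority(tests[t], weights), reverse=True)
    faults = {"f1": ["t3"], "f2": ["t1", "t2"]}  # which tests expose which faults (assumed)
    print(ordering, f"APFD = {apfd(ordering, faults, total_tests=len(tests)):.2f}")
    ```

    A dynamic weighting scheme, as in DWSCM, would adjust the `weights` dictionary between sessions rather than keeping it fixed as this sketch does.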

    An improved method for test case prioritization by incorporating historical test case data

    Test case prioritization reorders test cases from a previous version of a software system for the current release to optimize regression testing. We previously introduced a technique for test case prioritization using historical test case performance data. That technique was based on a prioritization equation that directly computes the priority of each test case from its historical information using constant coefficients, and it was compared only with a random ordering approach. In this paper, we present an enhancement of that technique in two ways. First, we propose a new prioritization equation with variable coefficients obtained from the available historical performance data, which act as feedback from previous test sessions. Second, a family of comprehensive empirical studies has been conducted to evaluate the performance of the technique. We compared the proposed technique with our previous technique and with the technique proposed by Kim and Porter. The experimental results demonstrate the effectiveness of the proposed technique in accelerating the rate of fault detection in history-based test case prioritization.
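    The core idea above is a per-test-case priority updated from execution history and the previous session's priority. The sketch below illustrates one plausible history-based update rule; the specific signals, coefficients, and update form are assumptions for illustration and are not the paper's actual equation.

    ```python
    # Sketch: history-based test case prioritization where each session's priority
    # blends fresh history signals with the previous priority (feedback).
    def next_priority(prev_priority: float,
                      detected_fault_last_run: bool,
                      runs_since_last_execution: int,
                      alpha: float = 0.6,
                      beta: float = 0.1) -> float:
        """Smooth the previous priority with new history signals.
        alpha and beta are coefficients that could be tuned from past sessions (assumed values)."""
        history_signal = (1.0 if detected_fault_last_run else 0.0) + beta * runs_since_last_execution
        return alpha * history_signal + (1 - alpha) * prev_priority

    # Hypothetical history for three test cases.
    history = {
        "tc_login":  {"prev": 0.50, "detected": True,  "idle": 0},
        "tc_report": {"prev": 0.20, "detected": False, "idle": 3},
        "tc_export": {"prev": 0.70, "detected": False, "idle": 1},
    }
    priorities = {name: next_priority(h["prev"], h["detected"], h["idle"])
                  for name, h in history.items()}
    ordering = sorted(priorities, key=priorities.get, reverse=True)
    print(ordering)  # test cases with recent fault detections or long idle gaps move up
    ```

    Making `alpha` and `beta` functions of the observed history, rather than constants, is the kind of variable-coefficient feedback the abstract describes.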