
    Defect prediction model for testing phase

    Predicting defects in the testing phase has become an important part of improvement initiatives for the software production process. As the group responsible for ensuring successful implementation of the verification and validation process area, the test engineers in the Test Centre of Excellence (Test COE) department are required to discover as many software defects as possible and contain them within the testing phase. This research aims to achieve zero known post-release defects in the software delivered to end users. To reach that target, the research effort focuses on establishing a defect prediction model for the testing phase using the Six Sigma methodology. It identifies the customers' needs and requirements for the prediction model, as well as how the model can benefit them, and outlines the factors associated with defect discovery in the testing phase. Analyses of the repeatability and capability of test engineers in finding defects are elaborated. The research also describes the process of identifying the types of data to be collected and the techniques for obtaining them, and then relates the customer needs to the technical requirements. Finally, the proposed defect prediction model for the testing phase is demonstrated via regression analysis, taking as predictors the faults found in phases prior to testing and the code size of the software. The achievement of the whole research effort is described at the end of the project, together with the challenges faced and recommendations for future work.
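
    As a rough illustration of the kind of model described above, the sketch below fits an ordinary least-squares regression on hypothetical data, using faults found before testing and code size (KLOC) as predictors of testing-phase defects. The feature set, sample values, and the use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a testing-phase defect prediction model: defect counts are
# regressed on faults found in earlier phases and on code size.
# Feature names and sample values are hypothetical, not taken from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: one row per delivered module/release.
# Columns: [faults found in requirements/design reviews, faults found in code review, KLOC]
X = np.array([
    [12,  8,  4.2],
    [ 5,  3,  1.1],
    [20, 15,  7.8],
    [ 9,  6,  3.0],
    [14, 10,  5.5],
])
y = np.array([18, 4, 31, 11, 22])   # defects actually found during the testing phase

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Predict testing-phase defects for an upcoming release.
upcoming = np.array([[10, 7, 4.0]])
print("predicted testing-phase defects:", model.predict(upcoming)[0])
```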

    Call Graph Based Metrics to Evaluate Software Design Quality

    Software defect prediction was introduced to support development and maintenance activities, such as improving software quality by finding errors, or patterns of errors, early in the software development process. It also facilitates maintenance by helping to predict the effort, time and, more importantly, the cost of software maintenance and evolution activities. In this research, a software call graph model is used to evaluate its ability to predict quality-related attributes of developed software products. As a case study, the call graph model is generated for several applications in order to represent and reflect the degree of their complexity, especially in terms of understandability, testability and maintenance effort. The call graph model is then used to collect software product attributes and to formulate several call graph based metrics. The extracted metrics are investigated for correlation with bugs collected from customer bug reports for the evaluated applications. These bugs are compiled into dataset files to be used as input to a data miner for classification, prediction and association analysis. Finally, the results of the analysis are evaluated in terms of the correlation between call graph based metrics and software products' bugs. In this research, we assert that call graph based metrics are appropriate for detecting and predicting software defects, so that the maintenance and testing activities after delivery become easier to estimate and assess.
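
    As an illustration of how call graph based metrics might be extracted, the sketch below builds a small call graph as a directed graph and derives simple per-function measures (fan-in, fan-out, reachable callees). The specific metrics, the call relations, and the use of networkx are assumptions for demonstration; the paper's own metric definitions are not reproduced here.

```python
# Illustrative sketch (not the paper's exact metrics): build a static call graph
# as a directed graph and derive simple per-function metrics such as fan-in,
# fan-out, and the number of reachable callees, which could then be correlated
# with bug counts per function or module.
import networkx as nx

# Hypothetical call relations: an edge (caller, callee).
calls = [
    ("main", "parse_args"), ("main", "run"),
    ("run", "load_config"), ("run", "process"),
    ("process", "validate"), ("process", "save"),
    ("validate", "log_error"), ("save", "log_error"),
]

g = nx.DiGraph(calls)

for fn in g.nodes:
    metrics = {
        "fan_in": g.in_degree(fn),               # how many functions call fn
        "fan_out": g.out_degree(fn),              # how many functions fn calls
        "reachable": len(nx.descendants(g, fn)),  # transitive callees of fn
    }
    print(fn, metrics)
```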

    A Bibliometric Survey on the Reliable Software Delivery Using Predictive Analysis

    Delivering a reliable software product is a fairly complex process that requires proper coordination among the various teams involved in planning, execution, and testing. Much of the development time and of the software budget is spent finding and fixing bugs. Rework and side-effect costs caused by bugs inherent in modified code are mostly not visible in the planned estimates, yet they impact the software delivery timeline and increase the cost. Advances in artificial intelligence make it possible to predict probable defects through classification based on software code changes, helping the software development team make rational decisions. Optimizing software cost and improving software quality are top priorities for the industry to remain profitable in a competitive market. Hence, there is a strong need to improve software delivery quality by minimizing defects and keeping predicted defects under reasonable control. This paper presents a bibliometric study of reliable software delivery using predictive analysis, based on 450 documents selected from the Scopus database with keywords such as software defect prediction, machine learning, and artificial intelligence. The study covers the period from 2010 to 2021. The survey shows that software defect prediction has received considerable attention from researchers, and that there are great possibilities for predicting and improving overall software product quality using artificial intelligence techniques.

    Class Imbalance Reduction and Centroid based Relevant Project Selection for Cross Project Defect Prediction

    Cross-Project Defect Prediction (CPDP) is the process of predicting defects in a target project using information from other projects; it can help developers prioritize their testing efforts and find flaws. Transfer Learning (TL) has frequently been used in CPDP to improve prediction performance by reducing the disparity in data distribution between the source and target projects. Software Defect Prediction (SDP) is a common research topic in software engineering that plays a critical role in software quality assurance. To address the cross-project class imbalance problem, a centroid-based PF-SMOTE for imbalanced data is used. In this paper, we apply centroid-based PF-SMOTE to balance the datasets and centroid-based relevant data selection to choose source projects for cross-project defect prediction. These methods compute the mean of all attributes in each dataset and compare datasets by the difference between their means. For experimentation, the open-source software defect datasets AEEM, Re-Link, and NASA are considered.
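
    The following sketch illustrates centroid-based relevant source selection under the stated idea of comparing dataset means: each project's centroid is the mean of its attribute columns, and candidate source projects are ranked by centroid distance to the target. The distance measure, dataset names, and random toy data are assumptions for illustration only.

```python
# Hedged sketch of centroid-based relevant source selection for CPDP: a project's
# "centroid" is the mean of its feature columns, and source projects are ranked
# by how close their centroid lies to the target project's centroid.
import numpy as np

def centroid(features: np.ndarray) -> np.ndarray:
    """Mean of every attribute (column) in a defect dataset."""
    return features.mean(axis=0)

def rank_sources(target: np.ndarray, sources: dict) -> list:
    """Rank candidate source projects by centroid distance to the target."""
    t = centroid(target)
    dists = {name: float(np.linalg.norm(centroid(X) - t)) for name, X in sources.items()}
    return sorted(dists.items(), key=lambda kv: kv[1])

# Toy example with random metric matrices (rows = modules, columns = metrics).
rng = np.random.default_rng(0)
target_project = rng.normal(0.0, 1.0, size=(50, 5))
candidates = {
    "source_A": rng.normal(0.1, 1.0, size=(80, 5)),
    "source_B": rng.normal(2.0, 1.0, size=(60, 5)),
    "source_C": rng.normal(0.5, 1.0, size=(70, 5)),
}
print(rank_sources(target_project, candidates))  # closest source first
```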

    Adopting genetic algorithm to enhance state-sensitivity partitioning

    Software testing requires executing the software under test with the intention of finding as many defects as possible. Test case generation remains the most dominant research area in software testing, since the technique used to generate test cases can lead to an effective and efficient testing process. Many techniques have been proposed to generate test cases; one of them is the State Sensitivity Partitioning (SSP) technique. The objective of SSP is to avoid exhaustive testing of the entire data state of a module. In SSP, test cases are represented as sequences of events. Even with recognized finite limits on the size of the queue, there is an infinite set of such sequences, with no upper bound on their length, so a lengthy test sequence may contain redundant data states. Redundant data states increase the size of the test suite and make the testing process ineffective. Therefore, the test cases generated by SSP need to be optimized to enhance their effectiveness in detecting faults. Genetic algorithms (GA) have been identified as the most promising of several optimization techniques, so a GA is investigated for integration with the existing SSP. This paper addresses how to represent the states produced by SSP sequences of events so that they can be accepted by a GA: system IDs are used to represent combinations of state variables uniquely and to generate the GA's initial population.
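
    A minimal sketch of how a GA might optimize SSP event sequences is given below, assuming sequences are encoded as lists of integer system IDs and that fitness rewards coverage of distinct states while penalizing redundancy. The encoding, fitness function, and GA parameters are illustrative assumptions rather than the paper's actual design.

```python
# Hedged sketch of a genetic algorithm over SSP-style event sequences. The
# encoding (integer "system IDs" for state/event combinations), the fitness
# function, and all parameters are illustrative assumptions.
import random

EVENT_IDS = list(range(10))   # hypothetical system IDs for state/event combinations
SEQ_LEN = 12
POP_SIZE = 30
GENERATIONS = 50

def random_sequence():
    return [random.choice(EVENT_IDS) for _ in range(SEQ_LEN)]

def fitness(seq):
    # Favor sequences that cover many distinct states (fewer redundant data states).
    return len(set(seq)) / len(seq)

def crossover(a, b):
    cut = random.randint(1, SEQ_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.1):
    return [random.choice(EVENT_IDS) if random.random() < rate else e for e in seq]

population = [random_sequence() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]            # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best sequence:", best, "fitness:", round(fitness(best), 3))
```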

    Rigorously assessing software reliability and safety

    This paper summarises the state of the art in the assessment of software reliability and safety ("dependability") and describes some promising developments. A sound demonstration of very high dependability is still impossible before operation of the software, but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.
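
    One standard technique in this area, shown here as a hedged illustration rather than as the paper's own method, is the frequentist bound on per-demand failure probability after failure-free operational testing: if all n independent demands succeed, the (1 - alpha) upper confidence bound p_U satisfies (1 - p_U)^n = alpha.

```python
# Hedged illustration of one standard reliability-assessment technique (not
# necessarily the one the paper develops): an upper confidence bound on the
# per-demand failure probability after n independent, failure-free test demands.

def failure_prob_upper_bound(n_failure_free_demands: int, alpha: float = 0.05) -> float:
    """Upper (1 - alpha) confidence bound on the probability of failure per demand."""
    return 1.0 - alpha ** (1.0 / n_failure_free_demands)

# Example: roughly 3,000 failure-free demands support a claim of p < 1e-3 at 95%
# confidence, illustrating why very high dependability claims need very long testing.
for n in (100, 1_000, 3_000, 100_000):
    print(n, round(failure_prob_upper_bound(n), 6))
```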