
    Application of ensemble techniques in predicting object-oriented software maintainability

    While prior object-oriented software maintainability literature acknowledges the role of machine learning techniques as valuable predictors of potential change, the most suitable technique that achieves consistently high accuracy remains undetermined. With the objective of obtaining more consistent results, an ensemble technique is investigated to advance the performance of the individual models and increase their accuracy in predicting the software maintainability of object-oriented systems. This paper describes a research plan for predicting object-oriented software maintainability using ensemble techniques. First, we present a brief overview of the main research background and its components. Second, we explain the research methodology. Third, we present the expected results. Finally, we conclude with a summary of the current status.
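
    As a rough illustration of the idea described above, the sketch below stacks a few individual regressors into one ensemble and scores it with cross-validation; the C&K metric names, the synthetic data, and the change-count target are placeholder assumptions, not the paper's data or model.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import StackingRegressor, RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    # Placeholder data standing in for C&K metrics and a per-class change count.
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(150, 6)),
                     columns=["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"])
    y = rng.poisson(lam=5, size=150)          # proxy for maintenance changes

    # Individual learners whose predictions the ensemble combines.
    base_learners = [
        ("lr", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ]
    ensemble = StackingRegressor(estimators=base_learners,
                                 final_estimator=LinearRegression())

    # Compare the ensemble against any single learner under the same protocol.
    mae = -cross_val_score(ensemble, X, y, cv=10,
                           scoring="neg_mean_absolute_error").mean()
    print("Ensemble MAE:", mae)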

    Stability prediction of the software requirements specification

    Complex decision-making is a prominent aspect of Requirements Engineering. This work presents Requisites, a Bayesian network that predicts whether a requirements specification document has to be revised. We show how to validate Requisites by means of metrics obtained from a large, complex software project. In addition, this Bayesian network has been integrated into a software tool by defining a communication interface inside a multilayer architecture, adding this new decision-making functionality. It provides requirements engineers with a way of exploring the software requirements specification by combining requirement metrics with the probability values estimated by the Bayesian network.
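
    For illustration only, the sketch below builds a toy Bayesian network with the pgmpy library and queries the probability that a specification needs revision given two requirement metrics; the structure, variable names and probability tables are invented assumptions and are not the actual Requisites network.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Two hypothetical requirement metrics feeding a "needs revision" node.
    model = BayesianNetwork([("Unclear", "Revise"), ("Churn", "Revise")])

    cpd_unclear = TabularCPD("Unclear", 2, [[0.7], [0.3]])   # few / many unclear requirements
    cpd_churn = TabularCPD("Churn", 2, [[0.6], [0.4]])       # low / high requirement churn
    cpd_revise = TabularCPD(
        "Revise", 2,
        [[0.9, 0.6, 0.5, 0.1],    # P(no revision | Unclear, Churn)
         [0.1, 0.4, 0.5, 0.9]],   # P(revision   | Unclear, Churn)
        evidence=["Unclear", "Churn"], evidence_card=[2, 2],
    )
    model.add_cpds(cpd_unclear, cpd_churn, cpd_revise)

    # Query the revision probability given observed metric values.
    inference = VariableElimination(model)
    print(inference.query(["Revise"], evidence={"Unclear": 1, "Churn": 1}))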

    Discrimination Analysis for Predicting Defect-Prone Software Modules

    Software defect prediction studies usually build models without analyzing the data used in the procedure. As a result, the same approach performs differently on different data sets. In this paper, we introduce discrimination analysis as a method for gaining insight into the inherent properties of the software data. Based on the analysis, we find that the data sets used in this field are nonlinearly separable and class-imbalanced. Unlike prior works, we exploit the kernel method to nonlinearly map the data into a high-dimensional feature space. To combat these two problems, we propose an algorithm based on kernel discrimination analysis, called KDC, to build more effective prediction models. Experimental results on data sets from different organizations indicate that KDC is more accurate in terms of F-measure than state-of-the-art methods. We are optimistic that our discrimination analysis method can guide more studies on data structure, which may derive useful knowledge from data science for building more accurate prediction models.
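
    As a stand-in illustration (not the paper's KDC algorithm), the sketch below pairs an RBF-kernel classifier, which maps the data into a high-dimensional feature space, with balanced class weights to counter class imbalance, and scores it by F-measure on placeholder data.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Placeholder module metrics with roughly 15% defective modules (imbalanced).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))
    y = (rng.random(300) < 0.15).astype(int)

    # RBF kernel handles the nonlinear separability; 'balanced' weights
    # counter the skew between defective and non-defective classes.
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", class_weight="balanced"))

    f1 = cross_val_score(model, X, y, cv=5, scoring="f1")
    print("Mean F-measure:", f1.mean())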

    The Effect of Applying Design of Experiments Techniques to Software Performance Testing

    Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable software testing performance improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, where findings were measured in (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis which included all measurements for all findings. The data were analyzed by first computing standardized difference in means effect sizes, then through the Z test, the Q test, and the t test for statistical comparisons. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology. Software organizations can use this methodology to differentiate their software testing process, to create higher quality products, and to benefit the consumer and society in general.
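
    The sketch below illustrates the analysis steps named above on fabricated per-study numbers: a standardized difference in means effect size (Hedges' g), a fixed-effect summary tested with a Z statistic, and Cochran's Q test for heterogeneity; it is not the study's data or exact procedure.

    import numpy as np
    from scipy import stats

    # (mean_DOE, sd_DOE, n_DOE, mean_control, sd_control, n_control) per study;
    # values are fabricated placeholders.
    studies = np.array([
        (42.0, 8.0, 20, 35.0, 9.0, 20),
        (30.0, 6.0, 15, 26.0, 7.0, 15),
        (55.0, 10.0, 25, 47.0, 11.0, 25),
    ])

    m1, s1, n1, m2, s2, n2 = studies.T
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                                 # Cohen's d
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))              # Hedges' small-sample correction
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

    w = 1 / var_g                                      # fixed-effect weights
    g_bar = np.sum(w * g) / np.sum(w)                  # summary effect size
    z = g_bar / np.sqrt(1 / np.sum(w))                 # Z test on the summary effect
    q = np.sum(w * (g - g_bar) ** 2)                   # Cochran's Q (heterogeneity)

    print("Summary g:", g_bar, "p:", 2 * stats.norm.sf(abs(z)))
    print("Q:", q, "p:", stats.chi2.sf(q, df=len(g) - 1))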

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality ensures that the applications developed are failure-free. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since it is a mechanism that correctly predicts the defect proneness of modules and classifies them, which saves resources, time and developers’ efforts. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature was reviewed, and it revealed that process metrics are better predictors of defects in version control systems, being based on historic source code over time. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions from the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted using open source software (OSS) of software product lines (SPL); hence, process metrics were chosen. Data sets that are used in defect prediction may contain non-significant and redundant attributes that may affect the accuracy of machine-learning algorithms. In order to improve the prediction accuracy of classification models, features that are significant in the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data. Feature selection is a pre-processing step that helps to reduce the dimensionality of data in machine learning. Feature selection techniques include information-theoretic methods that are based on the entropy concept. This study examined the efficiency of these feature selection techniques, and it was realised that software defect prediction using significant attributes improves the prediction accuracy. A novel MICFastCR model was developed that uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation Based Filter (FCBF) to eliminate redundant attributes. Machine learning algorithms were then run to predict software defects. The MICFastCR model achieved the highest prediction accuracy as reported by various performance measures.
    School of Computing, Ph. D. (Computer Science)
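
    As a simplified stand-in for the MICFastCR model, the sketch below ranks features by MIC (via the third-party minepy package) and then drops any feature that is strongly correlated with an already-selected one, a rough substitute for the FCBF redundancy step; the data, correlation threshold and classifier are placeholder assumptions.

    import numpy as np
    from minepy import MINE
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder process metrics and defect labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 15))
    y = (rng.random(200) < 0.3).astype(int)

    # Relevance: MIC between each feature and the defect label.
    mine = MINE(alpha=0.6, c=15)
    mic_scores = []
    for j in range(X.shape[1]):
        mine.compute_score(X[:, j], y)
        mic_scores.append(mine.mic())

    # Redundancy: keep the most relevant features, skipping any that is
    # highly correlated with a feature already kept (simplified FCBF idea).
    keep = []
    for j in np.argsort(mic_scores)[::-1]:
        corr = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in keep]
        if not corr or max(corr) < 0.8:
            keep.append(j)

    f1 = cross_val_score(RandomForestClassifier(random_state=0),
                         X[:, keep], y, cv=5, scoring="f1")
    print("Selected features:", keep, "mean F-measure:", f1.mean())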