7 research outputs found

    An algorithmic-based software change effort prediction model using change impact analysis for software development

    Get PDF
    Software changes are inevitable due to the dynamic nature of software development projects. Some projects practice their own customised methodology, but most adopt one of two kinds: Traditional or Agile. The Traditional methodology emphasizes detailed planning, comprehensive documentation, and extensive design, which results in a low rate of change acceptance. In contrast, the Agile methodology gives high priority to accepting changes at any point throughout the development process. A primary factor with a direct impact on the effectiveness of the change acceptance decision is the accuracy of the change effort prediction. Two kinds of models are widely used to estimate change effort: algorithmic and non-algorithmic. The algorithmic model is known for its formal, structured way of estimation and is best suited to the Traditional methodology, while the non-algorithmic model is widely adopted in Agile projects because it is simpler and demands less effort-prediction work. The main issue is that none of the existing change effort prediction models has been shown to suit both Traditional and Agile methodologies. Additionally, there is as yet no clear evidence of the most accurate change effort prediction model for the software development phase. One way to overcome these challenges is to include change impact analysis in the estimation process. The aim of this research is to overcome the challenges of change effort prediction for the software development phase: inconsistent states of software artifacts, repeatability using an algorithmic approach, and applicability to both Traditional and Agile methodologies. This research proposes an algorithmic change effort prediction model that uses a change impact analysis method to improve the accuracy of the effort estimation. The proposed model uses SDP-CIAF (Software Development Phase-Change Impact Analysis Framework), a change impact analysis method selected for the software development phase. A software prototype was also developed to support the implementation of the model. The proposed model was evaluated through extensive experimental validation using case scenarios from six real software projects covering both Traditional and Agile methodologies. A comparative study was also conducted for further validation and verification. The analysis results showed an accuracy improvement of 13.44% in average mean difference for change effort prediction over the currently selected change effort prediction model. The evaluation results also confirmed applicability to both Traditional and Agile methodologies.
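    The abstract does not reproduce the model's equations, but the general idea of coupling change impact analysis with an algorithmic estimate can be illustrated. A minimal sketch, assuming a hypothetical impact-weighted adjustment of a baseline estimate; the function, weights, and formula below are illustrative only, not the SDP-CIAF-based model itself:

```python
# Illustrative sketch: scaling a baseline change-effort estimate by the
# size of the impact set found via change impact analysis (CIA).
# All names, weights, and the formula are hypothetical assumptions; the
# actual model is defined in the thesis and not reproduced here.

def predict_change_effort(baseline_effort_h: float,
                          impacted_artifacts: int,
                          total_artifacts: int,
                          impact_weight: float = 0.5) -> float:
    """Adjust a baseline effort estimate (person-hours) by the fraction
    of software artifacts the impact analysis marked as affected."""
    impact_ratio = impacted_artifacts / total_artifacts
    return baseline_effort_h * (1 + impact_weight * impact_ratio)

# Example: a 40-hour change whose impact set covers 12 of 80 artifacts.
print(predict_change_effort(40, 12, 80))  # 43.0 person-hours
```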

    Experience: Quality benchmarking of datasets used in software effort estimation

    Get PDF
    Data is a cornerstone of empirical software engineering (ESE) research and practice. Data underpin numerous process and project management activities, including the estimation of development effort and the prediction of the likely location and severity of defects in code. Serious questions have been raised, however, over the quality of the data used in ESE. Data quality problems caused by noise, outliers, and incompleteness have been noted as being especially prevalent. Other quality issues, although also potentially important, have received less attention. In this study, we assess the quality of 13 datasets that have been used extensively in research on software effort estimation. The quality issues considered in this article draw on a taxonomy that we published previously, based on a systematic mapping of data quality issues in ESE. Our contributions are as follows: (1) an evaluation of the “fitness for purpose” of these commonly used datasets and (2) an assessment of the utility of the taxonomy in terms of dataset benchmarking. We also propose a template that could be used both to improve the ESE data collection/submission process and to evaluate other such datasets, contributing to enhanced awareness of data quality issues in the ESE community and, in time, to the availability and use of higher-quality datasets.
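    Two of the quality issues the study names, incompleteness and outliers, are straightforward to screen for mechanically. A minimal sketch, assuming a generic effort dataset with an `effort` column; the column name and the IQR threshold are assumptions for illustration, not the paper's taxonomy:

```python
import pandas as pd

# Screen an effort dataset for missing cells (incompleteness) and for
# extreme effort values (outliers, 1.5*IQR rule). Illustrative only.

def screen_quality(df: pd.DataFrame, effort_col: str = "effort") -> dict:
    missing = int(df.isna().sum().sum())            # incompleteness
    q1, q3 = df[effort_col].quantile([0.25, 0.75])
    iqr = q3 - q1
    outliers = df[(df[effort_col] < q1 - 1.5 * iqr) |
                  (df[effort_col] > q3 + 1.5 * iqr)]
    return {"missing_cells": missing, "outlier_rows": len(outliers)}

df = pd.DataFrame({"effort": [120, 95, 110, 4000, None, 130]})
print(screen_quality(df))  # {'missing_cells': 1, 'outlier_rows': 1}
```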

    A neuro-fuzzy model with SEER-SEM for software effort estimation

    Get PDF
    Software effort estimation is a critical part of software engineering. Although many techniques and algorithmic models have been developed and implemented by practitioners, accurate software effort prediction is still a challenging endeavor. To address this issue, a novel soft computing framework was developed by previous researchers. Our study utilizes this framework to describe an approach combining the neuro-fuzzy technique with the System Evaluation and Estimation of Resource Software Estimation Model (SEER-SEM) effort estimation algorithm. By introducing the neuro-fuzzy technique, the proposed model gains positive characteristics such as learning ability, decreased sensitivity, effective generalization, and knowledge integration. Moreover, our study assesses the performance of the proposed model by designing and conducting evaluations with published project and industrial data. Compared with the SEER-SEM effort estimation algorithm alone, the proposed model demonstrates improved estimation accuracy, especially in reducing large Magnitude of Relative Error (MRE) values. Furthermore, the results also indicate that the general neuro-fuzzy framework can work with various algorithmic models to improve the performance of software effort estimation.
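    The Magnitude of Relative Error referenced here has a standard definition in the effort estimation literature, with its mean over n projects (MMRE) as the usual summary statistic:

```latex
\mathrm{MRE}_i = \frac{\lvert E_i^{\text{actual}} - E_i^{\text{estimated}} \rvert}{E_i^{\text{actual}}},
\qquad
\mathrm{MMRE} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{MRE}_i
```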

    Software development and correction estimation in the automotive domain

    Get PDF
    Over the past decades, software has spread to most areas of our lives. Complexity has increased with steadily rising customer demands while, at the same time, the high quality of the products has had to be maintained. The data for the analyses and many of the examples are taken from the automotive software development domain, a safety-critical area where products are developed under specific quality requirements. These quality requirements have to be met by many processes and by satisfying several standards within stipulated deadlines during the development lifecycle. The complexity of the software and the safety aspect strongly influence the product defect ratio. Many requirements are added and adjusted during the development lifecycle, leading to continuous changes in the software and increased complexity. All these changes need to be analyzed and tested to ensure the quality of the product. Predicting software defects and changes is a significant part of software engineering. Industrial software development has to achieve its targets within several boundaries. One of the most important boundaries for an industrial project is the budget, where changes to any project parameter can easily have negative effects on the planned budget. Such changes fall into two classes: changes pushed by the customer as new or modified requirements, and correction changes arising from improvements of the system and from identified bugs and their fixes. This classification is important for controlling the project budget. The effort for realizing new customer changes can be estimated and added to the budget. Correction changes, however, also cause large efforts that can drive the project budget negative, which is a major challenge both for project management and for the automated calculation of effort estimates over the complete development lifecycle. This thesis offers a new model that improves effort estimation from multiple perspectives. The model also integrates follow-up defects in later process phases; the defect cost flow is thus part of the model and enables the management of defects and follow-up defects that may spread throughout the development phases. The newly developed model was successfully evaluated in the automotive domain, and the overall accuracy of the effort estimations improved by 80%.
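    The two change classes described above translate directly into a budget decomposition. A minimal sketch with illustrative symbols, not the thesis's actual model:

```latex
E_{\text{total}} = E_{\text{new}} + E_{\text{corr}},
\qquad
\text{overrun} \iff E_{\text{total}} > B_{\text{planned}}
```

    Here \(E_{\text{new}}\) is the effort for new customer requirements, estimable up front and added to the budget, while \(E_{\text{corr}}\), the effort for corrections and follow-up defects, must be predicted over the whole lifecycle, which is what makes it the harder estimation target.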

    Effort and cost estimation for software project development using artificial neural networks based on Taguchi's orthogonal vector plans

    Get PDF
    The modern software industry requires fast, high-quality, and accurate forecasting of effort and costs before the actual effort is invested in realizing the software product. Such requirements are a challenge for any software company, which must be ready to meet the expectations of the software customer. The main factor in the successful development of software projects and in reducing the risk of errors is an adequate estimate of the effort and costs invested during implementation. This doctoral dissertation analyzes existing approaches and models, which have so far not been sufficiently precise and efficient, resulting in only about 30% of software solutions being implemented successfully. The main goal is to present three new, improved models based on an efficient artificial intelligence tool: artificial neural networks (ANN). All three improved models use different ANN architectures constructed on the basis of Taguchi's orthogonal vector plans. The aim is to optimize the improved models so as to avoid repeated experiments and long training times. Applying a clustering method to several different sets of real projects further mitigates their heterogeneous structure. In addition, the input values of the projects are homogenized by fuzzification, which achieves even greater reliability and accuracy of the obtained results. Optimization by the Taguchi method, together with increased coverage of a wide range of different projects, leads to the efficient and successful completion of as many different software projects as possible. The main contributions of this dissertation are: constructing and identifying the best model for estimating effort and cost, selecting the best ANN architecture whose values converge fastest to the minimum magnitude of relative error, achieving a small number of experiments, and reducing software effort estimation time thanks to the convergence rate. Additional criteria and constraints are introduced to monitor and execute experiments using a precise algorithm applied to all three newly proposed models. In addition to monitoring the convergence rate of each ANN architecture, the influence of each model's input values on the change in its magnitude of relative error is also monitored. The models constructed in this way have been experimentally checked and confirmed several times on different sets of real projects and can be applied in practice; the obtained results indicate that the achieved error values are lower than those presented so far. The proposed models can therefore be reliably applied and used to estimate effort and costs not only for software development but also for projects in other areas of industry and science.
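    The role of Taguchi's orthogonal plans here is to cut the number of ANN training experiments: instead of a full factorial sweep, only the rows of an orthogonal array are trained. A minimal sketch using the standard L9(3^4) array with four illustrative hyperparameters; the actual factors and levels used in the dissertation are not reproduced here:

```python
# Taguchi L9 orthogonal array to select ANN training experiments:
# 9 runs instead of the 3^4 = 81 runs of a full factorial design.
# The hyperparameters and their levels are assumptions for this sketch.

L9 = [  # standard L9(3^4) array; entries are level indices 0..2
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

levels = {
    "hidden_units":  [8, 16, 32],
    "learning_rate": [0.001, 0.01, 0.1],
    "epochs":        [100, 300, 500],
    "batch_size":    [8, 16, 32],
}

factors = list(levels)
for run, row in enumerate(L9, start=1):
    config = {f: levels[f][lvl] for f, lvl in zip(factors, row)}
    print(f"run {run}: {config}")
    # Train the ANN with `config` and record its MMRE here; the row with
    # the lowest error points to the most promising level combination.
```

    Because every pair of columns in the L9 array contains each level combination exactly once, the nine runs still expose the main effect of each factor, which is what allows the small experiment count the abstract emphasizes.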