5 research outputs found

    A Review: Effort Estimation Model for Scrum Projects using Supervised Learning

    Effort estimation is a critical component of the Agile methodology, helping cross-functional teams plan and prioritize their work. Agile approaches have emerged in recent years as a more adaptable means of creating software because they consistently produce a workable product that is developed incrementally, preventing projects from failing entirely. Agile software development enables teams to collaborate directly with clients and adjust swiftly to changing requirements, producing a result that is distinct, incremental, and targeted. It has been noted that present Scrum estimation approaches rely heavily on historical data from previous projects and on expert opinion, while existing agile estimation methods such as analogy and planning poker become unreliable in the absence of historical data and experts. User Stories are used to estimate effort in the Agile approach, which has been adopted by 60–70% of software businesses. This study's goal is to review the strategies and techniques used to gauge and forecast effort. Additionally, it reviews the supervised machine learning methods best suited for this predictive analysis.
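
    As a concrete illustration of the supervised approach this review surveys, the sketch below fits a regression model to user-story features. It is a minimal sketch under stated assumptions: the features (story points, acceptance criteria, team velocity), the toy data, and the choice of a random forest are hypothetical, not taken from the reviewed papers.

        # Minimal sketch: supervised effort estimation from user-story features.
        # All feature names and values are hypothetical placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        # Hypothetical features per user story: story points, number of
        # acceptance criteria, team velocity at estimation time.
        X = np.array([[3, 2, 25], [5, 4, 22], [8, 6, 20], [2, 1, 27],
                      [13, 8, 18], [5, 3, 24], [8, 5, 21], [3, 2, 26]])
        y = np.array([6.0, 11.5, 20.0, 3.5, 34.0, 10.0, 19.0, 5.5])  # person-hours

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))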

    Optimizing complexity weight parameter of use case points estimation using particle swarm optimization

    Among algorithmic frameworks for software development effort estimation, Use Case Points is one of the most widely used. Use Case Points is a well-known estimation framework designed mainly for object-oriented projects. It uses the use case complexity weight as its essential parameter, calculated from the number of actors and transactions in the use case. Nevertheless, the use case complexity weight is discontinuous, which can result in inaccurate measurements and abrupt classification of the use case. The objective of this work is to investigate the potential of integrating particle swarm optimization (PSO) with the Use Case Points framework; the optimizer is used to tune the modified use case complexity weight parameter. We designed and conducted an experiment based on a real-life dataset from three software houses. The proposed model's accuracy and performance are compared with other published results using standardized accuracy, effect size, mean balanced residual error, mean inverted balanced residual error, and mean absolute error. The benchmark models are polynomial regression, multiple linear regression, weighted case-based reasoning with PSO, fuzzy use case points, and standard Use Case Points. Experimental results show that the proposed model achieves the best standardized accuracy, 99.27%, and an effect size of 1.15 over the benchmark models. These results are promising for researchers and practitioners because the proposed model is actually estimating rather than guessing, generating meaningful estimates that are statistically and practically significant.
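
    To make the optimization step concrete, below is a minimal hand-rolled PSO sketch that tunes the three use-case complexity weights against hypothetical historical projects. The project counts, effort values, and productivity factor are assumptions, and the sketch simplifies UCP to the unadjusted use case weight (ignoring actor weights and adjustment factors); the paper's actual model differs in its parameterization.

        # PSO sketch: tune (simple, average, complex) use-case weights to
        # minimize MAE against hypothetical historical effort data.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical history: counts of (simple, average, complex) use
        # cases and actual effort (person-hours) per past project.
        counts = np.array([[10, 5, 2], [6, 8, 3], [12, 4, 1], [8, 7, 4]])
        actual = np.array([1900.0, 2400.0, 1700.0, 2600.0])
        HOURS_PER_UCP = 20.0  # assumed productivity factor

        def mae(weights):
            # Simplification: effort ~ unadjusted use case weight * factor.
            ucp = counts @ weights
            return np.abs(ucp * HOURS_PER_UCP - actual).mean()

        n_particles, dims, iters = 30, 3, 200
        pos = rng.uniform(1.0, 20.0, (n_particles, dims))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.apply_along_axis(mae, 1, pos)
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dims))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 1.0, 20.0)
            vals = np.apply_along_axis(mae, 1, pos)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("optimized weights:", gbest.round(2), "MAE:", mae(gbest).round(1))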

    Optimization of use case point through the use of metaheuristic algorithm in estimating software effort

    The Use Case Points estimation framework relies on complexity weight parameters to estimate software development effort. However, the discontinuous parameters lead to abrupt weight classification and inaccurate estimation. Several studies have addressed this weakness with various approaches, including fuzzy logic, regression analysis, and optimization techniques. Nevertheless, the use of optimization techniques to determine use case weight parameter values has yet to be extensively explored and has the potential to further improve accuracy. Motivated by this, the current research examines several metaheuristic search-based algorithms: genetic algorithms, the Firefly algorithm, the Reptile search algorithm, particle swarm optimization, and the Grey Wolf optimizer. The experimental investigation was carried out on the Silhavy UCP estimation dataset, which is publicly available and contains data on 71 projects from three software houses. We then compared the performance of the models based on the metaheuristic algorithms. The findings indicate that the Firefly algorithm outperforms the others on five accuracy metrics: mean absolute error, mean balanced relative error, mean inverted balanced relative error, standardized accuracy, and effect size.
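
    The five accuracy metrics these studies share can be computed as in the sketch below. The definitions follow common usage in the effort-estimation literature (e.g., Shepperd and MacDonell's standardized accuracy and effect size against random guessing); the toy predictions and actuals are placeholders, and the permutation-based baseline is one common approximation.

        # Sketch: the five accuracy metrics on toy prediction data.
        import numpy as np

        rng = np.random.default_rng(0)
        actual = np.array([1200.0, 2300.0, 900.0, 1800.0, 2700.0])
        pred   = np.array([1100.0, 2500.0, 1000.0, 1700.0, 2600.0])

        err = np.abs(pred - actual)
        mae   = err.mean()                              # mean absolute error
        mbre  = (err / np.minimum(pred, actual)).mean() # mean balanced relative error
        mibre = (err / np.maximum(pred, actual)).mean() # mean inverted balanced relative error

        # Random-guessing baseline: predict each project with another
        # project's actual effort, repeated many times (an approximation).
        runs = np.array([np.abs(rng.permutation(actual) - actual).mean()
                         for _ in range(1000)])
        sa = (1 - mae / runs.mean()) * 100              # standardized accuracy (%)
        effect_size = abs(mae - runs.mean()) / runs.std()

        print(f"MAE={mae:.1f} MBRE={mbre:.3f} MIBRE={mibre:.3f} "
              f"SA={sa:.1f}% delta={effect_size:.2f}")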

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on smart and green technologies in the fields of energy, electronics, communications, computers, and control, and gives innovators a venue to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, who present their ongoing research activities and foster research relations. It provides opportunities to exchange new ideas, applications, and experiences in the field of smart technologies and to find global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    A genetic algorithm based framework for software effort prediction

    Background: Several prediction models have been proposed in the literature, using different techniques and obtaining different results in different contexts. The need for accurate effort predictions is one of the most critical and complex issues in the software industry. The automated selection and combination of techniques in alternative ways could improve the overall accuracy of prediction models. Objectives: In this study, we validate an automated genetic framework and then conduct a sensitivity analysis across different genetic configurations. We then compare the framework with a random-guessing baseline and with an exhaustive framework, and finally investigate the performance of the best learning schemes. Methods: In total, six hundred learning schemes, combining eight data preprocessors, five attribute selectors, and fifteen modeling techniques, make up our search space. The genetic framework, through elitism, selects the best learning schemes automatically. The best learning scheme here means the combination of data preprocessing + attribute selection + learning algorithm with the highest correlation coefficient possible. The selected learning schemes are applied to eight datasets extracted from the ISBSG R12 dataset. Results: The genetic framework performs as well as an exhaustive framework. The analysis of the standardized accuracy (SA) measure revealed that all best learning schemes selected by the genetic framework outperform the random-guessing baseline by 45–80%. The sensitivity analysis confirms stability across different genetic configurations. Conclusions: The genetic framework is stable, performs better than random guessing, and is as good as an exhaustive framework. Our results confirm previous findings in the field: simple regression techniques with transformations can perform as well as nonlinear techniques, and ensembles of learning machines such as SMO, M5P, or M5R can optimize effort predictions.
    Funding: Universidad de Costa Rica [834-B5-A18]; Universidad Nacional de Costa Rica (UNA); Ministerio de Ciencia, Tecnología y Telecomunicaciones (MICITT), Costa Rica.
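
    To make the genetic search concrete, the sketch below evolves learning schemes, i.e. combinations of data preprocessor + attribute selector + model, with elitism, scored by cross-validated fit on synthetic data. The component lists, GA settings, and data are illustrative assumptions; the paper's framework searches 600 schemes over ISBSG data.

        # GA sketch: search preprocessor + attribute selector + model
        # combinations with elitism; all components are placeholders.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler, MinMaxScaler
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVR
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X, y = make_regression(n_samples=80, n_features=10, noise=10.0,
                               random_state=0)

        PREPROCESSORS = [StandardScaler, MinMaxScaler]
        SELECTORS = [3, 5, 10]  # k for SelectKBest
        MODELS = [LinearRegression, SVR, DecisionTreeRegressor]

        def fitness(genome):
            p, s, m = genome
            pipe = make_pipeline(PREPROCESSORS[p](),
                                 SelectKBest(f_regression, k=SELECTORS[s]),
                                 MODELS[m]())
            # Correlation-like fitness via cross-validated R^2.
            return cross_val_score(pipe, X, y, cv=3, scoring="r2").mean()

        space = [len(PREPROCESSORS), len(SELECTORS), len(MODELS)]
        pop = [tuple(rng.integers(0, n) for n in space) for _ in range(8)]
        for _ in range(10):
            scored = sorted(pop, key=fitness, reverse=True)
            children = [scored[0]]  # elitism: keep the best scheme
            while len(children) < len(pop):
                a, b = rng.choice(len(scored) // 2, size=2)  # top-half parents
                child = [scored[a][i] if rng.random() < 0.5 else scored[b][i]
                         for i in range(3)]
                if rng.random() < 0.2:  # mutation
                    g = rng.integers(0, 3)
                    child[g] = int(rng.integers(0, space[g]))
                children.append(tuple(child))
            pop = children

        best = max(pop, key=fitness)
        print("best scheme:", PREPROCESSORS[best[0]].__name__,
              SELECTORS[best[1]], MODELS[best[2]].__name__)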