
    Optimizing complexity weight parameter of use case points estimation using particle swarm optimization

    Among algorithmic frameworks for software development effort estimation, Use Case Points is one of the most widely used. Use Case Points is a well-known estimation framework designed mainly for object-oriented projects, and it uses the use case complexity weight as its essential parameter. The parameter is calculated from the number of actors and transactions of the use case. However, the use case complexity weight is discontinuous, which can sometimes result in inaccurate measurements and abrupt classification of use cases. The objective of this work is to investigate the potential of integrating particle swarm optimization (PSO) with the Use Case Points framework. The optimizer is used to tune the modified use case complexity weight parameter. We designed and conducted an experiment based on real-life data sets from three software houses. The proposed model's accuracy and performance are compared with other published results using the following evaluation metrics: standardized accuracy, effect size, mean balanced residual error, mean inverted balanced residual error, and mean absolute error. The benchmark models are polynomial regression, multiple linear regression, weighted case-based reasoning with PSO, fuzzy use case points, and standard Use Case Points. Experimental results show that the proposed model achieves the best standardized accuracy, 99.27%, and an effect size of 1.15 over the benchmark models. The results of our study are promising for researchers and practitioners because the proposed model is actually estimating, not guessing, and generates meaningful estimates with statistically and practically significant results.
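    The abstract does not spell out the optimization loop, so the following is a minimal sketch, in Python, of how a plain PSO could tune a continuous complexity-weight parameterization against historical effort data. The effort model (`estimate_effort`), the weight bounds, and the randomly generated project data are illustrative assumptions, not the authors' actual model or data set.

```python
import numpy as np

# Placeholder historical data: transactions per use case for 30 projects x 10 use cases,
# plus actual effort in person-hours. The paper uses real data from three software houses.
rng = np.random.default_rng(0)
transactions = rng.integers(1, 15, size=(30, 10))
actual_effort = rng.uniform(500, 5000, size=30)

def estimate_effort(weights, transactions):
    """Toy continuous effort model: a linear complexity weight per use case
    plus an hours-per-point rate, all taken from the particle's position."""
    slope, intercept, hours_per_point = weights
    ucp = (slope * transactions + intercept).sum(axis=1)  # continuous use case weights
    return ucp * hours_per_point

def fitness(weights):
    """Mean absolute error between estimated and actual effort (to be minimized)."""
    return np.mean(np.abs(estimate_effort(weights, transactions) - actual_effort))

# Plain PSO: particles move toward their own best and the swarm's best position.
n_particles, n_dims, iters = 20, 3, 200
lo, hi = np.array([0.1, 1.0, 1.0]), np.array([3.0, 15.0, 30.0])
pos = rng.uniform(lo, hi, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((n_particles, n_dims)), rng.random((n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best weights:", gbest, "MAE:", pbest_val.min())
```

    The fitness here is the mean absolute error named among the paper's metrics; in practice the swarm would be run against the real project data and the result validated with standardized accuracy and effect size as well.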

    Estimation of Web Development Projects: Use Case Points Compared with Expert Estimates, Function Points, COCOMO II and WEBMO

    This document is my master's thesis at the Department of Informatics, University of Oslo. It is a multiple case study focused on estimating web-based software projects. For consultants in the software industry, the bidding process is one of the most important processes: at this point the customer chooses which supplier they want to develop the planned software. Two important parts of the supplier's offer are price and time, so the supplier has to make an estimate that is as good and realistic as possible in order to be chosen. When the customer chooses a supplier, the estimates are studied in detail, and the customer's objective is to receive the best system as fast and as cheaply as possible. Because of the competition between suppliers, this may lead to incorrect estimates of both time and cost. It is therefore useful to study an estimation model that helps the customer understand the price and development time of the planned software, and to discuss which estimation approach a customer can understand.

    Estimates are important when starting a software project. A customer needs to know time and cost to see whether they can afford the software. For suppliers, estimates are important for calculating income and how long the project will take; this way they also know when the developers will be free to start new projects. Estimates are often based on the experience of experts. One challenge with such estimates is that they have to be made early, even before the suppliers know the details of the system. It is therefore important to know how to estimate when only the functionality described in the user specification is known, and it is useful to study how the user specification, especially the functional requirements, can be used when estimating.

    In May 2003 the Software Engineering group at Simula Research Laboratory began research on the development of web-based software. Four suppliers developed a research database, called "The Simula Database of Empirical Studies", while the researchers studied the development. This thesis examines the estimation process in that development. The main focus is to study how an estimation approach based on use cases may support both customers and suppliers in the bidding process. The use case approach is based on a system's functional requirements written as use cases. There is great interest in the approach, but it still needs to be adapted to different kinds of projects. This thesis studies how a use case based estimation approach may be used to estimate web-based software by testing it on the projects and comparing it to other, more established methods: Function Points, COCOMO II early design, and WEBMO. The most important aspects discussed are the information requirements, how the approaches' estimates compare with the experts' estimates and with actual effort, the quality of the resulting plans, and when and by whom the approaches can be used. The study shows that the size measure in the use case points approach is easier to use than function points for people without technical knowledge, for instance a customer in a software project. This presupposes well-structured use cases, and the study suggests guidelines for structuring use cases when they are very detailed from the beginning.

    Further, the study shows that cost drivers related to code quality, such as maintenance, reuse, documentation, complexity, and experience, affect the effort to a greater extent, and it examines how such cost drivers can be used in web projects. In this case the estimation approaches estimated as well as, or better than, the experts. This indicates that the use case points approach may support both customer and developer in a bidding process where a use case model describes the functional requirements.
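    Since both abstracts above build on the standard use case points calculation, a short sketch of Karner's original formulation may help. The weights, factor formulas, and the 20 person-hours-per-point productivity rate below are the commonly cited textbook defaults, not values taken from either study, and the example project is invented.

```python
# Karner-style use case points with the commonly cited default weights.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_weight(transactions: int) -> int:
    """Classic step-wise use case weight: <=3 transactions -> 5, 4-7 -> 10, >7 -> 15."""
    if transactions <= 3:
        return 5
    if transactions <= 7:
        return 10
    return 15

def use_case_points(actors, use_case_transactions, tfactor, efactor):
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)                    # unadjusted actor weight
    uucw = sum(use_case_weight(t) for t in use_case_transactions)  # unadjusted use case weight
    tcf = 0.6 + 0.01 * tfactor   # technical complexity factor (sum of 13 weighted factors)
    ecf = 1.4 - 0.03 * efactor   # environmental complexity factor (sum of 8 weighted factors)
    return (uaw + uucw) * tcf * ecf

# Hypothetical small web project: three average actors, five use cases of varying size.
ucp = use_case_points(["average"] * 3, [2, 4, 5, 8, 12], tfactor=30, efactor=17)
effort_hours = ucp * 20  # Karner's often-quoted 20 person-hours per use case point
print(f"UCP = {ucp:.1f}, estimated effort = {effort_hours:.0f} person-hours")
```

    The step function in `use_case_weight` is exactly the discontinuity that the PSO-based paper above replaces with a continuous weight.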

    Incorporating statistical and machine learning techniques into the optimization of correction factors for software development effort estimation

    Accurate effort estimation is necessary for efficient management of software development projects, as it relates to human resource management. Ensemble methods, which combine multiple statistical and machine learning techniques, are more robust, reliable, and accurate effort estimation techniques. This study develops a stacking ensemble model based on optimized correction factors by integrating seven statistical and machine learning techniques (k-nearest neighbors, random forest, support vector regression, multilayer perceptron, gradient boosting, linear regression, and decision tree). The grid search optimization method is used to obtain valid search ranges and optimal configuration values, allowing more accurate estimation. We conducted experiments to compare the proposed method with related methods, such as use case points-based single methods, optimized correction factor-based single methods, and ensemble methods. The estimation accuracy of the methods was evaluated using statistical tests and unbiased performance measures on a total of four datasets, demonstrating the effectiveness of the proposed method more clearly. The proposed method maintained its estimation accuracy across the four experimental datasets and gave the best results in terms of the sum of squared errors, mean absolute error, root mean square error, mean balanced relative error, mean inverted balanced relative error, median magnitude of relative error, and percentage of predictions within 25% (PRED(0.25)). The p-value of the t-test showed that the proposed method is statistically superior to the other methods in terms of estimation accuracy. The results show that the proposed method is a comprehensive approach to improving estimation accuracy and minimizing project risks in the early stages of software development. Funding: Faculty of Applied Informatics, Tomas Bata University in Zlin (IGA/CebiaTech/2022/001, RVO/FAI/2021/002).
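    The abstract names the base learners and the grid search step but not the concrete pipeline, so here is a minimal scikit-learn sketch of a stacking regressor tuned with grid search. The synthetic feature matrix `X` and effort vector `y`, the choice of linear regression as the meta-learner, and the tiny parameter grid are assumptions for illustration rather than the configuration used in the study.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# Placeholder data: rows are projects, columns are size/correction-factor features.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(120, 5))
y = 2000 * X[:, 0] + 500 * X[:, 1] + rng.normal(0, 100, size=120)  # synthetic effort

base_learners = [
    ("knn", KNeighborsRegressor()),
    ("rf", RandomForestRegressor(random_state=0)),
    ("svr", SVR()),
    ("mlp", MLPRegressor(max_iter=2000, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("dt", DecisionTreeRegressor(random_state=0)),
]
stack = StackingRegressor(estimators=base_learners, final_estimator=LinearRegression())

# A deliberately small grid; the paper tunes valid ranges per learner.
param_grid = {
    "knn__n_neighbors": [3, 5],
    "rf__n_estimators": [100, 300],
    "gb__learning_rate": [0.05, 0.1],
}
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
search = GridSearchCV(stack, param_grid, scoring="neg_mean_absolute_error", cv=3)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test MAE:", mean_absolute_error(y_test, search.best_estimator_.predict(X_test)))
```

    In this sketch linear regression serves as the meta-learner that combines the base predictions; the study's exact meta-learner and hyperparameter ranges are not given in the abstract.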

    Software project economics: A roadmap

    The objective of this paper is to consider research progress in the field of software project economics with a view to identifying important challenges and promising research directions. I argue that this is an important sub-discipline, since it underpins any cost-benefit analysis used to justify the resourcing, or otherwise, of a software project. To accomplish this I conducted a bibliometric analysis of peer-reviewed research articles to identify the major areas of activity. My results indicate that the primary goal of more accurate cost prediction systems remains largely unachieved. However, there are a number of new and promising avenues of research, including how we can combine results from primary studies, integration of multiple predictions, and placing greater emphasis upon the human aspects of prediction tasks. I conclude that the field is likely to remain very challenging due to the people-centric nature of software engineering, since it is in essence a design task. Nevertheless, the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous.

    The consistency of empirical comparisons of regression and analogy-based software project cost prediction

    OBJECTIVE – to determine the consistency within and between results in empirical studies of software engineering cost estimation. We focus on regression and analogy techniques as these are commonly used. METHOD – we conducted an exhaustive search using predefined inclusion and exclusion criteria and identified 67 journal papers and 104 conference papers. From this sample we identified 11 journal papers and 9 conference papers that used both methods. RESULTS – our analysis found that about 25% of studies were internally inconclusive. We also found approximately equal evidence in favour of, and against, analogy-based methods. CONCLUSIONS – we confirm the lack of consistency in the findings and argue that this inconsistent pattern from 20 different studies comparing regression and analogy is somewhat disturbing. It suggests that we need to ask more detailed questions than just: "What is the best prediction system?"

    Predicting and Evaluating Software Model Growth in the Automotive Industry

    The size of a software artifact influences software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might no longer be able to cope, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting when the size will grow to the level where maintenance is needed prevents unexpected effort and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit the prerequisites and expectations of practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from the literature, including both statistical and machine learning approaches. Furthermore, we elicit practitioners' expectations of predictions using a survey and stakeholder workshops. At the same time, we measure the software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches on the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting that takes stakeholder expectations into account. We show that the approaches provide significantly different results in terms of prediction accuracy and that the statistical approaches fit our data best.
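    As an illustration of the kind of statistical approach the study reports as fitting its data best, the sketch below fits a linear growth trend to an artifact's mined size history and extrapolates when a maintenance threshold would be crossed. The size series, units, and the 400 KB threshold are invented for the example; the paper itself compares several statistical and machine learning predictors.

```python
import numpy as np

# Placeholder size history: (days since first revision, artifact size in KB).
days = np.array([0, 90, 180, 270, 360, 450, 540, 630, 720])
size_kb = np.array([120, 135, 151, 160, 178, 190, 205, 214, 231])

# Ordinary least squares fit of a linear growth trend.
slope, intercept = np.polyfit(days, size_kb, deg=1)

THRESHOLD_KB = 400  # hypothetical size at which tooling/maintenance problems start
if slope > 0:
    days_to_threshold = (THRESHOLD_KB - intercept) / slope
    print(f"growth ~ {slope:.2f} KB/day; threshold reached around day {days_to_threshold:.0f}")
else:
    print("no positive growth trend; threshold not expected to be reached")
```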