
    Cost estimation in agile development projects

    One of the key measures of the resilience of a project is its ability to reach completion on time and on budget, regardless of the turbulent and uncertain environment it may operate within. Cost estimation and tracking are therefore paramount when developing a system. Cost estimation has long been a difficult task in systems development, and although much research has focused on traditional methods, little is known about estimation in the agile method arena. This is ironic given that the reduction of cost and development time is the driving force behind the emergence of the agile method paradigm. This study investigates the applicability of current estimation techniques to more agile development approaches by focusing on four case studies of agile method use across different organisations. The study revealed that estimation inaccuracy occurred less frequently in these companies than under the traditional approaches they had used before. The frequency with which estimates are required on agile projects, typically at the beginning of each iteration, meant that the companies found estimation easier than when traditional approaches were used. The main estimation techniques used were expert knowledge and analogy to past projects. A number of recommendations can be drawn from the research: estimation models are not a necessary component of the process; fixed-price budgets can prove beneficial for both developers and customers; and experience and past project data should be documented and used to aid the estimation of subsequent projects.
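    The abstract names expert knowledge and analogy to past projects as the main estimation techniques. As a hedged illustration (not the case-study companies' actual procedure), the sketch below shows a minimal analogy-based estimate: pick the most similar documented past projects by a normalised distance over a couple of attributes and reuse their recorded effort. The attributes, function names, and data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PastProject:
    name: str
    team_size: float      # developers on the iteration
    story_points: float   # planned scope
    actual_effort: float  # person-days actually spent

def estimate_by_analogy(history, team_size, story_points, k=2):
    """Estimate effort as the mean actual effort of the k most similar past projects.

    Similarity here is a simple normalised Euclidean distance over two attributes;
    real analogy-based estimation would use more features and calibration.
    """
    max_team = max(p.team_size for p in history)
    max_points = max(p.story_points for p in history)

    def distance(p):
        d_team = (p.team_size - team_size) / max_team
        d_points = (p.story_points - story_points) / max_points
        return (d_team ** 2 + d_points ** 2) ** 0.5

    nearest = sorted(history, key=distance)[:k]
    return sum(p.actual_effort for p in nearest) / len(nearest)

# Hypothetical documented data from earlier iterations.
history = [
    PastProject("billing-v1", 4, 30, 52.0),
    PastProject("portal-v2", 6, 45, 80.0),
    PastProject("reports-v1", 3, 20, 33.0),
]
print(estimate_by_analogy(history, team_size=5, story_points=35))
```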

    Estimation accuracy in large IS programs - insights from a descriptive case study

    Information systems (IS) projects are notorious for severe cost overruns, which are often caused, among other things, by inaccurate ex-ante cost estimations. Against this background, this article presents a descriptive case study located in an IS transformation program at a major German financial services provider. The case study describes a multi-stage cost estimation process applied to 79 IS projects and determines the estimation accuracy of all of these projects using different accuracy measures: Estimating Quality Factor, Forecast Error, and Mean Absolute Percentage Error. Depending on the concrete accuracy measure used for the evaluation, the overall estimation quality of the program turns out to be good or at least average, which seems to be contrary to most studies in the scientific literature. However, the results also reveal how strongly the judgement of estimation accuracy depends on which measure is chosen for the evaluation. These differing judgements are discussed from a management perspective.
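    The three accuracy measures named in the abstract can be made concrete with a short sketch. The formulas below follow common definitions: forecast error as the signed relative deviation, MAPE as the mean absolute percentage error, and the Estimating Quality Factor simplified to the reciprocal of the mean absolute relative error. Whether the program in the case study used exactly these variants is an assumption, and the data are invented.

```python
def forecast_error(estimate, actual):
    """Signed relative deviation: positive means the estimate was too high."""
    return (estimate - actual) / actual

def mape(estimates, actuals):
    """Mean Absolute Percentage Error over a set of projects, in percent."""
    n = len(estimates)
    return 100.0 * sum(abs(e - a) / a for e, a in zip(estimates, actuals)) / n

def eqf(estimates, actuals):
    """Estimating Quality Factor, simplified here to the reciprocal of the
    mean absolute relative error (higher is better)."""
    n = len(estimates)
    mean_are = sum(abs(e - a) / a for e, a in zip(estimates, actuals)) / n
    return float("inf") if mean_are == 0 else 1.0 / mean_are

# Hypothetical ex-ante cost estimates vs. actual costs (e.g. in person-days).
est = [100, 250, 80, 400]
act = [120, 240, 95, 380]
print(mape(est, act))                                     # aggregate error in percent
print(eqf(est, act))                                      # quality factor for the portfolio
print([round(forecast_error(e, a), 3) for e, a in zip(est, act)])  # per-project deviation
```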

    Elicitation and management of user requirements in market-driven software development

    Market-driven software development companies experience challenges in requirements management that many traditional requirements engineering methods and techniques do not acknowledge. Large markets, limited contact with end users, and strong competition force the market-driven software development company to constantly invent new, marketable requirements, to release new versions frequently under the pressure of short time-to-market, and to bear both the technical and financial risks of development. This thesis presents empirical results from case studies in requirements elicitation and management at a software development company. The results include techniques to explore, understand, and handle bottlenecks in the requirements process, where requirements continuously arrive at a high rate from many different stakeholders. Through simulation of the requirements process, potential bottlenecks are identified at an early stage, so that fruitless improvement attempts may be avoided. Several techniques are evaluated and recommended to support the market-driven organisation in increasing software quality and avoiding process overload. It is shown that a quick and uncomplicated in-house usability evaluation technique, an improved heuristic evaluation, may be adequate for getting closer to customer satisfaction. Since needs and opportunities differ between markets, a distributed prioritisation technique is suggested to help the organisation pick the most cost-beneficial and customer-satisfying requirements for development. Finally, a technique based on automated natural language analysis is investigated with the aim of helping resolve congestion in the requirements engineering process while retaining ideas that may bring a competitive advantage.
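    The idea behind the distributed prioritisation, picking the most cost-beneficial and customer-satisfying requirements, can be illustrated with a simple cost-value ranking. The sketch below aggregates value scores from several stakeholders (for example, different markets) and ranks requirements by value per unit cost. The weighting scheme, names, and data are assumptions for illustration, not the thesis' actual technique.

```python
def prioritise(requirements, stakeholder_weights):
    """Rank requirements by weighted stakeholder value divided by cost.

    requirements: dict of name -> (cost, {stakeholder: value score})
    stakeholder_weights: relative importance of each stakeholder or market
    """
    ranked = []
    for name, (cost, values) in requirements.items():
        total_value = sum(stakeholder_weights[s] * v for s, v in values.items())
        ranked.append((total_value / cost, name))
    return [name for ratio, name in sorted(ranked, reverse=True)]

# Hypothetical requirements with cost (person-weeks) and per-market value scores.
reqs = {
    "single-sign-on":  (8, {"nordics": 7, "dach": 9}),
    "export-to-pdf":   (3, {"nordics": 5, "dach": 4}),
    "usage-dashboard": (5, {"nordics": 8, "dach": 6}),
}
weights = {"nordics": 0.6, "dach": 0.4}
print(prioritise(reqs, weights))  # highest value-per-cost first
```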

    A Subjective Effort Estimation Experiment

    Effort estimation is difficult in general, and in software development it becomes even more complicated if the software process is changed. In this paper a number of alternative interview-based effort estimation methods are presented. The main focus of the paper is an experiment in which software engineers were asked to use the different methods to estimate the effort it would take to perform a number of tasks. The results from these subjective estimates are compared with the actual outcome of performing the tasks.
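    The comparison described in the experiment, subjective estimates against actual effort for the same tasks, can be scored per method with the mean magnitude of relative error. The sketch below assumes per-task estimates and actuals are available; the method names and numbers are invented, not the paper's data.

```python
def mmre(estimates, actuals):
    """Mean Magnitude of Relative Error for one estimation method."""
    return sum(abs(e - a) / a for e, a in zip(estimates, actuals)) / len(actuals)

# Hypothetical per-task estimates from two interview-based methods vs. actual effort (hours).
actual            = [12,  8, 20,  5]
method_direct     = [10, 10, 25,  4]   # direct point estimates
method_decomposed = [13,  7, 18,  6]   # estimates built from sub-task breakdowns

for name, est in [("direct", method_direct), ("decomposed", method_decomposed)]:
    print(name, round(mmre(est, actual), 3))  # lower MMRE means closer to the actual outcome
```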

    Factors systematically associated with errors in subjective estimates of software development effort: The stability of expert judgment

    Software metric-based estimation of project development effort is most often performed by expert judgment rather than by using an empirically derived model (although such a model may be used by the expert to support their decision). One question that can be asked about these estimates is how stable they are with respect to characteristics of the development process and product. This stability can be assessed in relation to the degree to which the project has advanced over time, the type of module for which the estimate is being made, and the characteristics of that module. In this paper we examine a set of expert-derived estimates for the effort required to develop a collection of modules from a large health-care system. Statistical tests are used to identify relationships between the type (screen or report) and characteristics of modules and the likelihood of the associated development effort being under-estimated, approximately correct, or over-estimated. Distinct relationships are found which suggest that the estimation process being examined was not unbiased with respect to such characteristics.
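    The analysis described above labels each module's estimate as under-estimated, approximately correct, or over-estimated and tests whether that label is independent of module type (screen or report). The sketch below shows one way such a test could look, using a chi-square test of independence on an invented contingency table; the tolerance threshold and counts are assumptions, not the paper's data.

```python
from scipy.stats import chi2_contingency

def classify(estimate, actual, tolerance=0.1):
    """Label an estimate relative to actual effort within a +/-10% tolerance band."""
    relative = (estimate - actual) / actual
    if relative < -tolerance:
        return "under"
    if relative > tolerance:
        return "over"
    return "about right"

print(classify(110, 100))  # -> "about right"; applied per module to build the table below

# Hypothetical counts of modules per (type, estimation outcome) cell.
#                under  about right  over
contingency = [[  14,        9,        5],   # screen modules
               [   4,       11,       12]]   # report modules

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")   # a small p suggests outcome depends on module type
```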