
    Software cost estimation

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be estimated? (4) What can software project management expect from SCE models, how accurate are estimates made using these kinds of models, and what are the pros and cons of cost estimation models?
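    The abstract surveys SCE models without giving their formulas; as an illustrative point of reference (not taken from this paper), the sketch below implements the classic basic-COCOMO relation effort = a · KLOC^b using the commonly quoted organic-mode coefficients a = 2.4 and b = 1.05.

```python
# Minimal sketch of a parametric cost model in the basic COCOMO style:
# effort (person-months) = a * KLOC ** b, with textbook "organic mode"
# coefficients a = 2.4, b = 1.05. Values are illustrative, not from the paper.

def cocomo_basic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimate development effort in person-months from size in KLOC."""
    return a * kloc ** b

if __name__ == "__main__":
    for size in (10, 50, 100):
        print(f"{size} KLOC -> {cocomo_basic_effort(size):.1f} person-months")
```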

    Surveying the factors that influence maintainability: research design

    We want to explore and analyse design decisions that influence the maintainability of software. Software maintainability is important because the effort expended on changes and fixes in software is a major cost driver. We take an empirical, qualitative approach by investigating cases where a change has cost more or less than comparable changes, and analysing the causes of those differences. We will use this analysis of causes as input to follow-up research in which the individual contributions of a selection of those causes will be quantitatively analysed.

    User Story Software Estimation: a Simplification of Software Estimation Model with Distributed Extreme Programming Estimation Technique

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of the execution and process model. Many estimation techniques work sufficiently well under certain conditions or at certain steps in software engineering, for example lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP Estimation provides a basic technique for teams using the eXtreme Programming method in onsite or distributed development. To the authors' knowledge, this is the first estimation technique applied to an agile method, namely eXtreme Programming.
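    The DXP Estimation technique itself is not detailed in this abstract; the following is only a minimal, hypothetical sketch of the kind of user-story estimation workflow such techniques build on, assuming story points and a historical team velocity (all names and numbers are invented).

```python
# Hedged sketch of a generic user-story estimation workflow (NOT the paper's
# DXP Estimation technique, whose details are not given here): sum story
# points and divide by the team's historical velocity to project iterations.

from dataclasses import dataclass

@dataclass
class UserStory:
    name: str
    points: int  # relative size agreed by the team, e.g. via planning poker

def iterations_needed(stories, velocity):
    """Project how many iterations the backlog needs at a given velocity."""
    total_points = sum(s.points for s in stories)
    return total_points / velocity

if __name__ == "__main__":
    backlog = [UserStory("login", 3), UserStory("checkout", 8), UserStory("reports", 5)]
    print(f"~{iterations_needed(backlog, velocity=6):.1f} iterations")
```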

    The consistency of empirical comparisons of regression and analogy-based software project cost prediction

    OBJECTIVE - to determine the consistency within and between results in empirical studies of software engineering cost estimation. We focus on regression and analogy techniques as these are commonly used. METHOD - we conducted an exhaustive search using predefined inclusion and exclusion criteria and identified 67 journal papers and 104 conference papers. From this sample we identified 11 journal papers and 9 conference papers that used both methods. RESULTS - our analysis found that about 25% of studies were internally inconclusive. We also found that there is approximately equal evidence in favour of, and against, analogy-based methods. CONCLUSIONS - we confirm the lack of consistency in the findings and argue that this inconsistent pattern from 20 different studies comparing regression and analogy is somewhat disturbing. It suggests that we need to ask more detailed questions than just: "What is the best prediction system?"
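    For readers unfamiliar with the two families being compared, here is a hedged, self-contained sketch (with invented data, not the studies' data sets) of a log-log regression predictor alongside an analogy-based, k-nearest-neighbour predictor.

```python
# Hedged sketch of the two families of predictors compared in the study:
# (1) log-log linear regression of effort on size, and (2) analogy (k-NN):
# predict by averaging the effort of the k most similar past projects.
# The tiny data set below is invented purely for illustration.

import math

past = [  # (size in function points, effort in person-hours) -- made-up data
    (100, 800), (150, 1300), (200, 1700), (300, 2600), (400, 3900),
]

def regression_predict(size: float) -> float:
    """Ordinary least squares on log(size) vs log(effort)."""
    xs = [math.log(s) for s, _ in past]
    ys = [math.log(e) for _, e in past]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return math.exp(a + b * math.log(size))

def analogy_predict(size: float, k: int = 2) -> float:
    """Mean effort of the k past projects closest in size."""
    nearest = sorted(past, key=lambda p: abs(p[0] - size))[:k]
    return sum(e for _, e in nearest) / k

if __name__ == "__main__":
    for s in (180, 350):
        print(s, round(regression_predict(s)), round(analogy_predict(s)))
```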

    Predicting and Evaluating Software Model Growth in the Automotive Industry

    The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit the prerequisites and expectations of practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from the literature, including both statistical and machine learning approaches. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure the software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
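    The paper's concrete models are not reproduced in this abstract; as a hedged illustration of the simplest statistical approach in this family, the sketch below fits a linear trend to size-over-time points mined from a revision history and extrapolates when a size threshold would be crossed (data, units, and threshold are invented).

```python
# Hedged sketch of a basic statistical growth predictor: fit a linear trend
# to (time, size) points and extrapolate the first time a size threshold is
# reached. All numbers below are invented for illustration only.

def fit_linear(points):
    """Least-squares slope and intercept for (time, size) pairs."""
    n = len(points)
    mx = sum(t for t, _ in points) / n
    my = sum(s for _, s in points) / n
    slope = sum((t - mx) * (s - my) for t, s in points) / sum((t - mx) ** 2 for t, _ in points)
    return slope, my - slope * mx

def weeks_until_threshold(points, threshold):
    """Extrapolate the fitted trend to the first time the threshold is hit."""
    slope, intercept = fit_linear(points)
    if slope <= 0:
        return None  # size is not growing; threshold never reached
    return (threshold - intercept) / slope

if __name__ == "__main__":
    history = [(0, 1200), (10, 1450), (20, 1800), (30, 2050)]  # (week, model size)
    print(weeks_until_threshold(history, threshold=5000))
```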

    An estimate of necessary effort in the development of software projects

    International Workshop on Intelligent Technologies for Software Engineering (WITSE'04), 19th IEEE International Conference on Automated Software Engineering (Linz, Austria, September 20th - 25th, 2004). The estimation of effort in the development of software projects has already been studied in the field of software engineering. For this purpose, different measures, such as lines of code and function points, generally used to relate software size to project cost (effort), have been employed. In this work we present a research project in this field, using machine learning techniques to predict software project cost. Several public data sets are used. The analysed data sets only relate the effort invested in the development of software projects to the size of the resulting code; in this sense, the data used are poor. Despite that, the results obtained are good, because they improve on those obtained in previous analyses. In order to get results closer to reality, we should find larger data sets that take into account more variables, thus offering more possibilities to obtain solutions in a more efficient way.
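    As a hedged illustration of how effort predictions on such size/effort data are typically scored (not code from this project), the sketch below computes MMRE and PRED(25), two accuracy indicators commonly reported in effort-estimation studies, on invented numbers.

```python
# Hedged sketch of the standard accuracy indicators used when a predictor is
# evaluated on size/effort data: MMRE (mean magnitude of relative error) and
# PRED(25) (fraction of predictions within 25% of the actual effort).
# The actual and predicted values below are invented for illustration.

def mmre_and_pred25(actuals, predictions):
    """Compute MMRE and PRED(25) for paired actual/predicted efforts."""
    mres = [abs(a - p) / a for a, p in zip(actuals, predictions)]
    mmre = sum(mres) / len(mres)
    pred25 = sum(1 for m in mres if m <= 0.25) / len(mres)
    return mmre, pred25

if __name__ == "__main__":
    actual_effort = [800, 1300, 1700, 2600, 3900]   # person-hours (made up)
    predicted = [900, 1200, 1900, 2400, 3300]       # model output (made up)
    mmre, pred25 = mmre_and_pred25(actual_effort, predicted)
    print(f"MMRE = {mmre:.2f}, PRED(25) = {pred25:.0%}")
```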