
    User Story Software Estimation: A Simplification of Software Estimation Model with Distributed Extreme Programming Estimation Technique

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the human resources required, and gain better visibility into the execution and process model. Many estimation techniques work well under certain conditions or at certain steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP Estimation provides a basic technique for teams that use the eXtreme Programming method in onsite or distributed development. To the writers' knowledge, this is the first estimation technique applied to the agile eXtreme Programming method.

    SOFTWARE DEVELOPMENT COST ESTIMATION USING THE COCOMO II METHOD FOR A DEVELOPMENT ACTIVITY REPORTING INFORMATION SYSTEM

    Nowadays, software is critically important to individuals and companies in many matters. Software design and development are carried out against future or otherwise uncertain conditions, so it is important to understand these conditions in order to calculate the cost and duration of a software project. This research discusses a cost estimation method for software projects: COCOMO II (Constructive Cost Model). COCOMO II has three submodels, i.e. Application Composition, Early Design, and Post Architecture, which make it possible to estimate with either incomplete or complete information. Using the COCOMO II estimation model, the total effort needed to complete a software project, in person-months, and the total development duration, in months, can be determined. By adapting standard project values for the region and period, the nominal cost of a software project can be determined. A case study conducted on the Development Activity Reporting Information System concludes that the COCOMO II method is suitable for calculating cost (effort) and schedule (time) estimates. The size used as the basis of calculation is SLOC (Source Lines Of Code). In this research, estimated SLOC is derived by calculating UFP (Unadjusted Function Points), while actual SLOC is determined by counting the source code lines of the former project. Keywords: Software Cost Estimation, COCOMO II, Post Architecture
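
    The COCOMO II effort and schedule equations underlying this abstract are well documented. Below is a minimal sketch, assuming the published COCOMO II.2000 Post Architecture calibration constants (A = 2.94, B = 0.91, C = 3.67, D = 0.28) and illustrative nominal ratings for the five scale factors; the function names and example inputs are ours, not the paper's.

    import math

    A, B, C, D = 2.94, 0.91, 3.67, 0.28   # COCOMO II.2000 calibration constants

    def effort_pm(ksloc, scale_factors, effort_multipliers):
        # PM = A * Size^E * product(EM_i), with E = B + 0.01 * sum(SF_j)
        e = B + 0.01 * sum(scale_factors)
        return A * ksloc ** e * math.prod(effort_multipliers)

    def duration_months(pm, scale_factors):
        # TDEV = C * PM^(D + 0.2 * (E - B))
        e = B + 0.01 * sum(scale_factors)
        return C * pm ** (D + 0.2 * (e - B))

    sf = [3.72, 3.04, 4.24, 3.29, 4.68]   # illustrative nominal scale factor ratings
    pm = effort_pm(10, sf, [1.0] * 17)    # 10 KSLOC, all 17 effort multipliers nominal
    print(f"effort = {pm:.1f} PM, duration = {duration_months(pm, sf):.1f} months")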

    Software Cost Estimation through Entity Relationship Model

    Abstract: Software cost estimation is essential for efficient control and management of the whole software development process. Today, the Constructive Cost Model (COCOMO II) is very popular for estimating software cost. In the Constructive Cost Model, lines of code and function points are used to calculate the software size, but these reflect the implementation stages; in the early stages of software development it is not easy to estimate software cost. The entity relationship model (ER model) is very useful in requirements analysis for data-intensive systems. This paper highlights the use of the Entity Relationship Model for software cost estimation. A new measure, Pathway Density, is introduced. Using Pathway Density and other factors, several regression models are built for estimating software cost. Thus, in this paper, the estimated cost of software is based on the Entity Relationship Model.
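
    The abstract does not define Pathway Density precisely, so any concrete example is necessarily speculative. A hypothetical sketch of the regression step, assuming Pathway Density is a single numeric feature extracted from the ER model and that effort is fit by ordinary least squares over synthetic illustrative data:

    import numpy as np

    # Synthetic illustrative data: (pathway_density, effort in person-months).
    # "Pathway Density" here is a stand-in for whatever ER-model-derived
    # measure the paper actually defines.
    density = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 4.6])
    effort = np.array([14.0, 21.0, 30.0, 38.0, 52.0, 60.0])

    # Fit effort = b0 + b1 * density by ordinary least squares.
    b1, b0 = np.polyfit(density, effort, 1)
    print(f"effort ~ {b0:.2f} + {b1:.2f} * pathway_density")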

    Function Point Metric Calculation and Sensitivity Analysis

    Managing software during development and maintenance is one of the most troubling aspects of modern information technology. Documented historical records indicate that poorly managed projects can run into cost overruns and schedule slippages and, in some cases, may be abandoned. Careful attention to quality control by using trained specialists, modern software estimation tools, and effective planning tools can immunize software development companies against some of the common sources of software disaster. The software development process can thus be managed more thoroughly or even controlled. The size of a software document at various stages of development can be measured using a number of alternative methods. Lines of code is probably the most widely known size measure, but it has no universally accepted definition. Function points is one of the better-known and widely used metrics for measuring software size. The objective of this thesis was to study and analyze the sensitivity of the function point metric. The work done consists of collecting the size, in terms of the number of Lines of Code (LOC), of various programs; converting the lines of code to function points; calculating function points from the program documentation independently of the previous calculation; comparing the estimated function points with the actual ones; and investigating the sensitivity of the function point parameters by plotting a graph. Lines of Code were measured with the CSIZE software tool, and function points were calculated with the COSMOS software tool. It was found that among all the function point parameters (i.e., external inputs, external outputs, internal logical files, external interface files, and external inquiries), internal logical files are more sensitive than the rest.
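
    The function point parameters named in this abstract follow the standard Albrecht/IFPUG counting scheme. A minimal sketch of that calculation, using the standard complexity weights and the value adjustment factor 0.65 + 0.01 * TDI (TDI being the sum of the 14 general system characteristics, each rated 0-5); the example counts are illustrative:

    # Standard IFPUG complexity weights for the five component types.
    WEIGHTS = {
        "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs
        "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
        "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
        "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
        "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
    }

    def unadjusted_fp(counts):
        """counts: {component_type: {complexity: number_of_components}}"""
        return sum(WEIGHTS[t][c] * n
                   for t, by_cx in counts.items()
                   for c, n in by_cx.items())

    def adjusted_fp(ufp, gsc_ratings):
        tdi = sum(gsc_ratings)              # 14 ratings, each in 0..5
        return ufp * (0.65 + 0.01 * tdi)

    ufp = unadjusted_fp({"EI": {"avg": 10}, "EO": {"avg": 6}, "EQ": {"low": 4},
                         "ILF": {"avg": 5}, "EIF": {"low": 2}})
    print(ufp, adjusted_fp(ufp, [3] * 14))  # 142 UFP; VAF = 1.07 with all-3 ratings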

    A generic model for software size estimation based on component partitioning : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Software Engineering

    Software size estimation is a central but under-researched area of software engineering economics. Most current cost estimation models use an estimated end-product size, in lines of code, as one of their most important input parameters. Software size, in a different sense, is also important for comparative productivity studies, often using a derived size measure such as function points. The research reported in this thesis is an investigation into software size estimation and the calibration of derived software size measures with each other and with product size measures. A critical review of current software size metrics is presented, together with a classification of these metrics into textual metrics, object counts, vector metrics and composite metrics. Within a review of current approaches to software size estimation, which includes a detailed analysis of Function Point Analysis-like approaches, a new classification of software size estimation methods is presented, based on the type of structural partitioning of a specification or design that must be completed before the method can be used. This classification clearly reveals a number of fundamental concepts inherent in current size estimation methods. Traditional classifications of size estimation approaches are also discussed in relation to the new classification. A generic decomposition and summation model for software sizing is presented. Systems are classified into different categories and, within each category, into appropriate component type partitions. Each component type has a different size estimation algorithm based on size drivers appropriate to that particular type. Component size estimates are summed to produce partial or total system size estimates, as required. The model can be regarded as a generalization of a number of Function Point Analysis-like methods in current use. Provision is made both for comparative productivity studies using derived size measures, such as function points, and for end-product size estimates using primitive size measures, such as lines of code. The nature and importance of calibration of derived measures for comparative studies is developed. System adjustment factors are also examined and a model for their analysis and application presented. The model overcomes most of the recent criticisms that have been levelled at Function Point Analysis-like methods. A model instance derived from the generic sizing model is applied to a major case study of a system of administrative applications, in which a new Function Point Analysis-type metric suited to a particular software development technology is derived, calibrated and compared with Function Point Analysis. The comparison reveals much of the anatomy of Function Point Analysis and its many deficiencies when applied to this case study. The model instance is at least partially validated by application to a sample of components from later incremental developments within the same software development technology. The performance of the model instance for this technology is very good in its own right and also very much better than Function Point Analysis. The model is also applied to three other business software development technologies using the IFIP (International Federation for Information Processing) standard inventory control and purchasing reference system. The purpose of this study is to demonstrate the applicability of the generic model to several quite different software technologies.
    Again, the three derived model instances show an excellent fit to the available data. This research shows that a software size estimation model which takes explicit advantage of the particular characteristics of the software technology used can give better size estimates than methods that do not take into account the component partitions characteristic of that technology.
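
    The decomposition and summation idea described above lends itself to a compact illustration. A hypothetical sketch, in which the component types and their size-driver coefficients are invented for illustration and are not the thesis's calibrated values:

    # Hypothetical per-type size models of the form size = a + b * driver.
    TYPE_MODELS = {
        "report": (20, 6),   # driver: number of fields printed
        "screen": (35, 9),   # driver: number of fields displayed/entered
        "update": (50, 12),  # driver: number of entity types updated
    }

    def component_size(ctype, driver_value):
        a, b = TYPE_MODELS[ctype]
        return a + b * driver_value

    def system_size(components):
        """components: list of (component_type, driver_value) pairs."""
        return sum(component_size(t, d) for t, d in components)

    # Partition the system into typed components, estimate each, and sum.
    print(system_size([("report", 12), ("screen", 8), ("update", 3)]))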

    Estimation model for software testing

    Testing of software applications and assurance of compliance have become an essential part of the Information Technology (IT) governance of organizations. Over the years, software testing has evolved into a specialization with its own practices and body of knowledge. Test estimation consists of estimating the effort and working out the cost for a particular level of testing, using various methods, tools, and techniques. An incorrect estimate often leads to an inadequate amount of testing, which in turn can lead to failures of software systems once they are deployed. This research work first established the state of the art of software test estimation and then proposed a Unified Framework for Software Test Estimation. Using this framework, a number of detailed estimation models were designed for functional testing. The ISBSG database was used to investigate the estimation of software testing. Analysis of the ISBSG data revealed three test productivity patterns representing economies and diseconomies of scale, and the characteristics of the corresponding projects were investigated. The three project groups related to the three productivity patterns were found to be statistically significant, and were characterised by application domain, team size, elapsed time, and the rigour of verification and validation throughout development. Within each project group, the variations in test effort could be explained by the activities carried out during development and the processes adopted for testing, in addition to functional size. Two new independent variables, the quality of the development processes (DevQ) and the quality of the testing processes (TestQ), were identified as influential in the estimation models. Portfolios of estimation models were built for different data sets using combinations of the three independent variables. At estimation time, an estimator can choose the project group by mapping the characteristics of the project to be estimated to the attributes of the project group, in order to choose the model closest to it. The quality of each model was evaluated using established criteria such as R2, adjusted R2, MRE, MedMRE and Mallows' Cp. Models were compared on their predictive performance, using new criteria proposed in this research work. Test estimation models using functional size measured in COSMIC Function Points exhibited better quality and produced more accurate estimates than those using functional size measured in IFPUG Function Points. A prototype tool incorporating the portfolios of estimation models has been developed in the statistical programming language R; it can be used by industry and academia for estimating test effort.
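
    The accuracy criteria cited here have standard definitions: MRE = |actual - estimated| / actual, MMRE and MedMRE are its mean and median over a set of projects, and Pred(l) is the fraction of estimates with MRE <= l. A minimal sketch with illustrative effort values:

    from statistics import mean, median

    def mre(actual, estimated):
        # Magnitude of relative error for one project.
        return abs(actual - estimated) / actual

    def evaluate(actuals, estimates, level=0.25):
        mres = [mre(a, e) for a, e in zip(actuals, estimates)]
        return {
            "MMRE": mean(mres),
            "MedMRE": median(mres),
            f"Pred({level})": sum(m <= level for m in mres) / len(mres),
        }

    # Illustrative actual vs. estimated test efforts in person-hours.
    print(evaluate([120, 340, 80, 510], [100, 390, 95, 470]))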

    Implementation of COSMIC Function Points (CFP) as primary input to COCOMO II: Study of conversion to line of code using regression and support vector regression models

    In COCOMO II, the primary input for estimating development effort in person-months (PM), duration, and cost is the size of the software. Until now, there have been two ways to obtain the size: (1) estimating it from the software's lines of code, and (2) estimating it from unadjusted function points (UFP), one of the functional size measurements (FSM). In this study, we add a new way to obtain the size as the primary input to COCOMO II, namely COSMIC function points (CFP). CFP has several advantages over other FSMs, including UFP. Like UFP, CFP is first converted to LOC, so the conversion equation must be obtained first. We applied four models to obtain the conversion functions: ordinary least squares regression (OLSR) and support vector regression (SVR) with linear, polynomial, and Gaussian kernel functions. The four models were applied to a dataset of small-scale business application software written in Java. The results showed that PM estimation using the CFP model as the primary input produced better accuracy in terms of MMRE and Pred(0.25), namely 17%-19% and 67%-80%, than the UFP model in COCOMO II, at 135% and 10%.
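
    The four conversion models named in the abstract are standard regression techniques. A sketch of how such a CFP-to-LOC conversion could be fit with scikit-learn, assuming untuned default hyperparameters and synthetic illustrative data rather than the paper's dataset:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR

    cfp = np.array([[12], [20], [33], [41], [58], [75]])  # COSMIC function points
    loc = np.array([480, 820, 1300, 1710, 2400, 3100])    # measured Java LOC

    # OLSR plus SVR with linear, polynomial, and Gaussian (RBF) kernels.
    # Defaults are untuned; a real study would search over C, epsilon, gamma.
    models = {
        "OLSR": LinearRegression(),
        "SVR-linear": SVR(kernel="linear"),
        "SVR-poly": SVR(kernel="poly", degree=2),
        "SVR-rbf": SVR(kernel="rbf"),
    }

    for name, model in models.items():
        model.fit(cfp, loc)
        pred = model.predict([[50]])[0]   # convert a 50 CFP system to LOC
        print(f"{name}: 50 CFP -> {pred:.0f} LOC")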

    A COMPARISON OF ALBRECHT'S FUNCTION POINT AND SYMONS' MARK II METRICS

    Software system size provides a basis for software cost estimation and management during software development. The most widely used product size metric at the specification level is Albrecht's Function Point Analysis (FPA). Symons has suggested an alternative to this metric called the Mark II metric. This metric is simpler, more easily scalable, and better takes into account the complexity of internal processing. Moreover, it yields different size values in cases where the measured systems differ in terms of system interfaces. One problem in using these metrics has been that there are no tools that can calculate them during the specification phase. To alleviate this, we demonstrate how these metrics can be calculated automatically from Structured Analysis descriptions. Another problem has been that there are no reliable comparisons of these metrics based on sufficiently large statistical samples of system size measures. In this paper we address this problem by carrying out preliminary comparisons of these metrics. The analysis is based on a randomly generated statistical sample of dataflow diagrams. These diagrams are analyzed automatically by our prototype measurement system using both FPA and the Mark II metric. The statistical analysis of the results shows that Mark II correlates reasonably well with Function Points if some adjustments are made to the Mark II metric. In line with Symons's discussion, our analysis points out that the strength of the correlation depends on the type of system measured. Our results also show that useful size metrics can be derived for higher-level specifications and that these metrics can easily be automated in CASE tools. Because the results obtained are based on simulation, they must in future be corroborated with real-life industrial data.
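
    For reference, the Mark II metric sizes each logical transaction from its input data element types (Ni), entity types referenced (Ne), and output data element types (No), with the commonly used industry calibration weighting these 0.58, 1.66 and 0.26 respectively. A minimal sketch with illustrative transactions (the adjusted variant used in the paper may differ):

    def mark2_size(transactions):
        """transactions: list of (Ni, Ne, No) tuples, one per logical transaction."""
        return sum(0.58 * ni + 1.66 * ne + 0.26 * no
                   for ni, ne, no in transactions)

    # Illustrative system with three logical transactions.
    print(f"{mark2_size([(5, 2, 8), (12, 4, 3), (7, 1, 15)]):.2f} Mk II FP")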

    A comparative case study of programming language expansion ratios : a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology in Computing Technology at Massey University

    An effective size estimation tool must allow an estimate to be obtained early enough to be useful. Difficulties have been observed in using the traditional lines of code (LOC) measure in software sizing, much of which is due to the need for detailed design information to be available before an accurate estimate can be achieved. This does not allow the result to be obtained early in the software development process. Moreover, the inherent language-dependency of LOC tends to restrict its use. An alternative measure using Function Point Analysis, developed by Albrecht, has been found to be an effective tool for sizing purposes and allows early sizing. However, the function point measure does not yet have a sufficient historical base of information for it to be used successfully in all cases with existing models of the software development process. Because lines of code already have a sense of "universality" as the de facto basic measure of software size, they can serve as a useful extension to function points. Language Expansion Ratios are seen as the key to providing such an extension by bridging the gap between function points and lines of code. Several sizing models have made use of expansion ratios in an effort to provide an equivalent size in lines of code, in anticipation of its use in productivity studies and related cost models. However, their use has been associated with wide ranges of variability. The purpose of this thesis is to study Language Expansion Ratios, and the factors affecting them, for several languages, based on a standard case study. This thesis surveys the prevailing issues of software size measurement and describes the role and importance of language expansion ratios. It presents the standard case study used and the methodology for the empirical study. The experimental results of measurements of the actual system are analysed, and these form the basis for conclusions on the validity and applicability of the expansion ratios studied. This research shows that the use of Language Expansion Ratios is valid but, in its present form, inadequate. This was found to be due to the weighting factors associated with the function values obtained for the different functional categories of the system.
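
    The core use of a Language Expansion Ratio is a simple multiplication: estimated SLOC = function points x ratio. A minimal sketch using commonly cited approximate "backfiring" gearing factors (SLOC per function point); these published averages are illustrative and are not the ratios measured in this thesis:

    # Approximate, commonly cited SLOC-per-function-point gearing factors.
    EXPANSION_RATIO = {
        "assembly": 320,
        "C": 128,
        "COBOL": 107,
        "C++": 53,
        "Java": 53,
    }

    def estimate_sloc(function_points, language):
        return function_points * EXPANSION_RATIO[language]

    for lang in ("C", "COBOL", "Java"):
        print(f"{lang}: 250 FP -> {estimate_sloc(250, lang)} SLOC")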