    SOFTWARE DEVELOPMENT COST ESTIMATION METHODS AND RESEARCH TRENDS

    Early estimation of project size and completion time is essential for successful project planning and tracking. Multiple methods have been proposed to estimate software size and cost parameters. The suitability of an estimation method depends on many factors, such as the software application domain, product complexity, availability of historical data, and team expertise. The most common and widely used estimation techniques are described and analyzed. Current research trends in software cost estimation are also presented.
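    As a concrete illustration of the algorithmic techniques such surveys typically cover, the basic COCOMO effort equation can be sketched as follows. The organic-mode coefficients (a = 2.4, b = 1.05) are Boehm's published values; the 32 KLOC input is purely illustrative:

```python
# Basic COCOMO (organic mode): effort in person-months as a power law of size.
# a = 2.4 and b = 1.05 are Boehm's published organic-mode coefficients; the
# 32 KLOC figure below is illustrative only.
def basic_cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimated development effort in person-months."""
    return a * kloc ** b

effort = basic_cocomo_effort(32)  # a hypothetical 32 KLOC system
```

    The superlinear exponent (b > 1) encodes the diseconomy of scale that most estimation models assume: doubling the size more than doubles the effort.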

    A new estimation methodology for reusable component-based software development projects

    Bibliography: leaves 118-121.
    Estimating the duration of software development projects is a difficult task. There are many factors that can derail software projects. However, estimation forms the fundamental part of planning and costing any project and is therefore necessary. While several formal estimation methodologies exist, they all exhibit weaknesses in one form or another. The most established methodologies are based on early software development methods, and it is questionable whether they can still address more modern development practices such as reusable component-based programming. Some researchers believe they cannot and have proposed new methodologies that attempt to do so. What is needed, therefore, is a methodology that takes modern component-based development practices into account and, as a result, provides acceptable accuracy for the software organisation. This dissertation attempts to satisfy both of these requirements.

    A COMPARISON OF ALBRECHT'S FUNCTION POINT AND SYMONS' MARK II METRICS

    Software system size provides a basis for software cost estimation and management during software development. The most widely used product size metric at the specification level is Albrecht's Function Point Analysis (FPA). Symons has suggested an alternative to this metric called the Mark II metric. This metric is simpler, more easily scalable, and better takes into account the complexity of internal processing. Moreover, it suggests different size values in cases where the measured systems differ in terms of system interfaces. One problem in using these metrics has been that there are no tools that can calculate them during the specification phase. To alleviate this, we demonstrate how these metrics can be automatically calculated from Structured Analysis descriptions. Another problem has been that there are no reliable comparisons of these metrics based on sufficiently large statistical samples of system size measures. In this paper we address this problem by carrying out preliminary comparisons of the two metrics. The analysis is based on a randomly generated statistical sample of dataflow diagrams. These diagrams are automatically analyzed with our prototype measurement system using both FPA and the Mark II metric. The statistical analysis of the results shows that Mark II correlates reasonably well with Function Points if some adjustments are made to the Mark II metric. In line with Symons's discussion, our analysis points out that the strength of the correlation depends on the measured system type. Our results also show that useful size metrics can be derived for higher-level specifications and that these metrics can be easily automated in CASE tools. Because the obtained results are based on simulation, they must in the future be corroborated with real-life industrial data.
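    For readers unfamiliar with the two counting schemes, a minimal sketch of both sizing formulas follows. The IFPUG weights are the standard average-complexity values and the Mark II weights (0.58 / 1.66 / 0.26) are Symons' industry-calibrated constants; the component counts are invented for illustration and are not from the paper's dataset:

```python
# IFPUG FPA: sum weighted counts of five component types (here, all assumed
# average complexity). Mark II: weight the input data element types, entity
# references, and output data element types of the system's transactions.
IFPUG_AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def ifpug_ufp(counts):
    """Unadjusted Function Points from average-complexity component counts."""
    return sum(IFPUG_AVG_WEIGHTS[kind] * n for kind, n in counts.items())

def mark2_fp(input_dets, entity_refs, output_dets):
    """Symons' Mark II functional size over all transactions."""
    return 0.58 * input_dets + 1.66 * entity_refs + 0.26 * output_dets

# Illustrative counts for one small system measured both ways.
ufp = ifpug_ufp({"EI": 5, "EO": 3, "EQ": 2, "ILF": 4, "EIF": 1})
mk2 = mark2_fp(input_dets=40, entity_refs=12, output_dets=55)
```

    The structural difference the paper exploits is visible here: FPA weights whole components by coarse complexity classes, while Mark II counts finer-grained elements, which is why the correlation between the two depends on system type.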

    Functional Size Measurement Tool-based Approach for Mobile Game

    Nowadays, software effort estimation plays an important role in software project management due to its extensive use in industry to monitor progress and performance, determine overall productivity, and assist in project planning. After the success of methods such as IFPUG Function Point Analysis, Mark II Function Point Analysis, and COSMIC Full Function Points, several other extension methods have been introduced for adoption in software projects. Despite its efficiency in measuring software cost, software effort estimation unfortunately faces several issues: it requires knowledge, effort, and a significant amount of time to conduct the measurement, which somewhat diminishes the advantages of this approach. This paper demonstrates a functional size measurement tool, named UML Point tool, that applies the concept of IFPUG Function Point Analysis directly to Unified Modeling Language (UML) models. The tool decodes the UML model of a mobile game requirement from a UML eXchange Format (UXF) file and extracts the diagrams into component complexity, object interface complexity, and sequence diagram complexity, according to the defined measurement rules. The UML Point tool then automatically computes the functional size, effort, time, human resources, and total development cost of the mobile game. This paper also provides a simple case study to validate the tool. The initial results suggest that the tool can improve estimation accuracy for mobile game application development and is reliable enough to be applied in the mobile game industry.

    Cocomo II as productivity measurement: a case study at KBC.

    Software productivity is generally measured as the ratio of size over effort, whereby several techniques exist to measure the size. In this paper, we propose an innovative approach: using an estimation model as the productivity measurement. This approach is applied in a case study at the ICT department of a bank and insurance company. The estimation model, in this case Cocomo II, is used as the norm against which the productivity of application development projects is judged. This research report describes both the set-up of the measurement environment and the measurement results. To gain insight into the measurement data, we developed a report that makes it possible to identify productivity improvement areas in the development process of the case-study company.
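    The approach can be sketched as comparing actual effort against the model's nominal estimate. The constants A = 2.94 and B = 0.91 are the published COCOMO II.2000 values; the scale factors and effort multipliers are left as illustrative inputs, not the calibration used in the case study:

```python
# Using an estimation model as a productivity norm: compare the model's
# nominal effort with the effort actually spent. A = 2.94 and B = 0.91 are
# the published COCOMO II.2000 constants; the scale factors and effort
# multipliers passed in are illustrative, not the case study's calibration.
A, B = 2.94, 0.91

def cocomo2_effort(size_ksloc, scale_factors, effort_multipliers):
    """Nominal COCOMO II effort in person-months."""
    exponent = B + 0.01 * sum(scale_factors)
    effort = A * size_ksloc ** exponent
    for em in effort_multipliers:
        effort *= em
    return effort

def productivity_index(actual_pm, size_ksloc, scale_factors, effort_multipliers):
    """> 1: the project beat the model's norm; < 1: it fell short."""
    return cocomo2_effort(size_ksloc, scale_factors, effort_multipliers) / actual_pm
```

    Used this way, the model absorbs the size-effort nonlinearity, so projects of very different sizes can be compared against a single norm instead of a raw size/effort ratio.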

    Optimization of indoor air quality towards the control of mould formation by Taguchi method

    The formation of mould in an indoor environment is closely related to poor Indoor Air Quality (IAQ), which can lead to various adverse health effects such as Sick Building Syndrome (SBS) and Building Related Illness (BRI). Hence, this study investigated the relationship between mould formation and IAQ parameters in the FKAAS building. The optimization of physical IAQ parameters, namely air temperature (A), relative humidity (B), and air movement (C), was conducted by the Taguchi Method with an L9 Orthogonal Array (OA) at 3 different levels. The response output measured was the level of carbon dioxide (CO2), and the noise factor was the time of day (morning and evening). The data obtained were checked against the Malaysian Standard (ICOP-2010) and ASHRAE Standard 55-2013 to verify the IAQ contamination level. The investigation found that the optimized parameters are within the acceptable range of the standards and that the most significant factor for IAQ was air movement, followed by relative humidity and air temperature. The best optimized parameter setting is (A:1; B:3; C:3), i.e., an air temperature of 25.77 °C, 61% relative humidity, and air movement of 0.195 m/s. In conclusion, the Taguchi method proved to be a powerful tool for generating robust physical IAQ parameters, regardless of time changes. In addition, the IAQ parameters clearly influence the formation of mould, as evidenced by various signs of visible mould formation, which indicate an unhealthy state for the occupants.
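    In a Taguchi analysis of this kind, factor settings are ranked by a signal-to-noise ratio computed over the noise-factor replicates; for a response like the CO2 level, the smaller-the-better form applies. A minimal sketch follows; the replicate readings are hypothetical, not the study's measurements:

```python
import math

# Taguchi smaller-the-better signal-to-noise ratio, computed per orthogonal-
# array run over its noise-factor replicates (here, morning/evening readings).
# A higher S/N means a setting that keeps the response low AND stable.
def sn_smaller_is_better(replicates):
    """S/N = -10 * log10(mean(y^2)) over the replicate responses."""
    return -10 * math.log10(sum(y * y for y in replicates) / len(replicates))

# Hypothetical CO2 readings (ppm) for one L9 run, morning and evening.
sn = sn_smaller_is_better([610.0, 655.0])
```

    Averaging these S/N values per factor level, as the study does for A, B, and C, identifies both the most significant factor and the most robust level of each.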

    ESTIMATING EFFORT FOR LOW-CODE APPLICATIONS

    Two issues continually plaguing the software industry are software size calculation and project effort estimation. Incorrect estimates may lead to inappropriate allocation of resources (people), shortage of time, insufficient funds, and possibly project failure. The purpose of this study is to explore current methods of software effort estimation and their applicability to low-code application development. The study seeks to answer the research question: Can the current methods be used to estimate effort for applications built with low-code platforms? The goal is to analyze some of the popular estimation methods to see if they can be applied to low-code application development.

    A Lightweight Size Estimation Approach for Embedded System using COSMIC Functional Size Measurement

    Functional Size Measurement (FSM) is an important component of a software project: it provides information for estimating the effort required to develop the measured software. Embedded software is time-consuming to develop, and COSMIC FSM can be applied to obtain a more accurate functional size for it. The traditional Function Point methods were designed to measure only the business application domain and are problematic in the real-time domain. The COSMIC Functional Size Measurement (FSM) method, by contrast, is designed to measure both domains. Design diagrams such as UML and SysML, together with a well-defined FSM procedure, must be used to accurately measure the functional size of an embedded system. We previously developed a generation model based on the SysML metamodel, illustrated with an elevator control system. In this paper, we apply the generation model, that is, the classification of the instance level of objects based on the UML metamodel. We then present mapping rules between the generation model and COSMIC FSM to estimate the functional size of embedded software, with a cooker system as a case study. This paper also proposes a lightweight generation method for COSMIC FSM using the generation model.
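    Independent of the paper's generation model, the core of COSMIC sizing is that every Entry, Exit, Read, or Write data movement contributes one COSMIC Function Point (CFP), summed per functional process. A minimal sketch, with made-up elevator-style functional processes:

```python
# COSMIC FSM in miniature: size = number of data movements. Each movement is
# an Entry (from a functional user), Exit (to one), Read, or Write (to/from
# persistent storage), and each counts exactly 1 CFP.
COSMIC_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(functional_processes):
    """Total CFP over a dict {process_name: [data movement types]}."""
    total = 0
    for moves in functional_processes.values():
        assert all(m in COSMIC_MOVEMENTS for m in moves), "unknown movement"
        total += len(moves)  # 1 CFP per data movement
    return total

# Hypothetical elevator-style processes, not the paper's case study.
size = cosmic_size({
    "request floor": ["Entry", "Read", "Write", "Exit"],
    "open door": ["Entry", "Read", "Exit"],
})
```

    Mapping rules like the paper's essentially automate the identification of these movements from model elements, so the counting step itself stays this simple.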

    Towards making functional size measurement easily usable in practice

    Functional Size Measurement methods, like the IFPUG Function Point Analysis and COSMIC methods, are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. We then modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) evaluate existing simplified functional size measurement methods and derive our own simplified model; 2) explore whether simplified methods can be used in various stages of modeling and evaluate their accuracy; 3) analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process.
    Our research revealed the following. 1) In general it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results similar to those reported in the literature. All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for real-time and non-real-time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort. As the models grow in completeness and detail, the measures increase in accuracy. Developers that use UML for requirements modeling can obtain early estimates of an application's size at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the product increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated with object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes, and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). Moreover, object-oriented measures can be obtained automatically from models, thus dramatically decreasing the measurement effort in comparison with functional size measurement.
    In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria. Our research has expanded the knowledge of how to simplify the methods for measuring the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analyses that can be replicated by other researchers, to increase the reliability and generality of the results.
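    One common analytical route for an FPA-to-COSMIC conversion is to fit a regression line on projects measured with both methods. The sketch below uses ordinary least squares on invented paired measurements; a usable conversion would have to be fitted on a real dual-measured dataset:

```python
# UFP -> CFP conversion via ordinary least squares on dual-measured projects.
# The paired measurements below are invented for illustration only.
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

ufp = [100.0, 150.0, 200.0, 300.0]   # IFPUG sizes (hypothetical)
cfp = [95.0, 160.0, 210.0, 330.0]    # COSMIC sizes of the same projects

slope, intercept = fit_line(ufp, cfp)

def ufp_to_cfp(u):
    """Predict a COSMIC size from an IFPUG size using the fitted line."""
    return slope * u + intercept
```

    Because the two methods count different things (weighted components versus data movements), such conversion lines are dataset- and domain-specific, which is why analytical criteria for when they transfer matter.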