
    Scope Management of Non-Functional Requirements

    In order to meet commitments in software projects, a realistic assessment must be made of project scope. Such an assessment relies on the availability of knowledge of the user-defined project requirements and their effort estimates, priorities, and risk. This knowledge enables analysts, managers, and software engineers to identify the most significant requirements from the list initially defined by the user. In practice, this scope assessment is applied to the Functional Requirements (FRs) provided by users, who are unaware of, or ignore, the Non-Functional Requirements (NFRs). This paper presents ongoing research which aims at managing NFRs during the software development process. Establishing the relative priority of each NFR, and obtaining a rough estimate of the effort and risk associated with it, is integral to the software development process and to resource management. Our work extends the taxonomy of the NFR framework by integrating the concept of the "hardgoal". A functional size measure of NFRs is applied to facilitate the effort estimation process. The functional size measurement method we have chosen is COSMIC-FFP, which is theoretically sound and a de facto standard in the software industry.
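    As a rough illustration of how a COSMIC-style functional size measure works, the sketch below counts the four kinds of data movements (Entry, Exit, Read, Write) defined by the method, each worth 1 CFP. The functional processes and movement counts are hypothetical, not taken from the paper.

```python
# Minimal sketch of COSMIC-style functional size measurement:
# each data movement (Entry, Exit, Read, Write) contributes 1 CFP.
# The process names and movement counts below are illustrative only.

from dataclasses import dataclass

@dataclass
class FunctionalProcess:
    name: str
    entries: int   # data movements from the user into the process
    exits: int     # data movements from the process back to the user
    reads: int     # data movements from persistent storage
    writes: int    # data movements to persistent storage

    def cfp(self) -> int:
        # In COSMIC, every data movement counts as exactly 1 CFP.
        return self.entries + self.exits + self.reads + self.writes

processes = [
    FunctionalProcess("record user data", entries=1, exits=1, reads=1, writes=1),
    FunctionalProcess("audit access log", entries=1, exits=2, reads=2, writes=1),
]

total_size = sum(p.cfp() for p in processes)
print(f"Total functional size: {total_size} CFP")
```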

    Software Development Cost Estimation Using the COCOMO II Method for a Development Activity Reporting Information System (Estimasi Biaya Pembuatan Perangkat Lunak Menggunakan Metode COCOMO II pada Sistem Informasi Pelaporan Kegiatan Pembangunan)

    Nowadays, software is critically important to individuals and companies in many areas. Software is designed and developed for future or specific conditions, so it is important to understand these conditions in order to calculate the cost and duration of a software project. This research discusses a cost estimation method for software projects: COCOMO II (Constructive Cost Model II). COCOMO II has three submodels, i.e. Application Composition, Early Design, and Post Architecture, which makes it possible to estimate under conditions of both incomplete and complete information. Using the COCOMO II estimation model, the total effort needed to complete a software project (in person-months) and the total development duration (in months) can be determined. By applying the standard project rates of the region at a given time, the nominal cost of a software project can be determined. A case study on a Development Activity Reporting Information System leads to the conclusion that the COCOMO II method is suitable for calculating cost (effort) and schedule (time) estimates. The size used as the basis of calculation is SLOC (Source Lines Of Code). In this research, estimated SLOC is derived by calculating UFP (Unadjusted Function Points), while actual SLOC is determined by counting the source code lines of the earlier project. Keywords: Software Cost Estimation, COCOMO II, Post Architecture
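    For reference, a minimal sketch of the COCOMO II Post-Architecture effort and schedule equations with the published COCOMO II.2000 calibration constants; the scale-factor ratings and effort multipliers passed in below are illustrative placeholders, not values from the case study.

```python
# A minimal sketch of the COCOMO II Post-Architecture model
# (COCOMO II.2000 calibration constants). The scale-factor and
# effort-multiplier values below are placeholders, not measured data.

A, B = 2.94, 0.91   # effort calibration constants
C, D = 3.67, 0.28   # schedule calibration constants

def cocomo2_post_architecture(ksloc, scale_factors, effort_multipliers):
    """Return (effort in person-months, schedule in months)."""
    e = B + 0.01 * sum(scale_factors)          # scale exponent
    effort = A * (ksloc ** e)
    for em in effort_multipliers:              # 17 cost drivers in the full model
        effort *= em
    f = D + 0.2 * (e - B)                      # schedule exponent
    tdev = C * (effort ** f)
    return effort, tdev

# Illustrative values: five nominal scale-factor ratings, all-nominal cost drivers.
effort_pm, months = cocomo2_post_architecture(
    ksloc=33.2,
    scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
    effort_multipliers=[1.0] * 17,
)
print(f"Effort: {effort_pm:.1f} person-months over {months:.1f} months")
```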

    Effort Estimation Development Model for Web-Based Mobile Application Using Fuzzy Logic

    Effort estimation is a crucial part of the software development process, because an inaccurate effort estimate can delay a project and affect its success. This research proposes an effort estimation model for web-based mobile applications developed using an object-oriented approach. In the proposed model, OOmFPWeb (a functional size measurement for object-oriented web applications), web metrics, and mobile characteristics for web-based mobile application size measurement are combined. The estimation process uses the Mamdani fuzzy logic method. To evaluate the proposed model, effort estimation using OOmFPWeb alone, as the variable that affects effort estimation for web-based mobile applications, is compared against the proposed model. The evaluation results show that effort estimation for web-based mobile applications with the proposed model is better than using OOmFPWeb alone.
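    The paper's rule base is not reproduced here, so the following is only a minimal pure-Python sketch of Mamdani inference for effort estimation: triangular membership functions, min implication, max aggregation, and centroid defuzzification. The single size input, the ranges, and the three rules are invented for illustration.

```python
# A minimal pure-Python sketch of Mamdani fuzzy inference for effort
# estimation. Membership function ranges, rules, and the single "size"
# input are illustrative assumptions, not the paper's model.

import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function evaluated over array x."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

effort_axis = np.linspace(0, 1000, 1001)   # person-hours (assumed range)

# Output fuzzy sets for effort.
effort_low  = trimf(effort_axis, 0, 150, 350)
effort_med  = trimf(effort_axis, 250, 500, 750)
effort_high = trimf(effort_axis, 650, 850, 1000)

def estimate_effort(size_points):
    # Fuzzify the input size against three illustrative sets.
    low  = trimf(np.array([size_points]), 0, 20, 60)[0]
    med  = trimf(np.array([size_points]), 40, 100, 160)[0]
    high = trimf(np.array([size_points]), 140, 200, 260)[0]

    # Three rules ("IF size is low THEN effort is low", etc.):
    # Mamdani implication is min, aggregation is max.
    aggregated = np.maximum.reduce([
        np.minimum(low,  effort_low),
        np.minimum(med,  effort_med),
        np.minimum(high, effort_high),
    ])

    # Centroid defuzzification.
    return (aggregated * effort_axis).sum() / (aggregated.sum() + 1e-9)

print(f"Estimated effort: {estimate_effort(120):.0f} person-hours")
```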

    A Principled Methodology: A Dozen Principles of Software Effort Estimation

    Software effort estimation (SEE) is the activity of estimating the total effort required to complete a software project. Correctly estimating the effort required for a software project is of vital importance for the competitiveness of organizations. Both under- and over-estimation lead to undesirable consequences. Under-estimation may result in overruns in budget and schedule, which in turn may cause the cancellation of projects, thereby wasting the entire effort spent until that point. Over-estimation may cause promising projects not to be funded, hence harming organizational competitiveness.

    Due to the significant role of SEE for software organizations, considerable research effort has been invested in SEE. Thanks to the accumulation of decades of prior research, today we are able to identify the core issues and search for the right principles to tackle pressing questions. For example, regardless of decades of work, we still lack concrete answers to important questions such as: What is the best SEE method? The introduced estimation methods make use of local data, yet not all companies have their own data, so: How can we handle the lack of local data? Common SEE methods take size attributes for granted, yet size attributes are costly and practitioners place very little trust in them. Hence, we ask: How can we avoid the use of size attributes? Collection of data, particularly dependent variable information (i.e. effort values), is costly: How can we find an essential subset of the SEE data sets? Finally, studies make use of sampling methods to justify a new method's performance on SEE data sets, yet the trade-off among different variants is ignored: How should we choose sampling methods for SEE experiments?

    This thesis is a rigorous investigation into identifying and tackling the pressing issues in SEE. Our findings rely on extensive experimentation performed with a large corpus of estimation techniques on a large set of public and proprietary data sets. We summarize our findings and industrial experience in the form of 12 principles: 1) Know your domain; 2) Let the Experts Talk; 3) Suspect your data; 4) Data Collection is Cyclic; 5) Use a Ranking Stability Indicator; 6) Assemble Superior Methods; 7) Weighting Analogies is Over-elaboration; 8) Use Easy-path Design; 9) Use Relevancy Filtering; 10) Use Outlier Pruning; 11) Combine Outlier and Synonym Pruning; 12) Be Aware of Sampling Method Trade-offs.
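    As one concrete illustration of a principle such as relevancy filtering (principle 9), the sketch below implements simple estimation by analogy: it filters the historical projects down to the k most similar ones and takes their median effort. The features, effort values, and choice of k are invented for illustration and are not the thesis's data.

```python
# A minimal sketch of estimation by analogy with relevancy filtering:
# keep only the k historical projects most similar to the new project,
# then estimate effort from their known efforts. All values illustrative.

import numpy as np

def estimate_by_analogy(history_x, history_effort, new_x, k=3):
    """history_x: (n, d) normalised feature matrix; history_effort: (n,)."""
    dists = np.linalg.norm(history_x - new_x, axis=1)   # Euclidean distance
    nearest = np.argsort(dists)[:k]                     # relevancy filtering
    return float(np.median(history_effort[nearest]))    # median is outlier-robust

# Tiny illustrative dataset: [size_kloc, team_experience] per past project.
X = np.array([[10, 3], [12, 4], [50, 2], [48, 3], [25, 5]], dtype=float)
effort = np.array([120, 150, 900, 870, 300], dtype=float)  # illustrative units

# Normalise features to [0, 1] so no attribute dominates the distance.
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = (X - lo) / (hi - lo)
new = (np.array([11.0, 3.5]) - lo) / (hi - lo)

print(f"Analogy-based estimate: {estimate_by_analogy(Xn, effort, new):.0f}")
```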

    Research Paper on Software Cost Estimation Using Fuzzy Logic

    Software cost estimation is one of the biggest challenges nowadays due to tremendous competition: a bid must be close to the actual cost to win the contract, and if the cost estimate is too low or too high the organization suffers, which is why accurate estimation is crucial. One of the important issues in software project management is the accurate and reliable estimation of software time, cost, and manpower, especially in the early phases of software development. Software attributes usually have properties of uncertainty and vagueness when they are measured by human judgment. A software cost estimation model that incorporates fuzzy logic can overcome this uncertainty and vagueness. However, determining suitable fuzzy rule sets for the fuzzy inference system plays an important role in producing accurate and reliable software estimates. The objective of our research was to examine the application of fuzzy logic to software cost estimation in order to produce more accurate results. Fuzzy logic offers various membership functions, for example Gaussian, triangular, and trapezoidal. By trial and error we found that the triangular membership function (MF) yields the lowest MRE and MMRE, and the MMRE should be less than 25%; in our research this value came to around 15%, which is fair for estimation purposes. Cost can be found using the equation Cost = Effort * (Payment per Month) once the monthly payment rate is known. Therefore, the effort needed for a particular software project is estimated using fuzzy logic. In our research, the NASA (93) data set was used to evaluate fuzzy-logic COCOMO II; the size of code and actual effort were taken from this data set. After comparing the results, we found that our proposed technique is far superior to the baseline work.
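    The accuracy criteria used here, MRE per project, MMRE over the dataset, and the customary 25% threshold, can be written down directly; the effort values below are illustrative, not the NASA (93) figures.

```python
# A minimal sketch of the accuracy metrics the paper reports: MRE per
# project and MMRE over a dataset, with the customary MMRE <= 25%
# acceptance threshold. The effort values below are illustrative.

def mre(actual, predicted):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    """Mean MRE across all projects."""
    return sum(mre(a, p) for a, p in zip(actuals, predictions)) / len(actuals)

actual_effort    = [115.8, 96.0, 79.0, 90.8]   # illustrative person-months
predicted_effort = [108.3, 88.1, 91.2, 84.5]

score = mmre(actual_effort, predicted_effort)
print(f"MMRE = {score:.1%} -> {'acceptable' if score <= 0.25 else 'rework the model'}")
```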

    Metode Point Kriging Untuk Estimasi Sumberdaya Bijih Besi (Fe) Menggunakan Data Assay (3D) Pada Daerah Tanjung Buli Kabupaten Halmahera Timur (Point Kriging Method for Estimating Iron Ore (Fe) Resources Using Assay Data (3D) in the Tanjung Buli Area, East Halmahera Regency)

    This research was conducted in the Tanjung Buli area, East Halmahera Regency. The area holds iron ore resources at stages ranging from exploration prospects to exploitation, with data obtained from boreholes spaced 25 meters apart. The point kriging method, implemented in the SGeMS software (Stanford Geostatistical Modeling Software), was used to assess the iron ore resource with a block model of dimensions 41x23x205 and unit blocks of 25x25x1 (in meters). Point kriging yields estimated Fe values and kriging variances, which were then used to classify the resource as measured, indicated, or inferred based on the relative kriging standard deviation (RKSD). Cross-validation of the Fe assay data against the Fe estimates from point kriging gave a correlation coefficient of 0.89, indicating a strong correlation between the assay data and the estimates. The assessed Fe resources amount to 3,081,125 tonnes measured, 6,878,563 tonnes indicated, and 97,781,563 tonnes inferred. The spatial pattern shows that high Fe grades, above 14.40% Fe, are dispersed randomly (variably) in small, localized block units.
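    A minimal sketch of RKSD-based classification as described above; note that the abstract does not state the exact RKSD cut-offs used, so the thresholds below (0.3 and 0.5) are assumptions, as are the sample block values.

```python
# A minimal sketch of resource classification by relative kriging
# standard deviation (RKSD = kriging std dev / estimated grade).
# The class thresholds and block values below are assumptions.

import math

def classify_block(fe_estimate, kriging_variance):
    rksd = math.sqrt(kriging_variance) / fe_estimate
    if rksd <= 0.3:          # assumed threshold for "measured"
        return "measured"
    elif rksd <= 0.5:        # assumed threshold for "indicated"
        return "indicated"
    return "inferred"

# Illustrative block estimates: (Fe %, kriging variance).
blocks = [(18.2, 4.1), (15.6, 30.0), (12.9, 60.5)]
for fe, var in blocks:
    print(f"Fe={fe:5.1f}%  ->  {classify_block(fe, var)}")
```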

    A generic model for software size estimation based on component partitioning : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Software Engineering

    Software size estimation is a central but under-researched area of software engineering economics. Most current cost estimation models use an estimated end-product size, in lines of code, as one of their most important input parameters. Software size, in a different sense, is also important for comparative productivity studies, often using a derived size measure, such as function points. The research reported in this thesis is an investigation into software size estimation and the calibration of derived software size measures with each other and with product size measures. A critical review of current software size metrics is presented, together with a classification of these metrics into textual metrics, object counts, vector metrics and composite metrics. Within a review of current approaches to software size estimation, which includes a detailed analysis of Function Point Analysis-like approaches, a new classification of software size estimation methods is presented, based on the type of structural partitioning of a specification or design that must be completed before the method can be used. This classification clearly reveals a number of fundamental concepts inherent in current size estimation methods. Traditional classifications of size estimation approaches are also discussed in relation to the new classification.

    A generic decomposition and summation model for software sizing is presented. Systems are classified into different categories and, within each category, into appropriate component type partitions. Each component type has a different size estimation algorithm based on size drivers appropriate to that particular type. Component size estimates are summed to produce partial or total system size estimates, as required. The model can be regarded as a generalization of a number of Function Point Analysis-like methods in current use. Provision is made both for comparative productivity studies using derived size measures, such as function points, and for end-product size estimates using primitive size measures, such as lines of code. The nature and importance of calibrating derived measures for comparative studies is developed. System adjustment factors are also examined and a model for their analysis and application presented. The model overcomes most of the recent criticisms that have been levelled at Function Point Analysis-like methods.

    A model instance derived from the generic sizing model is applied to a major case study of a system of administrative applications, in which a new Function Point Analysis-type metric suited to a particular software development technology is derived, calibrated and compared with Function Point Analysis. The comparison reveals much of the anatomy of Function Point Analysis and its many deficiencies when applied to this case study. The model instance is at least partially validated by application to a sample of components from later incremental developments within the same software development technology. The performance of the model instance for this technology is very good in its own right, and also very much better than Function Point Analysis. The model is also applied to three other business software development technologies using the IFIP (International Federation for Information Processing) standard inventory control and purchasing reference system. The purpose of this study is to demonstrate the applicability of the generic model to several quite different software technologies. Again, the three derived model instances show an excellent fit to the available data. This research shows that a software size estimation model which takes explicit advantage of the particular characteristics of the software technology used can give better size estimates than methods that do not take into account the component partitions characteristic of the software technology employed.
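    A minimal sketch of the decomposition-and-summation idea: partition the system into component types, size each component with a type-specific algorithm over its own size drivers, and sum. The component types, drivers, and coefficients are invented for illustration; the dissertation's calibrated model instances are not reproduced here.

```python
# A minimal sketch of decomposition-and-summation sizing. All component
# types, size drivers, and coefficients below are illustrative.

# Type-specific sizing algorithms keyed by component type.
SIZE_MODELS = {
    "report":       lambda c: 12 + 3 * c["fields"],          # driver: output fields
    "screen":       lambda c: 20 + 4 * c["inputs"] + 2 * c["outputs"],
    "batch_update": lambda c: 30 + 5 * c["files_touched"],
}

def system_size(components):
    """Sum per-component estimates into a total system size."""
    return sum(SIZE_MODELS[c["type"]](c) for c in components)

inventory_system = [
    {"type": "report", "fields": 14},
    {"type": "screen", "inputs": 9, "outputs": 5},
    {"type": "batch_update", "files_touched": 3},
]
print(f"Estimated size: {system_size(inventory_system)} size units")
```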

    Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review

    Context: The International Software Benchmarking Standards Group (ISBSG) maintains a software development repository with over 6000 software projects. This dataset makes it possible to estimate a project's size, effort, duration, and cost. Objective: The aim of this study was to determine how, and to what extent, ISBSG has been used by researchers from 2000, when the first papers were published, until June of 2012. Method: A systematic mapping review was used as the research method, applied to the 129 papers obtained after the filtering process. Results: The papers were published in 19 journals and 40 conferences. Thirty-five percent of the papers published between 2000 and 2011 have received at least one citation in journals, and only five papers have received six or more citations. The effort variable is the focus of 70.5% of the papers, 22.5% center their research on a variable other than effort, and 7% do not consider any target variable. Additionally, in as many as 70.5% of the papers, effort estimation is the research topic, followed by dataset properties (36.4%). The most frequent methods are Regression (61.2%), Machine Learning (35.7%), and Estimation by Analogy (22.5%). ISBSG is used as the only data source in 55% of the papers, while the remaining papers use complementary datasets. ISBSG Release 10 is used most frequently, with 32 references. Finally, some benefits and drawbacks of the usage of ISBSG are highlighted. Conclusion: This work presents a snapshot of the existing usage of ISBSG in software development research. ISBSG offers a wealth of information regarding practices from a wide range of organizations, applications, and development types, which constitutes its main potential. However, a data preparation process is required before any analysis. Lastly, the potential of ISBSG for developing new research is also outlined. Fernández Diego, M.; González-Ladrón-De-Guevara, F. (2014). Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Information and Software Technology, 56(6):527-544. doi:10.1016/j.infsof.2014.01.003
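    A minimal sketch of the kind of data preparation step the review says ISBSG requires before analysis: filtering on the repository's data quality rating and dropping projects with missing size or effort. The column names follow commonly cited ISBSG field labels but should be verified against the specific release in use; the file name is hypothetical.

```python
# A minimal sketch of ISBSG data preparation. Column names follow
# commonly cited ISBSG field labels and should be checked against
# the actual release; the CSV file name is hypothetical.

import pandas as pd

def prepare_isbsg(path):
    df = pd.read_csv(path)
    # ISBSG rates each project's data quality from A (best) to D.
    df = df[df["Data Quality Rating"].isin(["A", "B"])]
    # Drop projects missing the variables most studies need.
    df = df.dropna(subset=["Functional Size", "Summary Work Effort"])
    return df

# projects = prepare_isbsg("isbsg_release10.csv")  # hypothetical file name
```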

    Comparison of Coal Reserve Estimation Methods, Case Study PT. Bukit Asam Area, South Sumatra, Indonesia

    The calculation of coal reserves is influenced by the dimensions, or size, of the coal deposit. There are several methods for calculating coal reserves, and the choice of method is adjusted to the existing geological conditions. Each method will produce a different amount of coal reserves, even for the same location. In addition, the amount of coal that can be mined is primarily determined by the mine design, especially the optimal slope used as the basis for the mining pit during coal extraction. This research aims to estimate coal reserves based on existing pit designs using a variety of methods. Data on coal thickness and topography are used as the basis for reserve estimation. Coal reserves are estimated with several methods: nearest neighbor point (NNP), inverse distance weighted (IDW), and kriging, using Surfer 13 software. The results indicate that kriging is the best method, giving the smallest error, with an RMSE of 0.67, and coal reserves of 27,801,543 tons.
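    A minimal sketch of one of the compared interpolators, inverse distance weighting (IDW), together with a leave-one-out RMSE of the kind used to rank the methods. The sample coordinates, thickness values, and power parameter are illustrative.

```python
# A minimal sketch of inverse distance weighting (IDW) plus a
# leave-one-out RMSE. Sample data and the power parameter are illustrative.

import numpy as np

def idw(known_xy, known_vals, target_xy, power=2.0):
    """Estimate a value at target_xy from known points by IDW."""
    d = np.linalg.norm(known_xy - target_xy, axis=1)
    if np.any(d < 1e-12):                 # target coincides with a sample
        return known_vals[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * known_vals) / np.sum(w))

# Coal-thickness samples (x, y) -> thickness in metres (illustrative).
xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
thickness = np.array([4.2, 3.8, 5.1, 4.6])

# Leave-one-out RMSE: re-estimate each sample from the others.
errors = [thickness[i] - idw(np.delete(xy, i, 0), np.delete(thickness, i), xy[i])
          for i in range(len(xy))]
print(f"LOO RMSE: {np.sqrt(np.mean(np.square(errors))):.2f} m")
```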

    Predictiveness and Effectiveness of Story Points in Agile Software Development

    Agile Software Development (ASD) is one of the most popular iterative software development methodologies, which takes a different approach from the conventional sequential methods. Agile methods promise a faster response to unanticipated changes during development, typically contrasted with traditional project development, which assumes that software is specifiable and predictable. Traditionally, practitioners and researchers have utilised different Functional Size Measures (FSMs) as the main cost driver to estimate the effort required to develop a project (Software Effort Estimation, SEE). However, FSM methods are not easy to use with ASD. Thus, another measure, namely the Story Point (SP), has become popular in this context. SP is a relative unit representing an intuitive mixture of the complexity and the required effort of a user requirement. Although recent surveys report a growing trend toward intelligent effort estimation techniques for ASD, the adoption of these techniques is still limited in practice. Several factors limit the accuracy and adaptability of these techniques. The primary factor is the lack of sufficient noise-free information at estimation time, restricting the models' accuracy and reliability. This thesis concentrates on SEE for ASD from both the technique and data perspectives. Under this umbrella, I first evaluate two prominent state-of-the-art works for SP estimation to understand their strengths and weaknesses. I then introduce and evaluate a novel method for SP estimation based on text clustering. Next, I investigate the relationship between SP and development time by conducting a thorough empirical study. Finally, I explore the effectiveness of SP estimation methods when used to estimate the actual time. To carry out this research, I have curated the TAWOS (Tawosi Agile Web-based Open-Source) dataset, which consists of over half a million issues from Agile, open-source projects. TAWOS has been made publicly available to allow for reproduction and extension in future work.
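    A minimal sketch of the general idea behind story-point estimation via text clustering, not the thesis's exact pipeline: cluster issue texts with TF-IDF and k-means, then assign a new issue the median story points of its cluster. The issues, story points, and number of clusters are invented for illustration.

```python
# A minimal sketch of story-point estimation via text clustering.
# The issues, labels, and k are illustrative; this is not the
# thesis's exact method.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

issues = [
    "fix login button alignment", "correct typo on settings page",
    "add OAuth2 single sign-on", "implement payment gateway integration",
]
story_points = np.array([1, 1, 8, 8])

vec = TfidfVectorizer()
X = vec.fit_transform(issues)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def estimate_sp(text):
    """Predict story points as the median SP of the issue's cluster."""
    cluster = km.predict(vec.transform([text]))[0]
    return float(np.median(story_points[km.labels_ == cluster]))

print(estimate_sp("add SAML single sign-on support"))  # likely near 8
```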