
    Making Software Cost Data Available for Meta-Analysis

    In this paper we consider the increasing need for meta-analysis within empirical software engineering. However, we also note that a necessary precondition for such forms of analysis is to have both the results in an appropriate format and sufficient contextual information to avoid misleading inferences. We consider the implications for the field of software project effort estimation and show that, for a sample of 12 seemingly similar published studies, the results are difficult to compare, let alone combine, owing to differing reporting conventions. We argue that a reporting protocol is required and make some suggestions as to what it should contain.

    Analyzing the Non-Functional Requirements in the Desharnais Dataset for Software Effort Estimation

    Studying the quality requirements (also known as Non-Functional Requirements, NFR) of a system is crucial in Requirements Engineering. Many software projects fail because they neglect, or fail to incorporate, NFR during the software development life cycle. This paper analyses the importance of quality requirements attributes in software effort estimation models based on the Desharnais dataset, a collection of eighty-one software projects with twelve attributes, developed by a Canadian software house. The analysis studies the influence of each quality requirements attribute individually, as well as of all quality requirements attributes combined, when estimating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation are the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), the Root Mean Squared Error (RMSE), the Mean Error, and the Coefficient of Determination (R2). Results show that the quality attribute “Language” is the most statistically significant when estimating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and effort is predicted from software size alone, the error (MMRE) doubles.
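    As a point of reference for the metrics this abstract names, the sketch below computes MMRE, PRED, RMSE, Mean Error, and R2 from actual and predicted effort values, using their standard definitions. The PRED threshold of 25% is the common convention; the paper does not state in the abstract which level it uses, so that choice is an assumption, and the example numbers are invented.

    ```python
    import numpy as np

    def evaluate(actual, predicted, pred_level=0.25):
        """Standard accuracy metrics for software effort estimation models."""
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        mre = np.abs(actual - predicted) / actual           # magnitude of relative error
        mmre = mre.mean()                                   # Mean MRE
        pred = (mre <= pred_level).mean()                   # PRED(25): share of estimates within 25%
        rmse = np.sqrt(((actual - predicted) ** 2).mean())  # Root Mean Squared Error
        me = (actual - predicted).mean()                    # Mean Error (bias)
        ss_res = ((actual - predicted) ** 2).sum()
        ss_tot = ((actual - actual.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot                          # Coefficient of Determination
        return {"MMRE": mmre, "PRED(25)": pred, "RMSE": rmse, "ME": me, "R2": r2}

    # Example with made-up effort values (person-hours):
    print(evaluate([3600, 5200, 1250], [3200, 6100, 1400]))
    ```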

    Improvement of the Compilation Process of the Italian Income Certifications: A Methodology Based on the Evaluation of the Information Content (Part 1)

    In recent years the Italian tax system has undergone significant changes: major legislative reforms have reshaped the relationship between citizens and the public administration, aiming to simplify it and to reduce the burden of bureaucracy on the lives of citizens and businesses. To simplify the completion of income statements, the Italian Revenue Agency has issued precompiled statements since 2015. However, many manual actions are still necessary to correct or complete them. This paper focuses on improving the compilation and control of income certifications in order to reduce the non-compliances that flow into the precompiled statements. The proposed method introduces a robust procedure, based on Axiomatic Design, that quantifies the information content of a software project by means of the Function Point technique. The approach is intended to prevent non-compliances from being generated in the first place: by identifying potentially critical situations proactively and ruling out whole classes of non-conformities, the data compilation process can be optimized to verify compliance with the technical specifications of the Italian Revenue Agency. The goal of the proposed approach is to simplify the collection of fiscal data and create a clearer path for taxpayers.
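    For readers unfamiliar with the two techniques the method combines: in Axiomatic Design the information content of a design is I = log2(1/p), where p is the probability of satisfying the functional requirement, and Function Point counting sizes a system from its components. A minimal sketch follows; the IFPUG average-complexity weights and the example counts are assumptions, since the abstract does not give the paper's exact weighting scheme.

    ```python
    import math

    # Standard IFPUG average-complexity weights (assumption: the paper's
    # exact weighting scheme is not stated in the abstract).
    AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

    def unadjusted_fp(counts):
        """Unadjusted Function Points from counts of the five IFPUG component types."""
        return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

    def information_content(p_success):
        """Axiomatic Design information content: I = log2(1 / p)."""
        return math.log2(1.0 / p_success)

    # Hypothetical measurement of a certification-compilation module:
    counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}
    print(f"UFP = {unadjusted_fp(counts)}")
    print(f"I = {information_content(0.9):.3f} bits")  # design satisfying its FR 90% of the time
    ```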

    VAF factor influence on the accuracy of the effort estimation provided by modified function points methods

    The paper presents the Function Points (FP) method, which can be used for preliminary effort estimation when only limited information is available. Despite being applicable that early, FP provides meaningful and relatively accurate results. The authors' aim was to design Modified Function Points (MFP) methods based on a regression model and to analyse the influence of the Value Adjustment Factor (VAF) on the accuracy of development effort estimates. The ISBSG dataset was selected for the study; the original dataset was reduced according to the data requirements and divided into a training and a testing section (in a 2:1 ratio). The analysis presented here was carried out as a preparatory phase for further research in this area. Matlab toolboxes were used for the design and verification of the discussed algorithms. © 2018, Danube Adria Association for Automation and Manufacturing, DAAAM. All rights reserved.
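    For context on the factor under study: in standard Function Point Analysis the VAF is derived from the 14 General System Characteristics, each rated 0 to 5, as VAF = 0.65 + 0.01 × ΣDI, and adjusted size is UFP × VAF. The sketch below shows how toggling the VAF changes the size fed into an effort model; the effort-model coefficients are invented placeholders, not the paper's fitted MFP regression.

    ```python
    def vaf(gsc_ratings):
        """Value Adjustment Factor from the 14 GSC degrees of influence (0-5 each)."""
        assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
        return 0.65 + 0.01 * sum(gsc_ratings)

    def effort_hours(size_fp, a=25.0, b=1.0):
        """Placeholder effort model: effort = a * size^b.
        The paper fits its own MFP regression; a and b here are invented."""
        return a * size_fp ** b

    ufp = 320
    ratings = [3] * 14                  # all GSCs rated "average" -> VAF = 1.07
    adjusted = ufp * vaf(ratings)
    print(effort_hours(ufp))            # effort estimate ignoring the VAF
    print(effort_hours(adjusted))       # effort estimate with VAF applied
    ```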

    Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review

    Context: The International Software Benchmarking Standards Group (ISBSG) maintains a software development repository with over 6000 software projects. This dataset makes it possible to estimate a project's size, effort, duration, and cost.
    Objective: The aim of this study was to determine how, and to what extent, ISBSG has been used by researchers from 2000, when the first papers were published, until June 2012.
    Method: A systematic mapping review was used as the research method, applied to 129 papers obtained after the filtering process.
    Results: The papers were published in 19 journals and 40 conferences. Thirty-five percent of the papers published between 2000 and 2011 have received at least one citation in journals, and only five papers have received six or more citations. Effort is the target variable in 70.5% of the papers, 22.5% centre their research on a variable other than effort, and 7% do not consider any target variable. Likewise, in as many as 70.5% of the papers effort estimation is the research topic, followed by dataset properties (36.4%). The most frequent methods are regression (61.2%), machine learning (35.7%), and estimation by analogy (22.5%). ISBSG is the only data source in 55% of the papers, while the remaining papers use complementary datasets. ISBSG release 10 is used most frequently, with 32 references. Finally, some benefits and drawbacks of the usage of ISBSG are highlighted.
    Conclusion: This work presents a snapshot of the existing usage of ISBSG in software development research. ISBSG offers a wealth of information regarding practices from a wide range of organizations, applications, and development types, which constitutes its main potential. However, a data preparation process is required before any analysis. Lastly, the potential of ISBSG to develop new research is also outlined.
    Fernández Diego, M.; González-Ladrón-de-Guevara, F. (2014). Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Information and Software Technology, 56(6), 527-544. doi:10.1016/j.infsof.2014.01.003
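    The conclusion's point that ISBSG requires data preparation before any analysis typically means filtering by data quality and sizing method and dropping incomplete records. A minimal pandas sketch of such a pipeline follows; the field names follow common ISBSG release conventions but are assumptions here, and the file name is hypothetical, so both should be checked against the actual release.

    ```python
    import pandas as pd

    # Hypothetical export of the repository; field names follow common ISBSG
    # release conventions (assumption: verify against your licensed release).
    df = pd.read_csv("isbsg_release.csv")

    prepared = (
        df[df["Data Quality Rating"].isin(["A", "B"])]              # keep only trusted records
          .dropna(subset=["Functional Size", "Normalised Work Effort"])
          .query("`Count Approach` == 'IFPUG 4+'")                  # one sizing method only
    )
    print(f"{len(prepared)} of {len(df)} projects survive filtering")
    ```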

    The usage of ISBSG data fields in software effort estimation: A systematic mapping study

    The International Software Benchmarking Standards Group (ISBSG) maintains a repository of data about completed software projects. A common use of the ISBSG dataset is to investigate models that estimate a software project's size, effort, duration, and cost. The aim of this paper is to determine which variables in the ISBSG dataset have been used in software engineering to build effort estimation models, and to what extent. For that purpose, a systematic mapping study was applied to 107 research papers, obtained after a filtering process, that were published from 2000 until the end of 2013 and that listed the independent variables used in their effort estimation models. The usage of ISBSG variables for filtering, as dependent variables, and as independent variables is described. The 20 variables (out of 71) most used as independent variables for effort estimation are identified and analysed in detail, with reference to the papers and types of estimation methods that used them. We propose guidelines that can help researchers make informed decisions about which ISBSG variables to select for their effort estimation models.
    González-Ladrón-de-Guevara, F.; Fernández-Diego, M.; Lokan, C. (2016). The usage of ISBSG data fields in software effort estimation: A systematic mapping study. Journal of Systems and Software, 113, 188-215. doi:10.1016/j.jss.2015.11.040
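    Regression is the modelling method these mapping studies report most often, typically relating effort to functional size. As a rough illustration of what such a model looks like once the independent variables are chosen, here is a log-log ordinary least squares fit; the size and effort values are invented, not ISBSG data.

    ```python
    import numpy as np

    # Invented example data standing in for prepared ISBSG records:
    size = np.array([120, 300, 450, 800, 1500], dtype=float)        # Functional Size (FP)
    effort = np.array([900, 2100, 3500, 5600, 11800], dtype=float)  # Work Effort (hours)

    # Log-log OLS: log(effort) = log(a) + b * log(size), i.e. effort = a * size^b
    b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
    a = np.exp(log_a)
    print(f"effort = {a:.2f} * size^{b:.2f}")

    # Point estimate for a new 600 FP project:
    print(f"predicted effort: {a * 600 ** b:,.0f} hours")
    ```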

    Using a Dynamic Domain-Specific Modeling Language for the Model-Driven Development of Cross-Platform Mobile Applications

    There has been a gradual but steady convergence of dynamic programming languages with modeling languages. One area that can benefit from this convergence is model-driven development (MDD), especially in the domain of mobile application development. By using a dynamic language to construct a domain-specific modeling language (DSML), it is possible to create models that are executable, exhibit flexible type checking, and leave a smaller cognitive gap between business users, modelers, and developers than more traditional model-driven approaches. Dynamic languages have found strong adoption among practitioners of Agile development processes. These processes often rely on developers to rapidly produce working code that meets business needs, and to do so iteratively and incrementally. Such methodologies tend to eschew “throwaway” artifacts and models as wasteful, except as communication vehicles toward executable code, and they are not readily supported by traditional heavyweight approaches to model-driven development such as the Object Management Group's Model-Driven Architecture. This research asks whether a domain-specific modeling language written in a dynamic programming language can define a cross-platform model that produces native code, and do so in a way that developer productivity and code quality are at least as good as for hand-written code produced using native tools. Using a prototype modeling tool, AXIOM (Agile eXecutable and Incremental Object-oriented Modeling), we examine this question through small- and mid-scale experiments and find that the AXIOM approach improved developer productivity by almost 400%, albeit only after some up-front investment. We also find that the generated code can be of equal if not better quality than the equivalent hand-written code. Finally, we find that there are significant challenges in the synthesis of a DSML that can be used to model applications across platforms as diverse as today's mobile operating systems, which point to intriguing avenues of subsequent research.
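    To make the idea of an executable DSML hosted in a dynamic language concrete, here is a toy internal DSL in Python: the model object is itself executable code, and per-platform generators walk it to emit native-looking UI code. This only illustrates the general pattern; AXIOM's own notation and generators differ, and the emitted strings are fake placeholders.

    ```python
    # A toy internal DSML for a cross-platform screen definition. The model is
    # ordinary executable Python; generators walk it once per target platform.
    class Screen:
        def __init__(self, name):
            self.name, self.widgets = name, []

        def button(self, label, on_tap):
            self.widgets.append(("button", label, on_tap))
            return self  # fluent style keeps model definitions declarative

    def generate(screen, platform):
        """Emit (fake) native UI code for one platform from the shared model."""
        templates = {
            "android": 'Button(text="{label}") {{ {handler}() }}',
            "ios": 'UIButton(title: "{label}", action: {handler})',
        }
        lines = [f"// {screen.name} for {platform}"]
        for kind, label, handler in screen.widgets:
            lines.append(templates[platform].format(label=label, handler=handler))
        return "\n".join(lines)

    login = Screen("Login").button("Sign in", on_tap="submit_credentials")
    print(generate(login, "android"))
    print(generate(login, "ios"))
    ```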