
    Selecting Best Practices for Effort Estimation


    Use Case Point Approach Based Software Effort Estimation using Various Support Vector Regression Kernel Methods

    Software effort estimation is a critical task in the early stages of the software development life cycle, when the details of requirements are usually not yet clearly identified. Various optimization techniques help improve the accuracy of effort estimation. Support Vector Regression (SVR) is one of several soft-computing techniques that help obtain optimal estimated values. The idea of SVR is to compute a linear regression function in a high-dimensional feature space into which the input data are mapped via a nonlinear function. Furthermore, SVR kernel methods can be applied to transform the input data, and based on these transformations an optimal boundary between the possible outputs can be obtained. The main objective of the research work carried out in this paper is to estimate software effort using the use case point approach, which relies on the use case diagram to estimate the size and effort of software projects. An attempt is then made to optimize the results obtained from use case point analysis using various SVR kernel methods to achieve better prediction accuracy.
    Comment: 13 pages, 6 figures, 11 tables, International Journal of Information Processing (IJIP)
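
    To make the kernel comparison described in this abstract concrete, the sketch below fits scikit-learn's SVR with the four standard kernels to invented use-case-point data. The dataset, constants, and scoring choices are illustrative assumptions, not the paper's actual experimental setup.

```python
# A minimal sketch (not the paper's experiment): comparing SVR kernels on
# hypothetical use-case-point (UCP) data with scikit-learn.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Invented training data: X = use case points per project,
# y = actual effort in person-hours.
X = rng.uniform(50, 400, size=(60, 1))
y = 20 * X.ravel() + rng.normal(0, 500, size=60)

# Compare the common SVR kernels on the same data.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=100.0))
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_absolute_error")
    print(f"{kernel:8s} MAE: {-scores.mean():.1f} person-hours")
```

    In practice the kernel and its hyperparameters (C, epsilon, gamma) would be tuned per dataset; the fixed values above are only placeholders.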

    Scope Management of Non-Functional Requirements

    In order to meet commitments in software projects, a realistic assessment must be made of project scope. Such an assessment relies on the availability of knowledge about the user-defined project requirements, their effort estimates and priorities, and their risk. This knowledge enables analysts, managers, and software engineers to identify the most significant requirements from the list initially defined by the user. In practice, this scope assessment is applied to the Functional Requirements (FRs) provided by users, who are unaware of, or ignore, the Non-Functional Requirements (NFRs). This paper presents ongoing research which aims at managing NFRs during the software development process. Establishing the relative priority of each NFR, and obtaining a rough estimate of the effort and risk associated with it, is integral to the software development process and to resource management. Our work extends the taxonomy of the NFR framework by integrating the concept of the "hardgoal". A functional size measure of NFRs is applied to facilitate the effort estimation process. The functional size measurement method we have chosen is COSMIC-FFP, which is theoretically sound and the de facto standard in the software industry.
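
    The COSMIC method referenced here (published as ISO/IEC 19761) measures functional size by counting data movements of four types, each worth one COSMIC Function Point (CFP). The sketch below illustrates that counting rule on an invented, NFR-derived functional process; it is not the authors' tooling.

```python
# A minimal sketch of the COSMIC-FFP counting idea: functional size is the
# number of data movements (Entry, Exit, Read, Write), each worth 1 CFP.
# The requirement data below is invented for illustration.
from collections import Counter

MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(movements):
    """Return total CFP and a per-type breakdown for one functional process."""
    counts = Counter(m for m in movements if m in MOVEMENT_TYPES)
    return sum(counts.values()), counts

# Hypothetical NFR-derived functional process: "log every failed login".
movements = ["Entry",   # receive the failed-login event
             "Read",    # read the retry-limit configuration
             "Write",   # persist the audit record
             "Exit"]    # notify the monitoring service

total, breakdown = cosmic_size(movements)
print(f"{total} CFP ({dict(breakdown)})")  # 4 CFP
```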

    Deriving Models for Software Project Effort Estimation By Means of Genetic Programming

    Keywords: software engineering, effort estimation, genetic programming, symbolic regression.
    This paper presents the application of a computational intelligence methodology to effort estimation for software projects. Specifically, we apply a genetic programming model for symbolic regression, aiming to produce mathematical expressions that (1) are highly accurate and (2) can be used for estimating development effort by revealing relationships between a project's features and the required work. We chose to investigate the effectiveness of this methodology in two software engineering domains. The system proved able to generate models in the form of handy mathematical expressions that are more accurate than those found in the literature.
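
    To illustrate the kind of GP-based symbolic regression this abstract describes, here is a sketch using the third-party gplearn library as a stand-in for the paper's own GP system. The project features, target relationship, and hyperparameters are all invented for illustration.

```python
# A minimal sketch of GP symbolic regression for effort modeling, assuming
# the third-party gplearn library; not the paper's implementation.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Invented project features: [size_kloc, team_size]; effort follows a
# relationship the GP must rediscover from data alone.
X = rng.uniform(1, 50, size=(100, 2))
y = 3.0 * X[:, 0] ** 1.1 / X[:, 1] + rng.normal(0, 0.5, size=100)

est = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,  # penalize bloated expressions
    random_state=0,
)
est.fit(X, y)
print(est._program)  # the evolved expression, e.g. div(mul(3.1, X0), X1)
```

    The parsimony coefficient is what keeps the evolved expressions "handy": it trades a little accuracy for shorter, more interpretable formulas.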

    Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM modeling good research practices task force working group - 6

    A model’s purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of the uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process, and it traces the link from parameter uncertainty through to decision uncertainty and the relationship to value-of-information analysis. The article also makes extensive recommendations on the reporting of uncertainty, covering both deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis.
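
    To make the chain from parameter uncertainty to decision uncertainty and value of information concrete, here is a toy probabilistic analysis; the strategies, distributions, and willingness-to-pay threshold are invented, and the code only illustrates the standard CEAC-point and EVPI definitions the article discusses.

```python
# A toy probabilistic sensitivity analysis (not from the article): two
# strategies under second-order (parameter) uncertainty, with net benefit
# NB = lambda * QALYs - cost. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000      # Monte Carlo parameter draws
lam = 30_000    # willingness-to-pay per QALY (assumed threshold)

# Parameter uncertainty: incremental QALYs and costs of the new strategy
# vs. standard care, drawn from assumed distributions.
d_qaly = rng.normal(0.10, 0.05, n)
d_cost = rng.gamma(shape=4.0, scale=500.0, size=n)  # mean ~2000

nb_std = np.zeros(n)             # standard care as the zero baseline
nb_new = lam * d_qaly - d_cost   # incremental net benefit of new strategy
nb = np.column_stack([nb_std, nb_new])

# One point on the cost-effectiveness acceptability curve (CEAC):
# the probability that the new strategy is optimal at this lambda.
p_new_optimal = (nb_new > 0).mean()

# EVPI = E[max over strategies of NB] - max over strategies of E[NB].
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()

print(f"P(new strategy optimal at lambda={lam}): {p_new_optimal:.2f}")
print(f"Per-decision EVPI: {evpi:,.0f}")
```

    Sweeping lam over a range and plotting p_new_optimal against it would trace out the full CEAC; the EVPI puts an upper bound on what further research to eliminate parameter uncertainty could be worth per decision.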