
    Requirements Prioritization Based on Benefit and Cost Prediction: An Agenda for Future Research

    In early phases of the software life cycle, requirements prioritization necessarily relies on the specified requirements and on predictions of the benefit and cost of individual requirements. This paper presents the results of a systematic literature review that investigates how existing methods approach the problem of requirements prioritization based on benefit and cost. From this review, it derives a set of under-researched issues which warrant future efforts, and sketches an agenda for future research in this area.
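    As background for the kind of analysis the surveyed methods build on, the sketch below shows a minimal benefit/cost-based ranking of requirements. It is only an illustration under assumed inputs: the requirement identifiers and the predicted benefit and cost figures are hypothetical, and no specific method from the reviewed literature is implemented.

```python
# Minimal illustration of prioritizing requirements by predicted benefit
# and cost (a generic value-per-cost ranking, not a method from the
# surveyed literature). All figures below are hypothetical predictions.
requirements = {
    "R1": {"benefit": 80, "cost": 20},
    "R2": {"benefit": 60, "cost": 30},
    "R3": {"benefit": 90, "cost": 60},
}

def priority(attrs):
    """Predicted benefit per unit of predicted cost."""
    return attrs["benefit"] / attrs["cost"]

# Highest benefit/cost ratio first
ranked = sorted(requirements.items(), key=lambda kv: priority(kv[1]), reverse=True)
for req_id, attrs in ranked:
    print(req_id, round(priority(attrs), 2))
```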

    Experimental Study Using Functional Size Measurement in Building Estimation Models for Software Project Size

    This paper reports on an experiment that investigates the predictability of software project size from software product size. The predictability research problem is analyzed at the early-requirements stage by accounting for the size of functional requirements as well as the size of non-functional requirements. The experiment was carried out with 55 graduate students in Computer Science from Concordia University in Canada. In the experiment, a functional size measure and a project size measure were used in building estimation models for sets of web application development projects. The results show that project size is predictable from product size. Further replications of the experiment are, however, planned to obtain more results to confirm or disconfirm this claim.
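    The following sketch illustrates, under assumed data, how an estimation model of the kind described above can be built: a simple least-squares fit that predicts project size from functional (product) size. The data points, the units, and the linear model form are assumptions made for illustration; the experiment's actual measures and models may differ.

```python
# Illustrative estimation model: predict project size from functional
# (product) size with an ordinary least-squares fit. The data points and
# units below are invented for illustration only.
import numpy as np

functional_size = np.array([35, 50, 62, 80, 95, 120], dtype=float)      # e.g. CFP
project_size    = np.array([140, 210, 240, 330, 360, 480], dtype=float)  # e.g. person-hours

# Fit project_size ~= a * functional_size + b
a, b = np.polyfit(functional_size, project_size, deg=1)
predicted = a * functional_size + b

# Coefficient of determination as a rough indicator of predictability
ss_res = np.sum((project_size - predicted) ** 2)
ss_tot = np.sum((project_size - project_size.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"project_size ~= {a:.2f} * functional_size + {b:.2f}  (R^2 = {r2:.2f})")
```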

    Early Quantitative Assessment of Non-Functional Requirements

    Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper makes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution proposal also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We also discuss the advantages of our approach and the open questions related to its design.
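    For readers unfamiliar with COSMIC-FFP (ISO/IEC 19761), the sketch below shows the core of functional size measurement: each functional process is sized by counting its data movements (Entry, eXit, Read, Write), one COSMIC Function Point (CFP) each. The example processes, including the one representing an NFR-driven concern alongside the functional ones, are hypothetical illustrations and not taken from the paper.

```python
# COSMIC-style functional size measurement: one CFP per data movement
# (Entry "E", eXit "X", Read "R", Write "W") of each functional process.
# The processes below, including the NFR-driven one, are hypothetical.
from collections import Counter

functional_processes = {
    "log in user":        ["E", "R", "X"],
    "place order":        ["E", "R", "W", "X"],
    "log security event": ["E", "W"],   # hypothetical process derived from a security NFR
}

def cosmic_size(movements):
    """Functional size in CFP: one point per data movement."""
    return len(movements)

total = 0
for name, moves in functional_processes.items():
    size = cosmic_size(moves)
    total += size
    print(f"{name}: {dict(Counter(moves))} -> {size} CFP")
print("total functional size:", total, "CFP")
```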

    Non-functional requirements: size measurement and testing with COSMIC-FFP

    The non-functional requirements (NFRs) of software systems are well known to add a degree of uncertainty to the process of estimating the cost of any project. This paper contributes to the achievement of more precise project size measurement by incorporating NFRs into the functional size quantification process. We report on an initial solution proposed to deal with the problem of quantitatively assessing the NFR modeling process early in the project, and of generating test cases for NFR verification purposes. The NFR framework has been chosen for the integration of NFRs into the requirements modeling process and for their quantitative assessment. Our proposal is based on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. In this paper we also extend the use of COSMIC-FFP for NFR testing purposes. This is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We discuss the merits of the proposed approach and the open questions related to its design.

    Bayesian optimisation for likelihood-free cosmological inference

    Many cosmological models have only a finite number of parameters of interest, but a very expensive data-generating process and an intractable likelihood function. We address the problem of performing likelihood-free Bayesian inference from such black-box simulation-based models, under the constraint of a very limited simulation budget (typically a few thousand simulations). To do so, we adopt an approach based on the likelihood of an alternative parametric model. Conventional approaches to approximate Bayesian computation, such as likelihood-free rejection sampling, are impractical for the considered problem, due to the lack of knowledge about how the parameters affect the discrepancy between observed and simulated data. As a response, we make use of a strategy previously developed in the machine learning literature (Bayesian optimisation for likelihood-free inference, BOLFI), which combines Gaussian process regression of the discrepancy, to build a surrogate surface, with Bayesian optimisation, to actively acquire training data. We extend the method by deriving an acquisition function tailored for the purpose of minimising the expected uncertainty in the approximate posterior density, in the parametric approach. The resulting algorithm is applied to the problems of summarising Gaussian signals and inferring cosmological parameters from the Joint Lightcurve Analysis supernovae data. We show that the number of required simulations is reduced by several orders of magnitude, and that the proposed acquisition function produces more accurate posterior approximations, as compared to common strategies. Comment: 16+9 pages, 12 figures; matches the PRD published version after minor modifications.
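    The toy sketch below illustrates the BOLFI-style loop described above: a Gaussian process surrogate is fitted to the discrepancy between observed and simulated data, and an acquisition rule selects where to run the next simulation. The simulator, the discrepancy measure, and the generic lower-confidence-bound acquisition used here are placeholder assumptions; in particular, they are not the tailored acquisition function derived in the paper.

```python
# Toy BOLFI-style loop: GP regression of the discrepancy plus acquisition-driven
# selection of new simulations. The simulator, discrepancy, and acquisition rule
# are illustrative placeholders, not the paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
data_obs = rng.normal(2.0, 1.0, size=100)          # "observed" data (toy)

def simulator(theta, size=100):
    """Black-box data-generating process (toy Gaussian model)."""
    return rng.normal(theta, 1.0, size=size)

def discrepancy(theta):
    """Distance between simulated and observed summary statistics."""
    return abs(simulator(theta).mean() - data_obs.mean())

thetas = list(rng.uniform(-5.0, 5.0, size=10))     # small initial design
ds = [discrepancy(t) for t in thetas]

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
grid = np.linspace(-5.0, 5.0, 200).reshape(-1, 1)

for _ in range(30):                                # very limited simulation budget
    gp.fit(np.array(thetas).reshape(-1, 1), np.array(ds))
    mu, sigma = gp.predict(grid, return_std=True)
    acq = mu - 1.0 * sigma                         # lower confidence bound:
    theta_next = float(grid[np.argmin(acq), 0])    # exploit low discrepancy, explore uncertainty
    thetas.append(theta_next)
    ds.append(discrepancy(theta_next))

# The surrogate of the discrepancy can then be mapped to an approximate posterior.
mu, _ = gp.predict(grid, return_std=True)
print("surrogate discrepancy minimiser:", round(float(grid[np.argmin(mu), 0]), 3))
```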

    On Probability and Cosmology: Inference Beyond Data?

    Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state, this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
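    For reference, the conventional Bayesian model comparison that the authors argue breaks down in this setting rests on evidences and Bayes factors; the standard textbook relation (not quoted from the paper) is:

```latex
% Posterior odds between models M_1 and M_2 given data d: the Bayes factor
% (ratio of evidences) times the prior odds. The evidence integrates the
% likelihood over each model's parameter space.
\[
\frac{P(M_1 \mid d)}{P(M_2 \mid d)}
  = \frac{P(d \mid M_1)}{P(d \mid M_2)} \times \frac{P(M_1)}{P(M_2)},
\qquad
P(d \mid M_i) = \int P(d \mid \theta_i, M_i)\, P(\theta_i \mid M_i)\, \mathrm{d}\theta_i .
\]
```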

    Constraining the expansion rate of the Universe using low-redshift ellipticals as cosmic chronometers

    We present a new methodology to determine the expansion history of the Universe by analyzing the spectral properties of early-type galaxies (ETG). We found that for these galaxies the 4000 Å break is a spectral feature that correlates with the relative ages of ETGs. In this paper we describe the method, explore its robustness using theoretical synthetic stellar population models, and apply it using an SDSS sample of $\sim$14,000 ETGs. Our motivation to look for a new technique has been to minimise the dependence of the cosmic chronometer method on systematic errors. In particular, as a test of our method, we derive the value of the Hubble constant $H_0 = 72.6 \pm 2.8$ (stat) $\pm 2.3$ (syst) (68% confidence), which is not only fully compatible with the value derived from the Hubble Key Project, but also has a comparable error budget. Using the SDSS, we also derive, assuming w = constant, a value for the dark energy equation of state parameter $w = -1 \pm 0.2$ (stat) $\pm 0.3$ (syst). Given that the SDSS ETG sample only reaches $z \sim 0.3$, this result shows the potential of the method. In future papers we will present results using the high-redshift universe, to yield a determination of $H(z)$ up to $z \sim 1$. Comment: 25 pages, 17 figures, JCAP accepted.
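    For context, the cosmic chronometer approach mentioned above rests on the standard differential-age relation (a textbook expression in this literature, not quoted from the paper): the relative ages of passively evolving galaxies at slightly different redshifts give the expansion rate directly.

```latex
% Differential-age (cosmic chronometer) relation: the expansion rate follows
% from the age difference \Delta t of passively evolving galaxies separated
% by a small redshift interval \Delta z.
\[
H(z) = \frac{\dot{a}}{a}
     = -\frac{1}{1+z}\,\frac{\mathrm{d}z}{\mathrm{d}t}
     \;\approx\; -\frac{1}{1+z}\,\frac{\Delta z}{\Delta t}.
\]
```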