
    Estimating the Expected Value of Partial Perfect Information in Health Economic Evaluations using Integrated Nested Laplace Approximation

    The Expected Value of Partial Perfect Information (EVPPI) is a decision-theoretic measure of the "cost" of parametric uncertainty, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow, in part because of the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulation. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which significantly decrease the computation time required to approximate it. Under certain circumstances high-dimensional Gaussian Process regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA), and projecting from a high-dimensional into a low-dimensional input space, allows us to decrease, often substantially, the time needed to fit these high-dimensional Gaussian Processes. We demonstrate that the EVPPI calculated using our method is in line with the standard Gaussian Process regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently.
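    To make the quantity concrete, the regression-based EVPPI estimator the abstract refers to can be sketched as follows. This is a minimal Python illustration on a synthetic decision model (the parameter phi, the net-benefit samples, and the polynomial degree are our assumptions, not the paper's): the net benefit of each decision is regressed on the parameter of interest, and the EVPPI is the mean of the per-sample maximum of the fitted values minus the maximum of their means.

```python
# Sketch of a non-parametric-regression EVPPI estimator:
#   EVPPI ~= mean_s( max_d fitted_d(phi_s) ) - max_d( mean_s fitted_d(phi_s) )
# where fitted_d is a regression of the net benefit of decision d on phi.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probabilistic-sensitivity-analysis output: phi is the
# parameter of interest, nb[:, d] the simulated net benefit of decision d.
S = 5000
phi = rng.normal(0.0, 1.0, S)
nb = np.column_stack([
    10 + 2 * phi + rng.normal(0, 5, S),   # decision 0
    11 - 1 * phi + rng.normal(0, 5, S),   # decision 1
])

def evppi_regression(phi, nb, degree=3):
    """EVPPI via a per-decision polynomial regression of NB on phi."""
    X = np.vander(phi, degree + 1)                       # design matrix
    fitted = np.column_stack([
        X @ np.linalg.lstsq(X, nb[:, d], rcond=None)[0]  # conditional mean
        for d in range(nb.shape[1])
    ])
    return np.mean(np.max(fitted, axis=1)) - np.max(np.mean(fitted, axis=0))

print(round(evppi_regression(phi, nb), 3))
```

    The INLA-based Gaussian Process method of the abstract replaces the simple polynomial fit here with a far more flexible regression for high-dimensional phi; the surrounding EVPPI arithmetic is the same.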

    A Review of Methods for the Analysis of the Expected Value of Information

    Over recent years, Value of Information analysis has become more widespread in health-economic evaluations, specifically as a tool to perform Probabilistic Sensitivity Analysis. This is largely due to methodological advances allowing for the fast computation of a typical summary known as the Expected Value of Partial Perfect Information (EVPPI). A recent review discussed some estimation methods for calculating the EVPPI, but as research has remained active over the intervening years, that review does not cover several key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present a case study in order to compare their estimation performance. We conclude that the most recent development, based on non-parametric regression, offers the best method for calculating the EVPPI efficiently. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with

    Firm Assets and Investments in Open Source Software Products

    Open source software (OSS) has recently emerged as a new way to organize innovation and product development in the software industry. This paper investigates the factors that explain the investment of profit-oriented firms in OSS products. Drawing on the resource-based theory of the firm, we focus on the role played by pre-OSS firm assets both upstream and downstream, in the software and the hardware dimensions, to explain the rate of product introduction in OSS. Using a self-assembled database of firms that have announced releases of OSS products during the period 1995-2003, we find that the intensity of product introduction can be explained by a strong position in software technology and downstream market presence in hardware. Firms with consolidated market presence in proprietary software and strong technological competences in hardware are more reluctant to shift to the new paradigm. The evidence is stronger for operating systems than for applications. The fear of cannibalization, the crucial role of absorptive capacity, and complementarities between hardware and software are plausible explanations behind our findings.
    Keywords: Product Introduction, Open Source Software, Absorptive Capacity

    Quantifying software architecture attributes

    Software architecture holds the promise of advancing the state of the art in software engineering. The architecture is emerging as the focal point of many modern reuse/evolutionary paradigms, such as Product Line Engineering, Component-Based Software Engineering, and COTS-based software development. The author focuses his research on characterizing properties of a software architecture, using software metrics to represent the error propagation, change propagation, and requirements change propagation probabilities of a software architecture. Error propagation probability reflects the probability that an error arising in one component of the architecture will propagate to other components at run-time. Change propagation probability reflects, for a given pair of components A and B, the probability that if A is changed in a corrective/perfective maintenance operation, B has to be changed to maintain the overall function of the system. Requirements change propagation probability reflects the likelihood that a requirements change arising in one component of the architecture propagates to other components. For each case, the author presents analytical formulas based mainly on statistical theory and empirical studies, then studies the correlations between the analytical and empirical results. The author also uses several metrics to quantify the properties of a Product Line Architecture, such as scoping, variability, commonality, and applicability, and presents his proposed measures together with the results of the case studies.
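    The pairwise change-propagation probability defined above lends itself to a simple empirical estimate. The sketch below is our own illustration, not the author's analytical formulas: the component names and change history are hypothetical, and the estimator is a plain conditional frequency computed from co-change records.

```python
# Empirical change-propagation estimate between components A and B:
#   P(B must change | A changed) ~= #changes touching both / #changes touching A
from collections import Counter
from itertools import combinations

# Hypothetical maintenance history: each entry is the set of components
# modified together in one corrective/perfective maintenance operation.
history = [
    {"parser", "lexer"},
    {"parser", "codegen"},
    {"parser", "lexer", "codegen"},
    {"ui"},
    {"ui", "codegen"},
]

changed = Counter()      # changed[c]      = #operations touching c
co_changed = Counter()   # co_changed[a,b] = #operations touching both a and b
for change in history:
    for comp in change:
        changed[comp] += 1
    for a, b in combinations(sorted(change), 2):
        co_changed[(a, b)] += 1
        co_changed[(b, a)] += 1

def change_propagation(a, b):
    """Estimated P(b must change | a changed)."""
    return co_changed[(a, b)] / changed[a] if changed[a] else 0.0

print(change_propagation("parser", "lexer"))  # lexer co-changed in 2 of 3 parser changes
```

    The run-time error-propagation probability in the abstract has the same pairwise shape but would be estimated from execution traces rather than maintenance records.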

    CSM-424- Evolutionary Complexity: Investigations into Software Flexibility

    Flexibility has been hailed as a desirable quality since the earliest days of software engineering. Classic and modern literature suggest that particular programming paradigms, architectural styles and design patterns are more “flexible” than others, but stop short of suggesting objective criteria for measuring such claims. We suggest that flexibility can be measured by applying notions of measurement from computational complexity to the software evolution process. We define evolution complexity (EC) metrics and demonstrate that: (a) EC can be used to establish informal claims on software flexibility; (b) EC can be constant or linear in the size of the change; (c) EC can be used to choose the most flexible software design policy. We describe a small-scale experiment designed to test these claims.
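    Claim (b) can be illustrated with a deliberately simple model of our own (not the paper's experiment): count the edit sites needed to add one new variant to a program under two design policies. Under a switch-based policy every dispatch site must be edited, so EC grows linearly with program size; under a polymorphic policy a single new subclass suffices, so EC is constant.

```python
# Toy evolution-complexity (EC) model: EC of the change "add one variant",
# counted as the number of edit sites, under two design policies.
def ec_switch_based(n_dispatch_sites: int) -> int:
    # Every switch/if-chain over the variant type must gain a new case.
    return n_dispatch_sites            # linear in the number of dispatch sites

def ec_polymorphic(n_dispatch_sites: int) -> int:
    # One new subclass implements the variant; dispatch sites are untouched.
    return 1                           # constant

for n in (1, 5, 20):
    print(n, ec_switch_based(n), ec_polymorphic(n))
```

    By this measure the polymorphic design is the more flexible policy with respect to this class of change, which is the kind of informal claim the EC metrics are meant to make precise.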

    Evolutionary improvement of programs

    Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties, such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task, achieved in part by optimizing compilers. However, modern compilers are not always able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist, usually because of the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multi-objective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version, in this case optimized to reduce execution time subject to a given probability distribution of inputs. We demonstrate on eight example functions that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate. We employ a coevolved population of test cases to encourage the preservation of the function's semantics, and we exploit the original program both through seeding of the population, in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ a rigorous experimental method to provide interesting and practical insights into how to address these issues.
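    The ingredients named above can be sketched in miniature: seed the search with the original program, use the original as an oracle for semantics, and minimize a non-functional objective. Everything below is our own toy illustration, not the authors' framework: the search is a random hill-climb rather than full GP, the test inputs are sampled once rather than coevolved, and operation count stands in for measured execution time.

```python
# Toy "improve an existing function" search over arithmetic expression trees.
import random

random.seed(1)

def original(x):
    # The program to be improved: computes x * 4 the long way.
    return x + x + x + x

OPS = ("+", "*")

def evaluate(tree, x):
    """Interpret an expression tree over the single variable x."""
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

def size(tree):
    """Node count: our stand-in for the execution-time objective."""
    if tree == "x" or isinstance(tree, int):
        return 1
    return 1 + size(tree[1]) + size(tree[2])

def random_tree(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(0, 4)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

# The original program acts as a test oracle on sampled inputs.
tests = [random.randint(-50, 50) for _ in range(20)]

def semantically_ok(tree):
    return all(evaluate(tree, x) == original(x) for x in tests)

# Seed the search with a tree equivalent to the original implementation,
# then accept only smaller candidates that still pass the oracle.
best = ("+", ("+", "x", "x"), ("+", "x", "x"))
for _ in range(5000):
    candidate = random_tree(3)
    if semantically_ok(candidate) and size(candidate) < size(best):
        best = candidate

print(best, size(best))  # typically finds something like ('*', 'x', 4)
```

    The paper's framework differs in every dimension that matters at scale, notably coevolving the test cases with the programs and measuring real execution time over a distribution of inputs, but the division of roles (seed, oracle, objective) is the same.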