
    Probabilistic Sensitivity Analysis in Health Economics

    Health economic evaluations have recently been built upon more advanced statistical decision-theoretic foundations, and it is now officially required that uncertainty about both parameters and observable variables be taken into account thoroughly, increasingly often by means of Bayesian methods. Among these, Probabilistic Sensitivity Analysis (PSA) has assumed a predominant role, and Cost-Effectiveness Acceptability Curves (CEACs) are established as its most important tool. The objective of this paper is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory, with particular attention to the philosophy underlying the procedures for sensitivity analysis. We advocate an integrated vision based on value of information analysis, a procedure that is well grounded in the theory of decision under uncertainty, and criticise the indiscriminate use of other approaches to sensitivity analysis.
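
    Not from the paper: a minimal Python sketch of how a CEAC and the expected value of perfect information (EVPI) are typically computed from PSA simulations, using hypothetical draws of incremental costs and effects for a new treatment versus a comparator.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical PSA draws: incremental effects (QALYs) and incremental costs
        # of a new treatment versus standard care, one row per simulated parameter set.
        delta_e = rng.normal(0.05, 0.02, size=10_000)      # incremental effectiveness
        delta_c = rng.normal(1_000.0, 400.0, size=10_000)  # incremental cost

        # CEAC: for each willingness-to-pay threshold k, the probability that the
        # incremental net monetary benefit k*delta_e - delta_c is positive.
        thresholds = np.linspace(0, 50_000, 101)
        ceac = [(k * delta_e - delta_c > 0).mean() for k in thresholds]
        for k, p in zip(thresholds[::25], ceac[::25]):
            print(f"WTP = {k:>8.0f}: P(cost-effective) = {p:.2f}")

        # EVPI at a given threshold: expected gain from resolving parameter
        # uncertainty before choosing between the two options (comparator NB = 0).
        k = 25_000
        nb = np.column_stack([np.zeros_like(delta_e), k * delta_e - delta_c])
        evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
        print(f"EVPI at WTP {k}: {evpi:.0f}")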

    Wisdom of the crowd from unsupervised dimension reduction

    Wisdom of the crowd, the collective intelligence derived from responses of multiple human or machine individuals to the same questions, can be more accurate than each individual and can improve social decision-making and prediction accuracy. It can also integrate multiple programs or datasets, each treated as an individual, for the same predictive questions. Crowd wisdom estimates each individual's independent error level arising from their limited knowledge, and finds the crowd consensus that minimizes the overall error. However, previous studies have merely built isolated, problem-specific models with limited generalizability, and mainly for binary (yes/no) responses. Here we show with simulation and real-world data that the crowd wisdom problem is analogous to one-dimensional unsupervised dimension reduction in machine learning. This provides a natural class of crowd wisdom solutions, such as principal component analysis and Isomap, which can handle binary as well as continuous responses, like confidence levels, and consequently can be more accurate than existing solutions. They can even outperform supervised-learning-based collective intelligence that is calibrated on the historical performance of individuals, e.g. penalized linear regression and random forest. This study unifies crowd wisdom and unsupervised dimension reduction, and thereupon introduces a broad range of high-performing and widely applicable crowd wisdom methods. As the costs of data acquisition and processing rapidly decrease, this study will promote and guide crowd wisdom applications in the social and natural sciences, including data fusion, meta-analysis, crowd-sourcing, and committee decision making.
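
    Not from the paper: a small Python sketch of the PCA variant of this idea, assuming a matrix of continuous responses (rows = individuals, columns = questions); the crowd consensus is read off the first principal component and compared against the plain mean vote.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)

        # Hypothetical data: 20 individuals answer 200 questions; each individual is
        # the (unknown) truth plus individual-specific noise of varying magnitude.
        truth = rng.normal(size=200)
        noise_sd = rng.uniform(0.2, 2.0, size=20)
        responses = truth[None, :] + rng.normal(size=(20, 200)) * noise_sd[:, None]

        # Crowd consensus as one-dimensional unsupervised dimension reduction:
        # project the question-by-individual matrix onto its first principal component.
        pca = PCA(n_components=1)
        consensus = pca.fit_transform(responses.T).ravel()

        # The component is defined only up to sign and scale; align it with the mean vote.
        mean_vote = responses.mean(axis=0)
        if np.corrcoef(consensus, mean_vote)[0, 1] < 0:
            consensus = -consensus

        print("corr(mean vote, truth):     ", round(np.corrcoef(mean_vote, truth)[0, 1], 3))
        print("corr(PCA consensus, truth): ", round(np.corrcoef(consensus, truth)[0, 1], 3))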

    Retrospective-prospective symmetry in the likelihood and Bayesian analysis of case-control studies

    Prentice & Pyke (1979) established that the maximum likelihood estimate of an odds ratio in a case-control study is the same as would be obtained by running a logistic regression: in other words, for this specific target the incorrect prospective model is inferentially equivalent to the correct retrospective model. Similar results have been obtained for other models, and conditions have also been identified under which the corresponding Bayesian property holds, namely that the posterior distribution of the odds ratio is the same whether computed using the prospective or the retrospective likelihood. Here we demonstrate how these results follow directly from certain parameter-independence properties of the models and priors, and identify prior laws that support such reverse analysis, for both standard and stratified designs.
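
    Not from the paper: a short Python simulation of the Prentice & Pyke result, assuming a single binary exposure; the "incorrect" prospective logistic fit to retrospectively sampled (case-control) data recovers the population log odds ratio, while the intercept absorbs the sampling fractions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)

        # Population: binary exposure X, disease probability following a logistic model
        # with true log odds ratio beta = 1.0.
        n_pop = 200_000
        x = rng.binomial(1, 0.3, size=n_pop)
        p = 1 / (1 + np.exp(-(-4.0 + 1.0 * x)))
        y = rng.binomial(1, p)

        # Retrospective (case-control) sampling: all cases plus an equal number of controls.
        cases = np.flatnonzero(y == 1)
        controls = rng.choice(np.flatnonzero(y == 0), size=cases.size, replace=False)
        idx = np.concatenate([cases, controls])

        # Prospective analysis of the retrospective sample.
        X = sm.add_constant(x[idx].astype(float))
        fit = sm.GLM(y[idx], X, family=sm.families.Binomial()).fit()

        print("true log odds ratio:    1.0")
        print("case-control estimate:", round(fit.params[1], 3))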

    Computational algebraic methods in efficient estimation

    A strong link between information geometry and algebraic statistics is made by investigating statistical manifolds which are algebraic varieties. In particular, it is shown how first- and second-order efficient estimators can be constructed, such as bias-corrected maximum likelihood and more general estimators, for which the estimating equations are purely algebraic. In addition, it is shown how Gröbner basis technology, which is at the heart of algebraic statistics, can be used to reduce the degrees of the terms in the estimating equations. This points the way to the feasible use of special methods for solving polynomial equations, such as homotopy continuation methods, to find the estimators. Simple examples are given showing both equations and computations.
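
    Not from the paper: a tiny Python/SymPy illustration of the kind of algebraic manipulation described above, assuming a toy pair of polynomial estimating equations; a Gröbner basis with a lexicographic order triangularizes the system so it can be solved one variable at a time.

        import sympy as sp

        a, b = sp.symbols("a b")

        # Hypothetical polynomial estimating equations in two parameters.
        eqs = [a**2 + b**2 - 5, a*b - 2]

        # A lexicographic Groebner basis eliminates `a` from one generator, leaving a
        # univariate polynomial in `b`; back-substitution then recovers `a`.
        gb = sp.groebner(eqs, a, b, order="lex")
        print(gb)          # contains the univariate generator b**4 - 5*b**2 + 4

        solutions = sp.solve(eqs, [a, b], dict=True)
        print(solutions)   # the real solutions (1, 2), (2, 1), (-1, -2), (-2, -1)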

    An Algebraic Characterization of Equivalent Bayesian Networks


    Criteria of efficiency for conformal prediction

    We study optimal conformity measures for various criteria of efficiency of classification in an idealised setting. This leads to an important class of criteria of efficiency that we call probabilistic; it turns out that the most standard criteria of efficiency used in the literature on conformal prediction are not probabilistic unless the classification problem is binary. We consider both unconditional and label-conditional conformal prediction.
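
    Not from the paper: a minimal Python sketch of standard split-conformal classification (not the paper's analysis of optimal conformity measures), assuming an sklearn classifier and using one minus the predicted probability of a candidate label as the nonconformity score; it reports empirical coverage and average prediction-set size, a simple efficiency criterion.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_iris(return_X_y=True)
        X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
        X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

        model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)

        # Nonconformity score: 1 - predicted probability of the (candidate) label.
        cal_scores = 1 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

        alpha = 0.1  # target miscoverage
        level = np.ceil((len(y_cal) + 1) * (1 - alpha)) / len(y_cal)
        q = np.quantile(cal_scores, level, method="higher")

        # Prediction set: every label whose score does not exceed the calibration quantile.
        test_scores = 1 - model.predict_proba(X_test)
        pred_sets = test_scores <= q

        coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
        avg_size = pred_sets.sum(axis=1).mean()  # efficiency: smaller is better
        print(f"empirical coverage: {coverage:.2f}, average set size: {avg_size:.2f}")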

    Crowd Learning with Candidate Labeling: an EM-based Solution

    Crowdsourcing is widely used nowadays in machine learning for data labeling. Although in the traditional setting annotators are asked to provide a single label for each instance, novel approaches allow annotators, in case of doubt, to choose a subset of labels as a way to extract more information from them. In both the traditional and these novel approaches, the reliability of the labelers can be modeled based on the collections of labels that they provide. In this paper, we propose an Expectation-Maximization-based method for crowdsourced data with candidate sets: the likelihood of the parameters that model the reliability of the labelers is iteratively maximized while the ground truth is estimated. The experimental results suggest that the proposed method performs better than the baseline aggregation schemes in terms of estimated accuracy.
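
    Not from the paper: a simplified EM sketch in the same spirit, assuming single labels rather than candidate sets and a one-parameter reliability per annotator (correct with probability p_j, otherwise uniformly wrong); the E-step infers the ground truth, the M-step re-estimates reliabilities.

        import numpy as np

        rng = np.random.default_rng(4)
        n_items, n_annot, n_classes = 300, 8, 4

        # Hypothetical single-label crowdsourced data: annotators of varying quality.
        truth = rng.integers(n_classes, size=n_items)
        reliab = rng.uniform(0.4, 0.95, size=n_annot)
        labels = np.where(rng.random((n_items, n_annot)) < reliab,
                          truth[:, None],
                          rng.integers(n_classes, size=(n_items, n_annot)))

        p = np.full(n_annot, 0.7)  # initial reliability estimates
        for _ in range(50):
            # E-step: posterior over the true label of each item, assuming annotator j
            # is correct with probability p[j] and otherwise uniformly wrong.
            log_post = np.zeros((n_items, n_classes))
            for k in range(n_classes):
                correct = labels == k
                log_post[:, k] = (np.log(p) * correct
                                  + np.log((1 - p) / (n_classes - 1)) * ~correct).sum(axis=1)
            log_post -= log_post.max(axis=1, keepdims=True)
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)

            # M-step: reliability = expected fraction of items each annotator got right.
            p = np.array([post[np.arange(n_items), labels[:, j]].mean() for j in range(n_annot)])
            p = p.clip(1e-3, 1 - 1e-3)

        est = post.argmax(axis=1)
        maj = np.array([np.bincount(row, minlength=n_classes).argmax() for row in labels])
        print("majority-vote accuracy:", (maj == truth).mean())
        print("EM estimate accuracy:  ", (est == truth).mean())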

    Determining Principal Component Cardinality through the Principle of Minimum Description Length

    PCA (Principal Component Analysis) and its variants are ubiquitous techniques for matrix dimension reduction and reduced-dimension latent-factor extraction. One significant challenge in using PCA is the choice of the number of principal components. The information-theoretic MDL (Minimum Description Length) principle gives objective compression-based criteria for model selection, but it is difficult to apply its modern definition, NML (Normalized Maximum Likelihood), analytically to the problem of PCA. This work shows a general reduction of NML problems to lower-dimension problems. Applying this reduction, it bounds the NML of PCA in terms of the NML of linear regression, which is known.
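
    Not from the paper: a rough Python sketch of compression-based selection of the number of components, using a crude two-part code (negative probabilistic-PCA log-likelihood plus a BIC-style parameter penalty) as a stand-in for the NML bound developed in the paper.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)

        # Hypothetical data: 500 samples in 20 dimensions with 3 genuine latent factors.
        n, d, k_true = 500, 20, 3
        W = rng.normal(size=(d, k_true))
        X = rng.normal(size=(n, k_true)) @ W.T + 0.5 * rng.normal(size=(n, d))

        def description_length(X, k):
            """Two-part code length: negative PPCA log-likelihood plus a BIC-style
            parameter penalty (a stand-in for the NML term, not the paper's bound)."""
            n, d = X.shape
            pca = PCA(n_components=k).fit(X)
            loglik = n * pca.score(X)  # total log-likelihood under probabilistic PCA
            # Loadings on the Stiefel manifold + k eigenvalues + noise variance
            # (the mean adds a constant across k and is omitted).
            n_params = d * k - k * (k + 1) / 2 + k + 1
            return -loglik + 0.5 * n_params * np.log(n)

        scores = {k: description_length(X, k) for k in range(1, 10)}
        best = min(scores, key=scores.get)
        print("description lengths:", {k: round(v, 1) for k, v in scores.items()})
        print("selected number of components:", best)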

    Decentralized learning with budgeted network load using Gaussian copulas and classifier ensembles

    We examine a network of learners which address the same classification task but must learn from different data sets. The learners cannot share data but can share their models, and models are shared only once so as to limit the network load. We introduce DELCO (Decentralized Ensemble Learning with COpulas), a new approach for aggregating the predictions of the classifiers trained by each learner. The proposed method aggregates the base classifiers using a probabilistic model relying on Gaussian copulas. Experiments on logistic regression ensembles demonstrate competitive accuracy and increased robustness in the case of dependent classifiers. A companion Python implementation can be downloaded at https://github.com/john-klein/DELC
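
    Not from the paper: a minimal Python sketch of the one-shot decentralized setting, assuming each learner trains a logistic regression on its own data shard and shares its fitted model once; the aggregation here is a plain average of predicted probabilities, not DELCO's Gaussian-copula model.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Hypothetical decentralized setting: each learner holds its own shard of data,
        # trains locally, and shares its fitted model exactly once.
        X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        shards = np.array_split(np.random.default_rng(6).permutation(len(y_train)), 5)
        models = [LogisticRegression(max_iter=1000).fit(X_train[s], y_train[s]) for s in shards]

        # Aggregation at the central node: a simple average of predicted probabilities
        # (DELCO's copula-based aggregation models dependence between classifiers).
        probs = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
        pred = (probs >= 0.5).astype(int)
        print("ensemble accuracy:", (pred == y_test).mean())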

    Bayesian Networks for Max-linear Models

    We study Bayesian networks based on max-linear structural equations as introduced in Gissibl and Klüppelberg [16] and provide a summary of their independence properties. In particular, we emphasize that distributions for such networks are generally not faithful to the independence model determined by their associated directed acyclic graph. In addition, we consider some of the basic issues of estimation and discuss generalized maximum likelihood estimation of the coefficients, using the concept of a generalized likelihood ratio for non-dominated families as introduced by Kiefer and Wolfowitz [21]. Finally, we argue that the structure of a minimal network can asymptotically be identified completely from observational data.
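
    Not from the paper: a tiny Python sketch of sampling from a max-linear structural equation model on a toy DAG (1 → 2, 1 → 3, 2 → 3) with hypothetical positive coefficients; each variable is the maximum of weighted parent values and its own innovation, which is the kind of observational data from which the minimal DAG would be identified.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 5

        # Toy DAG 1 -> 2, 1 -> 3, 2 -> 3 with hypothetical positive edge weights.
        c21, c31, c32 = 0.7, 0.4, 0.9

        # Unit-Frechet innovations (a common choice for max-linear models).
        Z = -1.0 / np.log(rng.uniform(size=(n, 3)))

        # Max-linear structural equations: each node is the max of weighted parents
        # and its own innovation.
        X1 = Z[:, 0]
        X2 = np.maximum(c21 * X1, Z[:, 1])
        X3 = np.maximum.reduce([c31 * X1, c32 * X2, Z[:, 2]])

        for row in zip(X1.round(2), X2.round(2), X3.round(2)):
            print(row)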