
    Approximating multivariate distributions with cumulative residual entropy: a study on dynamic integrated climate-economy model

    The complexity of real-world decision problems is exacerbated by the need to make decisions with only partial information. How to model and make decisions when only partial preference information is available is a significant challenge in decision analysis practice. In most studies, probability distributions are approximated using the decision maker's mass or density function. In this dissertation, our aim is to approximate representative probability and utility functions using cumulative distribution functions instead of density/mass functions. The dissertation consists of four main sections: the first two introduce the proposed methods based on cumulative residual entropy, the third compares the proposed approximation methods with existing methods from the information theory literature, and the final section discusses the cumulative impact of integrating uncertainty into the DICE (Dynamic Integrated Climate-Economy) model. In the first section, we approximate discrete joint probability distributions using first-order dependence trees together with the recent concept of cumulative residual entropy. We formulate the cumulative residual Kullback-Leibler (KL) divergence and the cumulative residual mutual information measures in terms of the survival function. We then show that the optimal first-order dependence tree approximation of the joint distribution under the cumulative residual KL divergence is the one with the largest sum of pairwise cumulative residual mutual information terms. In the second part, we approximate multivariate probability distributions with cumulative probability distributions rather than density functions in the maximum entropy formulation. We use the discrete form of maximum cumulative residual entropy to approximate joint probability distributions, eliciting multivariate probability distributions from their lower-order assessments. In the third part, we compare several approximation methods to test the accuracy of different approximations of joint distributions with respect to the true distribution, drawn from the set of all possible distributions that match the available information. A number of methods have been presented in the literature for approximating joint probability distributions, and we specifically compare those that use information theory to approximate multivariate probability distributions. Finally, we study whether uncertainty significantly affects decision making, especially in global warming policy decisions, and integrate climatic and economic uncertainties into the DICE model by applying cumulative residual entropy, to ascertain the cumulative impact of integrating uncertainty into climate change policy.
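
    For orientation, the central quantity in these methods is the cumulative residual entropy, which replaces the density in Shannon's definition with the survival function. A standard form from the cumulative residual entropy literature, together with one common survival-function analogue of the KL divergence (the precise variant used in the dissertation may differ), is

    \[
    \mathcal{E}(X) = -\int_0^{\infty} \bar{F}(x)\,\log \bar{F}(x)\,dx, \qquad \bar{F}(x) = P(X > x),
    \]
    \[
    D_{\mathrm{CKL}}\big(\bar{F}\,\big\|\,\bar{G}\big) = \int_0^{\infty} \bar{F}(x)\,\log\frac{\bar{F}(x)}{\bar{G}(x)}\,dx - \big(\mathbb{E}[X] - \mathbb{E}[Y]\big).
    \]

    Under this reading, the tree result parallels the classical Chow-Liu construction: the optimal first-order dependence tree is the one whose edges maximize the summed pairwise cumulative residual mutual information.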

    An information theory for preferences

    Recent literature from the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity. These two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle to assign maximum entropy utility values. Maximum entropy utility interprets many of the common utility functions in terms of the preference information needed for their assignment, and helps assign utility values based on partial preference information. This paper reviews maximum entropy utility and introduces further results that stem from the duality between probability and utility.
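
    As a compact restatement of the analogy in the abstract: for a normalized utility function U over outcomes in [x_min, x_max], with U(x_min) = 0 and U(x_max) = 1, the utility density and the maximum entropy selection rule can be written as

    \[
    u(x) = \frac{dU(x)}{dx} \ge 0, \qquad \int_{x_{\min}}^{x_{\max}} u(x)\,dx = 1,
    \]
    \[
    u^{*} = \arg\max_{u}\; -\int_{x_{\min}}^{x_{\max}} u(x)\,\log u(x)\,dx \quad \text{subject to the available preference constraints.}
    \]

    The first line is exactly the pair of properties the abstract cites as the basis of the utility-probability correspondence; the second is the maximum entropy principle carried over to utility densities.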

    Measuring multivariate redundant information with pointwise common change in surprisal

    The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content, or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local, or pointwise, level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique, and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples.
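
    As a toy illustration of the pointwise bookkeeping (not the paper's full measure, which additionally imposes the game-theoretic unique-information constraints and the maximum entropy step described above), the sketch below computes local co-information terms for a hypothetical binary system and accumulates only those whose signs agree across both predictors. The example distribution and the simple sign-matching rule are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical joint distribution p[x1, x2, s] over three binary
# variables; the numbers are invented purely for illustration.
p = np.array([[[0.15, 0.10],
               [0.05, 0.20]],
              [[0.20, 0.05],
               [0.10, 0.15]]])

p_x1s  = p.sum(axis=1)          # p(x1, s)
p_x2s  = p.sum(axis=0)          # p(x2, s)
p_x1x2 = p.sum(axis=2)          # p(x1, x2)
p_x1, p_x2, p_s = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))

red = 0.0
for x1, x2, s in np.ndindex(*p.shape):
    if p[x1, x2, s] == 0:
        continue
    # Local (pointwise) changes in surprisal about s.
    i1  = np.log2(p_x1s[x1, s] / (p_x1[x1] * p_s[s]))
    i2  = np.log2(p_x2s[x2, s] / (p_x2[x2] * p_s[s]))
    i12 = np.log2(p[x1, x2, s] / (p_x1x2[x1, x2] * p_s[s]))
    c = i1 + i2 - i12           # local co-information
    # Keep a term only when both predictors and c agree in sign,
    # i.e. it reads as a common change in surprisal about s.
    if np.sign(i1) == np.sign(i2) == np.sign(c):
        red += p[x1, x2, s] * c

print(f"pointwise redundancy (toy): {red:.4f} bits")
```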

    Bootstrap methods for the empirical study of decision-making and information flows in social systems

    We characterize the statistical bootstrap for the estimation of information-theoretic quantities from data, with particular reference to its use in the study of large-scale social phenomena. Our methods allow one to preserve, approximately, the underlying axiomatic relationships of information theory (in particular, consistency under arbitrary coarse-graining) that motivate the use of these quantities in the first place, while providing reliability comparable to the state of the art for Bayesian estimators. We show how information-theoretic quantities allow for rigorous empirical study of the decision-making capacities of rational agents, and of the time-asymmetric flows of information in distributed systems. We provide illustrative examples by reference to ongoing collaborative work on the semantic structure of the British Criminal Court system and the conflict dynamics of the contemporary Afghanistan insurgency.
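
    A minimal sketch of the basic idea, assuming a plug-in mutual information estimator and a nonparametric pairs bootstrap (the paper's estimators additionally address bias and the coarse-graining consistency discussed above, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_mi(x, y):
    """Plug-in (maximum likelihood) mutual information, in bits,
    between two discrete sample vectors."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)   # empirical joint counts
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Hypothetical paired observations standing in for coded
# decision/outcome data; invented for illustration only.
n = 500
x = rng.integers(0, 4, size=n)
y = (x + rng.integers(0, 2, size=n)) % 4    # noisy copy of x

# Nonparametric bootstrap: resample (x, y) pairs with replacement,
# re-estimate MI on each resample, and read off percentile bounds.
boot = np.empty(1000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = plugin_mi(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MI = {plugin_mi(x, y):.3f} bits, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```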