    Entropic Inference

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops -- the Maximum Entropy and the Bayesian methods -- into a single general inference scheme.
    Comment: Presented at MaxEnt 2010, the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 4-9, 2010, Chamonix, France)
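    As a minimal sketch of the functional the abstract refers to (standard ME notation, not taken from the paper itself): the ME update selects the posterior p that maximizes the logarithmic relative entropy with respect to the prior q,

        S[p, q] = -\int dx \, p(x) \log \frac{p(x)}{q(x)},

    subject to whatever constraints encode the new information. With a uniform prior this reduces to ordinary MaxEnt, and conditioning on observed data reproduces Bayes' rule, which is the unification the abstract describes.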

    A Bayesian Alternative to Generalized Cross Entropy Solutions for Underdetermined Econometric Models

    This paper presents a Bayesian alternative to Generalized Maximum Entropy (GME) and Generalized Cross Entropy (GCE) methods for deriving solutions to econometric models represented by underdetermined systems of equations. For certain types of econometric model specifications, the Bayesian approach provides fully equivalent results to GME-GCE techniques. However, in its general form, the proposed Bayesian methodology allows a more direct and straightforwardly interpretable formulation of available prior information and can significantly reduce the computational effort involved in finding solutions. The technique can be adapted to provide solutions in situations characterized by either informative or uninformative prior information.
    Keywords: Underdetermined Equation Systems, Maximum Entropy, Bayesian Priors, Structural Estimation, Calibration, Research Methods/Statistical Methods; JEL codes: C11, C13, C51
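    For readers unfamiliar with the entropy-based core that the paper takes as its point of comparison, here is a minimal Python sketch (illustrative names and data, not the paper's own code) that recovers a probability vector p from an underdetermined linear system A p = b by maximizing Shannon entropy:

        # Maximum-entropy solution of an underdetermined system A p = b,
        # where p is constrained to be a probability vector.
        import numpy as np
        from scipy.optimize import minimize

        def maxent_solve(A, b):
            """Maximize Shannon entropy of p subject to A p = b and sum(p) = 1."""
            n = A.shape[1]
            # Minimize negative entropy; a small epsilon guards log(0).
            def neg_entropy(p):
                return np.sum(p * np.log(p + 1e-12))
            cons = [{"type": "eq", "fun": lambda p: A @ p - b},
                    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
            p0 = np.full(n, 1.0 / n)          # uniform starting point
            res = minimize(neg_entropy, p0, bounds=[(0, 1)] * n,
                           constraints=cons, method="SLSQP")
            return res.x

        # Example: a single moment constraint on a 5-point distribution.
        A = np.array([[0.0, 1.0, 2.0, 3.0, 4.0]])   # E[x] constraint row
        b = np.array([1.5])
        print(maxent_solve(A, b))

    The GME-GCE methods discussed in the paper generalize this idea by reparameterizing model unknowns and errors as probability-weighted supports; the Bayesian alternative replaces the entropy objective with an explicit prior.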

    Information and Entropy

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops -- the Maximum Entropy and the Bayesian methods -- into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
    Comment: Presented at MaxEnt 2007, the 27th International Workshop on Bayesian Inference and Maximum Entropy Methods (July 8-13, 2007, Saratoga Springs, New York, USA)
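    A schematic version of the unification claimed here (standard notation, not quoted from the paper): when the new information is the observed value x' of the data, ME updating of a joint prior q(x, \theta) yields

        p(\theta) = q(\theta \mid x') \propto q(\theta) \, q(x' \mid \theta),

    which is exactly Bayes' rule; when the information instead consists of expectation-value constraints and the prior is uniform, the same maximization reduces to ordinary MaxEnt. Arbitrary priors combined with arbitrary constraints are the cases that neither method alone covers.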

    Truncated Regression in Empirical Estimation

    In this paper we illustrate the use of alternative truncated regression estimators for the general linear model. These include variations of maximum likelihood, Bayesian, and maximum entropy estimators in which the error distributions are doubly truncated. To evaluate the performance of the estimators (e.g., efficiency) for a range of sample sizes, Monte Carlo sampling experiments are performed. We then apply each estimator to a factor demand equation for wheat-by-class.
    Keywords: doubly truncated samples, Bayesian regression, maximum entropy, wheat-by-class, Research Methods/Statistical Methods
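    A minimal sketch of one of the estimator families described -- maximum likelihood for a linear model whose normal errors are doubly truncated to an assumed interval [a, b] (illustrative code and synthetic data, not the authors' implementation):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def neg_loglik(params, y, X, a, b):
            """Negative log-likelihood: y = X @ beta + e, e ~ N(0, s^2) truncated to [a, b]."""
            beta, s = params[:-1], np.exp(params[-1])   # log-parameterize s to keep it positive
            e = y - X @ beta
            mass = norm.cdf(b, scale=s) - norm.cdf(a, scale=s)   # truncated-normal normalizer
            return -np.sum(norm.logpdf(e, scale=s) - np.log(mass))

        # Synthetic demo: errors truncated to [-1, 1] via rejection sampling.
        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        e = rng.normal(scale=0.5, size=2000)
        e = e[np.abs(e) < 1.0][:200]
        y = X @ np.array([1.0, 2.0]) + e

        res = minimize(neg_loglik, x0=np.zeros(3), args=(y, X, -1.0, 1.0))
        print("beta:", res.x[:-1], "sigma:", np.exp(res.x[-1]))

    The key difference from ordinary least squares is the normalizing mass term, which corrects the likelihood for the probability excluded by the truncation bounds; the Bayesian and maximum entropy variants the paper compares modify the objective rather than this correction.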

    Entropic Priors

    The method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. As an example the entropic prior for a Gaussian likelihood is calculated.
    Comment: Presented at MaxEnt'03, the 23rd International Workshop on Bayesian Inference and Maximum Entropy Methods (August 3-8, 2003, Jackson Hole, WY, USA)
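    A schematic form consistent with this description (notation mine; the paper's derivation should be consulted for details): for a likelihood p(x|\theta) and an underlying measure \mu(x), the entropic prior weights each \theta by the exponential of the entropy of its likelihood,

        \pi(\theta) \propto \exp S(\theta), \qquad S(\theta) = -\int dx \, p(x \mid \theta) \log \frac{p(x \mid \theta)}{\mu(x)},

    which for a single, unrepeatable experiment has the same form as the Einstein fluctuation formula P \propto e^S.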

    Maximum Entropy and Bayesian Data Analysis: Entropic Priors

    The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail.
    Comment: 23 pages, 2 figures
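    For concreteness, a standard computation that illustrates (but does not reproduce) the Gaussian case treated in the paper: the entropy of a Gaussian likelihood N(\mu, \sigma^2) relative to a uniform measure is

        S(\mu, \sigma) = \tfrac{1}{2} \log(2\pi e \sigma^2),

    so a prior of the form \pi \propto e^S depends on the parameters only through the width, \pi(\mu, \sigma) \propto \sigma. The paper's full analysis for repeatable experiments additionally conditions on the expected entropy of the likelihood, which modifies this naive assignment.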