2,103 research outputs found

    Calibration: Respice, Adspice, Prospice

    “Those who claim for themselves to judge the truth are bound to possess a criterion of truth.” JEL Codes: C18, C53, D89. Keywords: calibration, prediction.

    Asymptotic Calibration


    Bayesian Synthesis: Combining subjective analyses, with an application to ozone data

    Bayesian model averaging enables one to combine the disparate predictions of a number of models in a coherent fashion, leading to superior predictive performance. The improvement in performance arises from averaging models that make different predictions. In this work, we tap into perhaps the biggest driver of different predictions---different analysts---in order to gain the full benefits of model averaging. In a standard implementation of our method, several data analysts work independently on portions of a data set, eliciting separate models which are eventually updated and combined through a specific weighting method. We call this modeling procedure Bayesian Synthesis. The methodology helps to alleviate concerns about the sizable gap between the foundational underpinnings of the Bayesian paradigm and the practice of Bayesian statistics. In experimental work we show that human modeling has predictive performance superior to that of many automatic modeling techniques, including AIC, BIC, Smoothing Splines, CART, Bagged CART, Bayes CART, BMA and LARS, and only slightly inferior to that of BART. We also show that Bayesian Synthesis further improves predictive performance. Additionally, we examine the predictive performance of a simple average across analysts, which we dub Convex Synthesis, and find that it also produces an improvement. Comment: Published at http://dx.doi.org/10.1214/10-AOAS444 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
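
    The combination step described above can be illustrated with a small sketch. This is not the paper's weighting method (which is not reproduced here); it only contrasts a generic weighted synthesis with the equal-weight Convex Synthesis, using made-up analyst predictions and hypothetical weights.

        import numpy as np

        def convex_synthesis(predictions):
            """Equal-weight average across analysts' predictions (Convex Synthesis)."""
            return np.mean(predictions, axis=0)

        def weighted_synthesis(predictions, weights):
            """Convex combination of analysts' predictions with explicit weights.

            The weights stand in for whatever weighting method is used in
            practice (e.g. based on held-out predictive performance); the
            paper's specific scheme is not reproduced here.
            """
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()  # normalise so the combination is convex
            return weights @ np.asarray(predictions)

        # Three hypothetical analysts predicting ozone levels at five sites.
        preds = np.array([
            [41.0, 38.5, 52.1, 47.3, 44.0],
            [39.2, 40.1, 50.8, 46.0, 45.5],
            [42.5, 37.9, 53.0, 48.1, 43.2],
        ])
        print(convex_synthesis(preds))
        print(weighted_synthesis(preds, [0.5, 0.3, 0.2]))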

    Strategic Manipulation of Empirical Tests

    Theories can be produced by individuals seeking a good reputation for knowledge. Hence, a significant question is how to test theories anticipating that they might have been produced by (potentially uninformed) experts who prefer their theories not to be rejected. If a theory that predicts exactly like the data-generating process is not rejected with high probability, the test is said to not reject the truth. On the other hand, if a false expert, with no knowledge of the data-generating process, can strategically select theories that will not be rejected, the test can be ignorantly passed. Such tests have limited use because they cannot dismiss even completely uninformed experts. Many tests proposed in the literature (e.g., calibration tests) can be ignorantly passed. Dekel and Feinberg (2006) introduced a class of tests that seemingly have some power to dismiss uninformed experts. We show that some tests from their class can also be ignorantly passed. One of those tests, however, does not reject the truth and cannot be ignorantly passed. Thus, this empirical test can dismiss false experts. We also show that a false reputation of knowledge can be strategically sustained for an arbitrary, but given, number of periods, no matter which test is used (provided that it does not reject the truth). However, false experts can be discredited, even with bounded data sets, if the domain of permissible theories is mildly restricted.
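
    As a rough illustration of what a calibration test checks (a minimal sketch under simplifying assumptions, not one of the tests analysed in the paper or in Dekel and Feinberg 2006): group periods by the announced probability and compare each announced value with the empirical frequency of the event on those periods. The abstract's point is that tests of this kind can nonetheless be passed strategically by an expert with no knowledge of the data-generating process.

        from collections import defaultdict

        def calibration_gaps(forecasts, outcomes):
            """For each announced probability, return the gap between that
            probability and the empirical frequency of the event on the
            periods where it was announced."""
            buckets = defaultdict(list)
            for p, y in zip(forecasts, outcomes):
                buckets[round(p, 1)].append(y)
            return {p: abs(p - sum(ys) / len(ys)) for p, ys in buckets.items()}

        # A naive tester might "pass" a forecaster whose gaps shrink as data
        # accumulate; small gaps alone do not certify genuine knowledge.
        forecasts = [0.7, 0.7, 0.3, 0.7, 0.3, 0.3, 0.7, 0.3]
        outcomes  = [1,   1,   0,   1,   1,   0,   0,   0]
        print(calibration_gaps(forecasts, outcomes))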

    Reputation in a Model of Monetary Policy with Incomplete Information

    Previous models of rules versus discretion are extended to include uncertainty about the policymaker's "type." When people observe low inflation, they raise the possibility that the policymaker is committed to low inflation (type 1). This enhancement of reputation gives the uncommitted policymaker (type 2) an incentive to masquerade as the committed type. In equilibrium the policymaker of type 1 delivers surprisingly low inflation -- with corresponding costs to the economy -- over an extended interval. The type 2 policymaker mimics this outcome for a while but eventually shifts to high inflation. This high inflation is surprising initially, but subsequently becomes anticipated.
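
    The reputation mechanism in this model is Bayesian updating about the policymaker's type. The sketch below is only illustrative: it assumes the committed type always produces low inflation and that the uncommitted type mimics with a fixed probability, with made-up parameter values rather than anything taken from the paper.

        def update_reputation(prior_type1, p_low_given_type2):
            """Posterior probability that the policymaker is the committed
            type (type 1) after one more observation of low inflation."""
            return prior_type1 / (prior_type1 + (1.0 - prior_type1) * p_low_given_type2)

        # Reputation rises as low inflation keeps being observed, which is
        # what gives the uncommitted type an incentive to masquerade.
        rep = 0.5
        for t in range(5):
            rep = update_reputation(rep, p_low_given_type2=0.8)
            print(f"period {t + 1}: P(type 1) = {rep:.3f}")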

    Strategic Manipulation of Empirical Tests

    Theories can be produced by experts seeking a reputation for having knowledge. Hence, a tester could anticipate that theories may have been strategically produced by uninformed experts who want to pass an empirical test. We show that, with no restriction on the domain of permissible theories, strategic experts cannot be discredited for an arbitrary but given number of periods, no matter which test is used (provided that the test does not reject the actual data-generating process). Natural ways around this impossibility result include 1) assuming that unbounded data sets are available and 2) restricting the domain of permissible theories (opening the possibility that the actual data-generating process is rejected out of hand). In both cases, it is possible to dismiss strategic experts, but only to a limited extent. These results show significant limits on what data can accomplish when experts produce theories strategically.

    Testing Strategic Experts