
    On predictive probability matching priors

    We revisit the question of priors that achieve approximate matching of Bayesian and frequentist predictive probabilities. Such priors may be thought of as providing frequentist calibration of Bayesian prediction, or simply as devices for producing frequentist prediction regions. Here we analyse the O(n^{-1}) term in the expansion of the coverage probability of a Bayesian prediction region, as derived in [Ann. Statist. 28 (2000) 1414--1426]. Unlike the situation for parametric matching, asymptotic predictive matching priors may depend on the level α. We investigate uniformly predictive matching priors (UPMPs); that is, priors for which this O(n^{-1}) term is zero for all α. It was shown in [Ann. Statist. 28 (2000) 1414--1426] that, in the case of quantile matching and a scalar parameter, if such a prior exists then it must be Jeffreys' prior. In the present article we investigate UPMPs in the multiparameter case and present some general results about the form, and uniqueness or otherwise, of UPMPs for both quantile and highest predictive density matching.
    Comment: Published at http://dx.doi.org/10.1214/074921708000000048 in the IMS Collections (http://www.imstat.org/publications/imscollections.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
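    In schematic terms (the symbols below are generic placeholders, not notation taken verbatim from the paper), the quantity being analysed is the O(n^{-1}) coefficient in the coverage expansion of a Bayesian prediction region:

```latex
% Coverage of the nominal level-(1-\alpha) Bayesian prediction region
% R_\alpha built under prior \pi; Q denotes the O(1/n) coverage-error
% coefficient (schematic form).
P_\theta\bigl\{\, Y \in R_\alpha(X; \pi) \,\bigr\}
  = 1 - \alpha + \frac{Q(\theta, \alpha, \pi)}{n} + o(n^{-1})
```

    A uniformly predictive matching prior is then a prior π for which Q(θ, α, π) vanishes for all levels α (and all θ), rather than merely for one fixed level.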

    Tight Bounds for the Price of Anarchy of Simultaneous First Price Auctions

    We study the Price of Anarchy of simultaneous first-price auctions for buyers with submodular and subadditive valuations. The current best upper bounds for the Bayesian Price of Anarchy of these auctions are e/(e-1) [Syrgkanis and Tardos 2013] and 2 [Feldman et al. 2013], respectively. We provide matching lower bounds for both cases, even for the case of full information and for mixed Nash equilibria, via an explicit construction. We present an alternative proof of the upper bound of e/(e-1) for first-price auctions with fractionally subadditive valuations which reveals the worst-case price distribution; this distribution is used as a building block for the matching lower bound construction. We generalize our results to a general class of item bidding auctions that we call bid-dependent auctions (including first-price auctions and all-pay auctions), where the winner is always the highest bidder and each bidder's payment depends only on his own bid. Finally, we apply our techniques to discriminatory price multi-unit auctions. We complement the results of [de Keijzer et al. 2013] for the case of subadditive valuations by providing a matching lower bound of 2. For the case of submodular valuations, we provide a lower bound of 1.109. For the same class of valuations, we were able to reproduce the upper bound of e/(e-1) using our non-smooth approach.
    Comment: 37 pages, 5 figures, ACM Transactions on Economics and Computation
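    As a toy, full-information illustration (the instance, discrete bid grid, and deterministic tie-breaking rule below are all assumptions of this sketch, not the paper's construction), the pure-strategy Price of Anarchy of a tiny simultaneous first-price auction can be computed by brute force:

```python
from itertools import product

# Hypothetical instance: 2 items, 2 bidders with additive valuations.
values = [
    {0: 2, 1: 1},  # bidder 0's value for each item
    {0: 1, 1: 2},  # bidder 1's value for each item
]
bid_space = [0, 1, 2]
items = [0, 1]

def winners(profile):
    # First-price rule: each item goes to the highest bidder;
    # ties are broken in favour of the lower-indexed bidder.
    return {j: max(range(2), key=lambda b: (profile[b][j], -b)) for j in items}

def utility(b, profile):
    w = winners(profile)
    return sum(values[b][j] - profile[b][j] for j in items if w[j] == b)

def welfare(profile):
    w = winners(profile)
    return sum(values[w[j]][j] for j in items)

def is_pure_nash(profile):
    for b in range(2):
        for dev in product(bid_space, repeat=len(items)):
            deviated = list(profile)
            deviated[b] = dev
            if utility(b, tuple(deviated)) > utility(b, profile):
                return False
    return True

profiles = list(product(product(bid_space, repeat=len(items)), repeat=2))
opt = max(welfare(p) for p in profiles)
nash = [p for p in profiles if is_pure_nash(p)]
worst = min(welfare(p) for p in nash)
print(opt, worst, opt / worst)  # pure-strategy PoA of this toy instance
```

    On this instance the efficient allocation has welfare 4, while the worst pure equilibrium (both bidders bidding 1 on both items, with ties favouring bidder 0) has welfare 3, giving a brute-force PoA of 4/3, well below the e/(e-1) and 2 bounds discussed above.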

    Probability-Matching Predictors for Extreme Extremes

    A location- and scale-invariant predictor is constructed which exhibits good probability matching for extreme predictions outside the span of data drawn from a variety of (stationary) general distributions. It is constructed via the three-parameter (μ, σ, ξ) Generalized Pareto Distribution (GPD). The predictor is designed to provide matching probability exactly for the GPD in both the extreme heavy-tailed limit and the extreme bounded-tail limit, whilst giving a good approximation to probability matching at all intermediate values of the tail parameter ξ. The predictor is valid even for small sample sizes N, as small as N = 3. The main purpose of this paper is to present the somewhat lengthy derivations, which draw heavily on the theory of hypergeometric functions, particularly the Lauricella functions. Whilst the construction is inspired by the Bayesian approach to the prediction problem, it considers the case of vague prior information about both parameters and model, and all derivations are undertaken using sampling theory.
    Comment: 22 pages, 7 figures
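    For orientation only, the following sketch shows the naive alternative that probability matching is meant to improve upon: a plug-in GPD prediction bound fitted by maximum likelihood with scipy. The sample size, parameter values, and fixed zero location are all illustrative assumptions; this is not the predictor derived in the paper:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Simulate a heavy-tailed sample from a GPD with shape xi = 0.3 (illustrative).
xi_true, sigma_true = 0.3, 1.0
data = genpareto.rvs(xi_true, scale=sigma_true, size=50, random_state=rng)

# Maximum likelihood fit with the location fixed at zero.
xi_hat, _, sigma_hat = genpareto.fit(data, floc=0)

# Naive plug-in upper prediction bound at nominal level 1 - alpha.
alpha = 0.05
bound = genpareto.ppf(1 - alpha, xi_hat, loc=0, scale=sigma_hat)

# Check frequentist coverage against fresh draws from the true distribution;
# a plug-in bound generally misses the nominal level, which is the
# miscalibration that probability-matching predictors aim to remove.
fresh = genpareto.rvs(xi_true, scale=sigma_true, size=10000, random_state=rng)
coverage = float(np.mean(fresh <= bound))
print(bound, coverage)
```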

    Relational Entropic Dynamics of Particles

    The general framework of entropic dynamics is used to formulate a relational quantum dynamics. The main new idea is to use tools of information geometry to develop an entropic measure of the mismatch between successive configurations of a system. This leads to an entropic version of the classical best matching technique developed by J. Barbour and collaborators. The procedure is illustrated in the simple case of a system of N particles with global translational symmetry. The generalization to other symmetries, whether global (rotational invariance) or local (gauge invariance), is straightforward. Entropic best matching allows a quantum implementation of Mach's principles of spatial and temporal relationalism and provides the foundation for a method of handling gauge theories in an informational framework.
    Comment: Presented at MaxEnt 2015, the 35th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 19--24, 2015, Potsdam, NY, USA)

    Latent Bayesian melding for integrating individual and population models

    In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over population statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of single-channel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.
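    The logarithmic opinion pool at the heart of this proposal can be sketched numerically: the pooled density is proportional to p1(x)^λ · p2(x)^(1-λ). The two Gaussian "models" and the equal weight below are toy assumptions, not the models of the case study:

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def log_opinion_pool(logp1, logp2, lam, dx):
    # Logarithmic opinion pool: p_pool(x) ∝ p1(x)**lam * p2(x)**(1 - lam),
    # computed in log space and renormalized on the grid.
    logp = lam * logp1 + (1 - lam) * logp2
    logp -= logp.max()  # numerical stability before exponentiating
    p = np.exp(logp)
    return p / (p.sum() * dx)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Toy stand-ins: an individual-level model centred at 0 and a
# population-level model centred at 2, pooled with equal weight.
pooled = log_opinion_pool(gauss_logpdf(x, 0.0, 1.0),
                          gauss_logpdf(x, 2.0, 1.0), 0.5, dx)

# For two unit-variance Gaussians the pool is again Gaussian, with mean
# lam * mu1 + (1 - lam) * mu2 = 1.0.
pooled_mean = float(np.sum(x * pooled) * dx)
print(pooled_mean)
```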

    Approximate Bayesian Computation in State Space Models

    A new approach to inference in state space models is proposed, based on approximate Bayesian computation (ABC). ABC avoids evaluation of the likelihood function by matching observed summary statistics with statistics computed from data simulated from the true process, exact inference being feasible only if the statistics are sufficient. With finite-sample sufficiency unattainable in the state space setting, we seek asymptotic sufficiency via the maximum likelihood estimator (MLE) of the parameters of an auxiliary model. We prove that this auxiliary model-based approach achieves Bayesian consistency, and that, in a precise limiting sense, the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score. In multiple-parameter settings a separate treatment of scalar parameters, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality. Some attention is given to a structure in which the state variable is driven by a continuous-time process, with exact inference typically infeasible in this case as a result of intractable transitions. The ABC method is demonstrated using the unscented Kalman filter as a fast and simple way of producing an approximation in this setting, with a stochastic volatility model for financial returns used for illustration.
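    A minimal rejection-ABC sketch in the spirit of the auxiliary-model idea above, using a toy linear state space model and a least-squares AR(1) coefficient as the auxiliary statistic (the model, prior, tolerance, and sample sizes are all assumptions of this sketch, not the stochastic volatility application):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(phi, T=500):
    # Toy state space model: latent AR(1) state observed with Gaussian noise.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal()
    return x + 0.5 * rng.normal(size=T)

def aux_stat(y):
    # Auxiliary model: AR(1) fitted to the observations by least squares;
    # its coefficient estimate acts as the (approximately sufficient) summary.
    return float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

y_obs = simulate(0.8)           # "observed" data with true phi = 0.8
s_obs = aux_stat(y_obs)

# Rejection ABC: draw phi from a uniform prior and keep draws whose
# auxiliary statistic falls within a tolerance of the observed one.
accepted = []
for _ in range(2000):
    phi = rng.uniform(0.0, 1.0)
    if abs(aux_stat(simulate(phi)) - s_obs) < 0.05:
        accepted.append(phi)

posterior_mean = float(np.mean(accepted))
print(len(accepted), posterior_mean)
```

    Because the observed and simulated statistics come from the same model, the accepted draws concentrate around the true φ even though the auxiliary AR(1) estimate itself is biased by the measurement noise.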