
    A categorical foundation for Bayesian probability

    Given two measurable spaces $H$ and $D$ with countably generated $\sigma$-algebras, a perfect prior probability measure $P_H$ on $H$, and a sampling distribution $S: H \rightarrow D$, there is a corresponding inference map $I: D \rightarrow H$ which is unique up to a set of measure zero. Thus, given a data measurement $\mu: 1 \rightarrow D$, a posterior probability $\widehat{P_H} = I \circ \mu$ can be computed. This procedure is iterative: each updated probability $P_H$ yields a new joint distribution, which in turn yields a new inference map $I$, and the process repeats with each additional measurement. The main result uses an existence theorem for regular conditional probabilities by Faden, which holds in more generality than the setting of Polish spaces. This less stringent setting then allows for non-trivial decision rules (Eilenberg--Moore algebras) on finite (as well as non-finite) spaces, and also provides a common framework for decision theory and Bayesian probability.
    Comment: 15 pages; revised setting to more clearly explain how to incorporate perfect measures and the Giry monad; to appear in Applied Categorical Structures
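    On finite spaces the inference map is just Bayes' rule in matrix form, and the iteration described above can be run directly. A minimal numerical sketch (the three-point $H$, two-point $D$, and the particular sampling matrix are illustrative assumptions, not from the paper):

```python
import numpy as np

# Row h of S is the sampling distribution S(h, -) on D for hypothesis h.
S = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.2, 0.8]])
P_H = np.array([1/3, 1/3, 1/3])  # prior on H

def inference_map(P_H, S):
    """Bayes' rule: I(d, h) = S(h, d) * P_H(h) / P_D(d)."""
    joint = S * P_H[:, None]   # joint distribution on H x D
    P_D = joint.sum(axis=0)    # marginal on D
    return (joint / P_D).T     # rows indexed by d, columns by h

# Iterate: each observed measurement d yields a posterior, which
# becomes the prior for the next measurement.
for d in [0, 0, 1]:
    I = inference_map(P_H, S)
    P_H = I[d]
```

After each update `P_H` is again a probability measure on $H$, so the construction of a fresh joint distribution and inference map can repeat indefinitely, mirroring the iterative procedure in the abstract.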

    A probabilistic approach to quantum Bayesian games of incomplete information

    A Bayesian game is a game of incomplete information in which the rules of the game are not fully known to all players. We consider the Bayesian game of Battle of the Sexes, which has several Bayesian Nash equilibria, and investigate its outcome when the underlying probability set is obtained from generalized Einstein-Podolsky-Rosen experiments. We find that this probability set, which may become non-factorizable, results in a unique Bayesian Nash equilibrium of the game.
    Comment: 18 pages, 2 figures, accepted for publication in Quantum Information Processing
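    For reference, even the classical complete-information Battle of the Sexes has two pure-strategy Nash equilibria, which is what makes equilibrium selection interesting in the Bayesian variant. A minimal sketch that enumerates them by brute force (the payoff values are the textbook convention, not taken from the paper):

```python
from itertools import product

# Battle of the Sexes bimatrix: strategies O (opera) and F (football);
# entries are (row player's payoff, column player's payoff).
A = {("O", "O"): (2, 1), ("O", "F"): (0, 0),
     ("F", "O"): (0, 0), ("F", "F"): (1, 2)}

def pure_nash(payoffs, moves=("O", "F")):
    """Return all pure-strategy profiles where neither player can
    improve by a unilateral deviation."""
    eq = []
    for s1, s2 in product(moves, moves):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(u1 >= payoffs[(t, s2)][0] for t in moves)
        best2 = all(u2 >= payoffs[(s1, t)][1] for t in moves)
        if best1 and best2:
            eq.append((s1, s2))
    return eq

print(pure_nash(A))  # [('O', 'O'), ('F', 'F')]
```

The coordination on either equilibrium favors a different player, which is the tension the quantum (non-factorizable) probability set is shown to resolve into a unique equilibrium.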

    Bayesian optimization for computationally extensive probability distributions

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. The key idea of the proposed method is to use the extreme values of acquisition functions computed by Gaussian processes to choose the next training point, which should lie near a local or global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, Bayesian optimization improves the results efficiently when combined with the steepest descent method, and is thus a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
    Comment: 13 pages, 5 figures
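    The loop described above — fit a Gaussian process to the points sampled so far, take the extreme value of an acquisition function, and evaluate the expensive target there — can be sketched as follows. The toy 1D log-posterior and the upper-confidence-bound acquisition are assumptions for illustration; the paper's acquisition function may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Stand-in for an expensive-to-evaluate log-probability (hypothetical).
    return -(x - 0.7) ** 2 / 0.02

def rbf(a, b, ell=0.1):
    # Squared-exponential covariance between two 1D point sets.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

grid = np.linspace(0.0, 1.0, 201)
X = list(rng.uniform(0.0, 1.0, 3))   # initial design points
y = [log_post(x) for x in X]

for _ in range(10):
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, ya)                       # GP posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.einsum('ij,ji->i', Ks, v), 0.0, None)
    ucb = mu + 2.0 * np.sqrt(var)                          # acquisition
    x_next = grid[np.argmax(ucb)]                          # its extreme value
    X.append(x_next)
    y.append(log_post(x_next))

best = X[int(np.argmax(y))]   # best maximizer found so far
```

Each iteration costs one evaluation of the expensive target, which is why a small, adaptively chosen set of sampling points can outperform random search at the same budget.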

    Consistency of Bayesian Linear Model Selection With a Growing Number of Parameters

    Linear models with a growing number of parameters have been widely used in modern statistics. One important problem for this kind of model is variable selection. Bayesian approaches, which provide a stochastic search of informative variables, have gained popularity. In this paper, we study the asymptotic properties of Bayesian model selection when the model dimension $p$ grows with the sample size $n$. We consider $p \le n$ and provide sufficient conditions under which: (1) with large probability, the posterior probability of the true model (from which samples are drawn) uniformly dominates the posterior probability of any incorrect model; and (2) with large probability, the posterior probability of the true model converges to one. Both (1) and (2) guarantee that the true model will be selected under a Bayesian framework. We also demonstrate several situations in which (1) holds but (2) fails, which illustrates the difference between these two properties. Simulated examples are provided to illustrate the main results.
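    A minimal simulation of the flavor of property (2) — the posterior probability of the true model dominating as $n$ grows — using a BIC approximation to the marginal likelihood. The approximation, the variable names, and the candidate model list are illustrative assumptions, not the paper's exact prior:

```python
import numpy as np

def model_posteriors(y, X, models):
    """Approximate posterior model probabilities via BIC, which stands in
    for -2 log marginal likelihood under a uniform prior over models."""
    n = len(y)
    scores = {}
    for name, cols in models.items():
        Xm = X[:, cols]
        beta = np.linalg.lstsq(Xm, y, rcond=None)[0]
        rss = np.sum((y - Xm @ beta) ** 2)
        scores[name] = n * np.log(rss / n) + Xm.shape[1] * np.log(n)
    s_min = min(scores.values())
    w = {m: np.exp(-0.5 * (s - s_min)) for m, s in scores.items()}
    total = sum(w.values())
    return {m: v / total for m, v in w.items()}

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # true model uses x1 only
models = {"x1": [0], "x1,x2": [0, 1], "x1,x2,x3": [0, 1, 2]}
post = model_posteriors(y, X, models)
```

The $\log n$ complexity penalty grows with the sample size, so spurious predictors are increasingly disfavored and the true model's posterior mass tends toward one.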

    The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments

    Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, and instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in 2 new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
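    The core estimate can be read as the posterior mean of a Beta-Bernoulli model: with $k$ "yes" outcomes in $N$ mental samples and a symmetric generic prior $\mathrm{Beta}(\beta, \beta)$, the judged probability is $(k + \beta)/(N + 2\beta)$. A minimal sketch under that reading (parameter names are ours):

```python
from fractions import Fraction

def bayesian_sampler_estimate(k, N, beta=1):
    """Posterior-mean probability estimate from k 'yes' outcomes in N
    mental samples, under a symmetric Beta(beta, beta) generic prior.
    The raw relative frequency k/N is recovered as beta -> 0."""
    return Fraction(k + beta, N + 2 * beta)

# Small samples are regularized toward 1/2, so extreme naive estimates
# (k = 0 or k = N) are moderated rather than reported as 0 or 1:
print(bayesian_sampler_estimate(0, 4))  # 1/6
print(bayesian_sampler_estimate(4, 4))  # 5/6
```

This regularization is the trade-off the abstract describes: individual judgments become more accurate on average, but sets of judgments (e.g. an event and its complement across different sample draws) need no longer cohere.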

    Opinion Pooling under Asymmetric Information

    If each member of a group assigns a certain probability to a hypothesis, what probability should the collective as a whole assign? More generally, how should individual probability functions be merged into a single collective one? I investigate this question in the case that the individual probability functions are based on different information sets. Under suitable assumptions, I present a simple solution to this aggregation problem, and a more complex solution that can cope with any overlaps between different persons' information sets. The solutions are derived from an axiomatic system that models the individuals as well as the collective as Bayesian rational agents. Two notable features are that the solutions may be parameter-free, and that they incorporate each individual's information even though the individuals need not communicate their (perhaps very complex) information, but rather reveal only the resulting probabilities.
    Keywords: opinion pooling, probability aggregation, decision theory, social choice theory, Bayesian rationality, Bayesian aggregation, information
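    For the simple case of a binary hypothesis, a common prior, and private information that is independent conditional on the hypothesis, a standard parameter-free pooling rule of this kind multiplies each individual's likelihood ratio onto the shared prior odds. A minimal sketch (the paper's exact axiomatic solution, especially for overlapping information sets, may differ):

```python
def pool(prior, posteriors):
    """Pool individual posterior probabilities for a binary hypothesis,
    assuming a common prior and private information independent
    conditional on the hypothesis.  Each reported posterior reveals a
    likelihood ratio, which multiplies onto the shared prior odds."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for p in posteriors:
        odds *= (p / (1.0 - p)) / prior_odds   # individual's evidence
    return odds / (1.0 + odds)

# Two agents who independently became more confident than the 0.5 prior
# push the collective well past either individual judgment:
collective = pool(0.5, [0.8, 0.8])  # 16/17, about 0.941
```

Note that only the resulting probabilities are communicated, as in the abstract: the underlying (possibly complex) private evidence never needs to be shared.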