
    Visualization of Conjugate Distributions in Latent Dirichlet Allocation Model

    In Bayesian probability theory, if the posterior distribution p(θ|x) is in the same family as the prior probability distribution p(θ), the prior and posterior are called conjugate distributions, and the prior is called a conjugate prior for the likelihood function. In the Latent Dirichlet Allocation model, the likelihood function is Multinomial and the prior is Dirichlet. The Dirichlet distribution is therefore a conjugate prior, and the posterior is also Dirichlet. The posterior function is a parameter mixture distribution in which the parameter of the likelihood function is distributed according to the given Dirichlet distribution. This compound probability distribution is, however, difficult to understand and to picture. To help readers grasp it intuitively, the paper visualizes the parameter mixture distribution.
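    As a rough illustration of the Dirichlet-Multinomial conjugacy and the parameter mixture (compound) distribution described above, the following minimal NumPy sketch updates a Dirichlet prior with multinomial counts and then samples from the resulting compound distribution. The concentration values and counts are made up for illustration; this is not the paper's visualization code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirichlet prior over a 3-category multinomial parameter theta
# (hypothetical concentration values, not taken from the paper).
alpha_prior = np.array([2.0, 3.0, 5.0])

# Observed category counts x for the multinomial likelihood.
counts = np.array([10, 4, 1])

# Conjugacy: the posterior p(theta | x) is Dirichlet(alpha_prior + counts).
alpha_post = alpha_prior + counts

# Parameter mixture (compound) distribution: draw theta from the
# Dirichlet, then draw new counts from Multinomial(n, theta).
def sample_compound(alpha, n, size):
    thetas = rng.dirichlet(alpha, size=size)
    return np.array([rng.multinomial(n, t) for t in thetas])

draws = sample_compound(alpha_post, n=15, size=5000)
print("posterior concentration:", alpha_post)
print("mean of compound draws :", draws.mean(axis=0))
```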

    A conjugate prior for discrete hierarchical log-linear models

    In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probability parameters is a major challenge. In this paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-linear models, which includes the class of graphical models. These priors are defined as the Diaconis--Ylvisaker conjugate priors on the log-linear parameters subject to "baseline constraints" under multinomial sampling. We also derive the induced prior on the cell probabilities and show that it is a generalization of the hyper Dirichlet prior. We show that this prior has several desirable properties and illustrate its usefulness by identifying the most probable decomposable, graphical and hierarchical log-linear models for a six-way contingency table. Comment: Published at http://dx.doi.org/10.1214/08-AOS669 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
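    The paper's Diaconis--Ylvisaker prior acts on the log-linear parameters and is not reproduced here. As a hedged point of reference, the sketch below shows only the simplest special case it relates to: a conjugate Dirichlet update for the cell probabilities of a hypothetical 2x2 contingency table under multinomial sampling, the case that the hyper Dirichlet prior generalizes.

```python
import numpy as np

# Hypothetical 2x2 contingency table of observed cell counts.
table = np.array([[12,  5],
                  [ 7, 20]])

# Flat Dirichlet prior on the vector of cell probabilities
# (a simple special case; the paper's prior acts on log-linear
# parameters and generalizes the hyper Dirichlet prior).
alpha_prior = np.ones(table.size)

# Conjugate update under multinomial sampling of the cells.
alpha_post = alpha_prior + table.ravel()

# Posterior mean of the cell probabilities, reshaped to the table.
post_mean = (alpha_post / alpha_post.sum()).reshape(table.shape)
print(post_mean)
```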

    Bayesian nonparametric modeling of latent partitions via Stirling-gamma priors

    Dirichlet process mixtures are particularly sensitive to the value of the so-called precision parameter, which controls the behavior of the underlying latent partition. Randomization of the precision through a prior distribution is a common solution, which leads to more robust inferential procedures. However, existing prior choices do not allow for transparent elicitation, due to the lack of analytical results. We introduce and investigate a novel prior for the Dirichlet process precision, the Stirling-gamma distribution. We study the distributional properties of the induced random partition, with an emphasis on the number of clusters. Our theoretical investigation clarifies the reasons for the improved robustness properties of the proposed prior. Moreover, we show that, under specific choices of its hyperparameters, the Stirling-gamma distribution is conjugate to the random partition of a Dirichlet process. We illustrate the usefulness of our approach with an ecological application on the detection of communities of ant workers.
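    The Stirling-gamma density itself is not reproduced here. The sketch below only illustrates, under assumed hyperparameters, the mechanism the abstract describes: randomizing the Dirichlet process precision with a generic gamma prior induces a prior on the number of clusters of the latent partition, simulated via the Chinese restaurant process.

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_num_clusters(alpha, n):
    """Simulate a Chinese restaurant process with precision alpha
    for n customers and return the number of occupied tables."""
    tables = []  # current table (cluster) sizes
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)      # open a new table (new cluster)
        else:
            tables[k] += 1        # join an existing table
    return len(tables)

# Randomize the precision with a gamma prior (hypothetical shape/scale,
# standing in for the Stirling-gamma prior studied in the paper).
n, draws = 200, 2000
alphas = rng.gamma(shape=2.0, scale=1.0, size=draws)
clusters = [crp_num_clusters(a, n) for a in alphas]
print("prior mean number of clusters:", np.mean(clusters))
```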

    Fast Predictive Uncertainty for Classification with Bayesian Deep Networks

    In Bayesian Deep Learning, distributions over the output of classification neural networks are approximated by first constructing a Gaussian distribution over the weights, then sampling from it to obtain a distribution over the categorical output distribution. This is costly. We reconsider old work to construct a Dirichlet approximation of this output distribution, which yields an analytic map between Gaussian distributions in logit space and Dirichlet distributions (the conjugate prior to the categorical) in the output space. We argue that the resulting Dirichlet distribution has theoretical and practical advantages, in particular more efficient computation of the uncertainty estimate, and scaling to large datasets and networks such as ImageNet and DenseNet. We demonstrate the use of this Dirichlet approximation by using it to construct a lightweight uncertainty-aware output ranking for the ImageNet setup.
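    The paper's analytic Gaussian-to-Dirichlet map is not reproduced here. The sketch below only illustrates the costly sampling route the abstract contrasts it with (draw logits from a Gaussian, push them through the softmax), followed by a crude moment-matched Dirichlet fit to the resulting samples; the logit means and variances are made-up values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Gaussian over the logits of a 4-class classifier
# (mean and diagonal standard deviations are made-up numbers).
mu = np.array([2.0, 0.5, -0.5, -1.0])
sigma = np.array([0.6, 0.4, 0.5, 0.7])

# Costly path described in the abstract: sample logits, apply the
# softmax to each sample, and inspect the induced distribution on
# the probability simplex.
logits = rng.normal(mu, sigma, size=(10_000, mu.size))
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Crude moment-matched Dirichlet fit to those samples (the paper
# instead derives an analytic Gaussian-to-Dirichlet map).
m = probs.mean(axis=0)
v = probs.var(axis=0)
s = np.mean(m * (1 - m) / v - 1)   # common concentration estimate
alpha = s * m
print("fitted Dirichlet concentration:", np.round(alpha, 2))
```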

    Thompson sampling based Monte-Carlo planning in POMDPs

    Monte-Carlo tree search (MCTS) has been drawing great interest in recent years for planning under uncertainty. One of the key challenges is the tradeoff between exploration and exploitation. To address this, we introduce a novel online planning algorithm for large POMDPs using Thompson sampling based MCTS that balances between cumulative and simple regrets. The proposed algorithm, Dirichlet-Dirichlet-NormalGamma based Partially Observable Monte-Carlo Planning (D2NG-POMCP), treats the accumulated reward of performing an action from a belief state in the MCTS search tree as a random variable following an unknown distribution with hidden parameters. A Bayesian method is used to model and infer the posterior distribution of these parameters by choosing the conjugate prior in the form of a combination of two Dirichlet and one NormalGamma distributions. Thompson sampling is exploited to guide the action selection in the search tree. Experimental results confirmed that our algorithm outperforms the state-of-the-art approaches on several common benchmark problems.
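    The full D2NG-POMCP planner is beyond a short example. The sketch below only illustrates its Thompson-sampling building block for Gaussian rewards: drawing a plausible mean reward for each action from a NormalGamma conjugate posterior and acting greedily on the draws, in a toy multi-armed setting with hypothetical hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical Gaussian-reward actions; true means/stds unknown to the agent.
true_means = [0.0, 0.5, 1.0]
true_stds = [1.0, 1.0, 1.0]

# NormalGamma prior hyperparameters (made-up values) shared by all actions.
mu0, kappa0, a0, b0 = 0.0, 1.0, 1.0, 1.0
rewards = [[] for _ in true_means]

def sample_mean(obs):
    """Draw a mean from the NormalGamma posterior given observed rewards."""
    n = len(obs)
    xbar = np.mean(obs) if n else 0.0
    ss = np.sum((np.array(obs) - xbar) ** 2) if n else 0.0
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    tau = rng.gamma(a_n, 1.0 / b_n)            # precision ~ Gamma(a_n, rate=b_n)
    return rng.normal(mu_n, 1.0 / np.sqrt(kappa_n * tau))

for _ in range(2000):
    choice = int(np.argmax([sample_mean(r) for r in rewards]))  # Thompson step
    rewards[choice].append(rng.normal(true_means[choice], true_stds[choice]))

print("pulls per action:", [len(r) for r in rewards])
```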