
    Comment: Lancaster Probabilities and Gibbs Sampling

    Comment on ``Lancaster Probabilities and Gibbs Sampling'' [arXiv:0808.3852]. Published at http://dx.doi.org/10.1214/08-STS252A in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics.

    Bayes factors and the geometry of discrete hierarchical loglinear models

    A standard tool for model selection in a Bayesian framework is the Bayes factor, which compares the marginal likelihoods of the data under two given models. In this paper, we consider the class of hierarchical loglinear models for discrete data given in the form of a contingency table with multinomial sampling. We take the Diaconis-Ylvisaker conjugate prior as the prior distribution on the loglinear parameters and the uniform distribution as the prior on the space of models. Under these conditions, the Bayes factor between two models is a function of their prior and posterior normalizing constants. These constants are functions of the hyperparameters $(m,\alpha)$, which can be interpreted respectively as the marginal counts and the total count of a fictitious contingency table. We study the behaviour of the Bayes factor when $\alpha$ tends to zero. Two mathematical objects play a central role in this study: first, the interior $C$ of the convex hull $\bar{C}$ of the support of the multinomial distribution for a given hierarchical loglinear model, together with its faces, and second, the characteristic function $\mathbb{J}_C$ of this convex set $C$. We show that, when $\alpha$ tends to 0, if the data lie on a face $F_i$ of $\bar{C_i}$, $i=1,2$, of dimension $k_i$, the Bayes factor behaves like $\alpha^{k_1-k_2}$. This implies in particular that when the data are in $C_1$ and in $C_2$, i.e. when $k_i$ equals the dimension of model $J_i$, the sparser model is favored, thus confirming the idea of Bayesian regularization. (37 pages.)
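For a two-way table, the prior and posterior normalizing constants have closed forms for the saturated and independence models, where the Diaconis-Ylvisaker prior reduces to Dirichlet priors on cell and margin probabilities. The following sketch (function names mine; an illustration of the mechanism, not the paper's general construction) computes the log Bayes factor from these constants and shows it growing as $\alpha$ shrinks when the counts lie in the interior of both hulls, favoring the sparser model:

```python
from math import lgamma

def log_beta(a):
    # log of the multivariate beta function B(a) = prod Gamma(a_i) / Gamma(sum a_i)
    return sum(lgamma(ai) for ai in a) - lgamma(sum(a))

def log_marg_saturated(x, alpha):
    # Dirichlet-multinomial log marginal likelihood (the multinomial
    # coefficient is omitted: it cancels in the Bayes factor);
    # symmetric Dirichlet prior with total mass alpha on the cells
    cells = [c for row in x for c in row]
    prior = [alpha / len(cells)] * len(cells)
    return log_beta([p + c for p, c in zip(prior, cells)]) - log_beta(prior)

def log_marg_independence(x, alpha):
    # independence model p_ij = r_i c_j with independent symmetric
    # Dirichlet priors of total mass alpha on the row and column margins
    rows = [sum(row) for row in x]
    cols = [sum(col) for col in zip(*x)]
    out = 0.0
    for margin in (rows, cols):
        prior = [alpha / len(margin)] * len(margin)
        out += log_beta([p + m for p, m in zip(prior, margin)]) - log_beta(prior)
    return out

x = [[4, 3], [2, 1]]  # all cells positive: the counts lie in the interior of both hulls
for alpha in (1.0, 0.1, 0.01):
    log_bf = log_marg_independence(x, alpha) - log_marg_saturated(x, alpha)
    print(f"alpha={alpha}: log Bayes factor (independence vs saturated) = {log_bf:.2f}")
```

With all cells positive, the log Bayes factor in favor of the independence model increases as $\alpha$ decreases, consistent with the sparser model being favored in the small-$\alpha$ limit.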

    Dirichlet random walks

    This article provides tools for the study of the Dirichlet random walk in $\mathbb{R}^d$. By this we mean the random variable $W=X_1\Theta_1+\cdots+X_n\Theta_n$ where $X=(X_1,\ldots,X_n)\sim\mathcal{D}(q_1,\ldots,q_n)$ is Dirichlet distributed and where $\Theta_1,\ldots,\Theta_n$ are i.i.d., uniformly distributed on the unit sphere of $\mathbb{R}^d$ and independent of $X$. In particular, we compute explicitly the distribution of $W$ in a number of cases. Some of our results already appear in the literature, in particular in the papers by G\'erard Le Ca\"er (2010, 2011). In these cases, our proofs are much simpler than the original ones, since we use a kind of Stieltjes transform of $W$ instead of the Laplace transform; as a consequence, hypergeometric functions replace the Bessel functions. A crucial ingredient is a particular case of the classical and nontrivial identity, true for $0\leq u\leq 1/2$: $${}_2F_1(2a,2b;a+b+\tfrac{1}{2};u)={}_2F_1(a,b;a+b+\tfrac{1}{2};4u-4u^2).$$ We extend these results to a study of the limits of Dirichlet random walks when the number of added terms goes to infinity, interpreting the results in terms of an integral with respect to a Dirichlet process. We introduce the ideas of Dirichlet semigroups and of Dirichlet infinite divisibility and characterize these infinitely divisible distributions in the sense of Dirichlet when they are concentrated on the unit ball of $\mathbb{R}^d$. Keywords: Dirichlet processes, Stieltjes transforms, random flight, distributions in a ball, hyperuniformity, infinite divisibility in the sense of Dirichlet. AMS classification: 60D99, 60F99.
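The walk is straightforward to simulate, which is useful for checking the explicit distributions numerically. A minimal numpy sketch (the function name is mine), using the standard trick of normalizing Gaussian vectors to get uniform directions on the sphere:

```python
import numpy as np

def dirichlet_random_walk(n, d, q, size, rng):
    """Sample `size` realizations of W = X_1*Theta_1 + ... + X_n*Theta_n,
    where X ~ Dirichlet(q_1,...,q_n) and the Theta_i are i.i.d. uniform
    on the unit sphere of R^d, independent of X."""
    X = rng.dirichlet(q, size=size)                       # (size, n) Dirichlet weights
    G = rng.standard_normal((size, n, d))                 # Gaussian vectors...
    Theta = G / np.linalg.norm(G, axis=2, keepdims=True)  # ...normalized to the sphere
    return np.einsum('sn,snd->sd', X, Theta)              # weighted sum of directions

rng = np.random.default_rng(0)
W = dirichlet_random_walk(n=4, d=3, q=[1.0] * 4, size=100_000, rng=rng)

# Since the X_i are nonnegative and sum to 1 while |Theta_i| = 1,
# W always lies in the closed unit ball of R^d.
print(np.linalg.norm(W, axis=1).max())
```

The printed maximum norm is at most 1, illustrating why the limiting infinitely divisible distributions in the sense of Dirichlet are naturally studied on the unit ball.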

    Wishart distributions for decomposable graphs

    When considering a graphical Gaussian model $\mathcal{N}_G$ Markov with respect to a decomposable graph $G$, the parameter space of interest for the precision parameter is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. The parameter space for the scale parameter of $\mathcal{N}_G$ is the cone $Q_G$, dual to $P_G$, of incomplete matrices whose submatrices corresponding to the cliques of $G$ are positive definite. In this paper we construct on the cones $Q_G$ and $P_G$ two families of Wishart distributions, namely the Type I and Type II Wisharts. They can be viewed as generalizations of the hyper Wishart and of the inverse of the hyper inverse Wishart as defined by Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272-1317]. We show that the Type I and II Wisharts have properties similar to those of the hyper and hyper inverse Wishart. Indeed, the inverse of the Type II Wishart forms a conjugate family of priors for the covariance parameter of the graphical Gaussian model and is strong directed hyper Markov for every direction given to the graph by a perfect order of its cliques, while the Type I Wishart is weak hyper Markov. Moreover, the inverse Type II Wishart as a conjugate family has the advantage of a multidimensional shape parameter, thus offering flexibility in the choice of a prior. Published at http://dx.doi.org/10.1214/009053606000001235 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics.
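To make the two cones concrete, here is a small numerical sketch for the path graph 1-2-3, whose only missing edge is {1,3} and whose cliques are {1,2} and {2,3} (the example graph, helper names, and membership tests are mine, for illustration only):

```python
import numpy as np

def is_pd(M):
    """Positive definiteness via Cholesky (symmetric input assumed)."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

# Decomposable path graph 1 - 2 - 3 (0-indexed): edge {0,2} is missing,
# and the cliques are {0,1} and {1,2}.
missing = [(0, 2)]
cliques = [[0, 1], [1, 2]]

def in_P_G(K):
    """K lies in P_G: positive definite with zeros on the missing edges."""
    return is_pd(K) and all(K[i, j] == 0 for i, j in missing)

def in_Q_G(S):
    """S lies in Q_G: entries over missing edges are unconstrained
    (incomplete matrix); every clique submatrix must be positive definite."""
    return all(is_pd(S[np.ix_(c, c)]) for c in cliques)

K = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(in_P_G(K), in_Q_G(K))
```

This particular tridiagonal matrix satisfies both membership tests; a generic positive definite matrix with a nonzero (1,3) entry would lie in neither $P_G$ nor outside $Q_G$'s clique conditions, since $Q_G$ only constrains the clique blocks.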

    Gaussian approximation of Gaussian scale mixture

    For a given positive random variable $V>0$ and $Z\sim N(0,1)$ independent of $V$, we compute the scalar $t_0$ such that the distance between $Z\sqrt{V}$ and $Z\sqrt{t_0}$, in the $L^2(\mathbb{R})$ sense, is minimal. We also consider the same problem in several dimensions, where $V$ is a random positive definite matrix. (13 pages.)
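The scalar case can be explored numerically: estimate the density of $Z\sqrt{V}$ by Monte Carlo and minimize the squared $L^2(\mathbb{R})$ distance to the $N(0,t)$ density over a grid of $t$. A sketch under the illustrative assumption $V\sim\mathrm{Exp}(1)$ (the choice of mixing law and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mixing variable (an assumption): V ~ Exponential(1)
v = rng.exponential(1.0, size=4000)

xs = np.linspace(-12.0, 12.0, 1201)
dx = xs[1] - xs[0]

def normal_pdf(x, var):
    return np.exp(-x * x / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Monte Carlo estimate of the mixture density of Z*sqrt(V):
# an average of N(0, v) densities over the sampled values of V
f_mix = normal_pdf(xs[:, None], v[None, :]).mean(axis=1)

# Squared L2 distance to N(0, t), minimized over a grid of t
ts = np.linspace(0.05, 5.0, 400)
dist = [((f_mix - normal_pdf(xs, t)) ** 2).sum() * dx for t in ts]
t0 = ts[int(np.argmin(dist))]
print(f"L2-optimal t0 ~ {t0:.3f}  (for comparison, E[V] = 1)")
```

Note that the $L^2$-optimal $t_0$ need not equal $E[V]$: matching densities in $L^2$ weighs the peak of the mixture heavily, so for this peaked Laplace-type mixture the optimum lands below the mean of $V$.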