28 research outputs found

    Gaussian approximation of Gaussian scale mixture

    For a given positive random variable V > 0 and Z ∼ N(0, 1) independent of V, we compute the scalar t₀ such that the distance between Z√V and Z√t₀, in the L²(ℝ) sense, is minimal. We also consider the same problem in several dimensions, where V is a random positive definite matrix.
    Comment: 13 pages
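
    The minimization described in the abstract can be explored numerically. The sketch below is an illustration under our own assumptions, not the paper's closed-form answer: it takes the hypothetical choice V ~ Exponential(1), builds the density of Z√V as a scale mixture of normals on a grid, and minimizes the squared L²(ℝ) distance to the N(0, t) density over t.

```python
import numpy as np
from scipy import optimize, stats
from scipy.integrate import trapezoid

# Hypothetical setup: V ~ Exponential(1).  The density of Z*sqrt(V) is a
# scale mixture of normals, f(x) = E[ phi(x; 0, V) ], computed on a grid.
x = np.linspace(-15.0, 15.0, 2001)      # grid on the real line
v = np.linspace(1e-4, 30.0, 600)        # truncated support of V
w = stats.expon.pdf(v)                  # density of V on the grid

phi = stats.norm.pdf(x[:, None], scale=np.sqrt(v)[None, :])
f_mix = trapezoid(phi * w[None, :], v, axis=1)

def l2_dist(t):
    """Squared L^2(R) distance between f_mix and the N(0, t) density."""
    g = stats.norm.pdf(x, scale=np.sqrt(t))
    return trapezoid((f_mix - g) ** 2, x)

res = optimize.minimize_scalar(l2_dist, bounds=(0.01, 10.0), method="bounded")
t0 = res.x                              # approximate minimizer
```

    The grids, the truncation of the support of V, and the choice of an exponential mixing law are all assumptions made for the illustration.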

    Wishart distributions for decomposable graphs

    When considering a graphical Gaussian model 𝒩_G Markov with respect to a decomposable graph G, the parameter space of interest for the precision parameter is the cone P_G of positive definite matrices with fixed zeros corresponding to the missing edges of G. The parameter space for the scale parameter of 𝒩_G is the cone Q_G, dual to P_G, of incomplete matrices with submatrices corresponding to the cliques of G being positive definite. In this paper we construct on the cones Q_G and P_G two families of Wishart distributions, namely the Type I and Type II Wisharts. They can be viewed as generalizations of the hyper Wishart and the inverse of the hyper inverse Wishart as defined by Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272-1317]. We show that the Type I and II Wisharts have properties similar to those of the hyper and hyper inverse Wishart. Indeed, the inverse of the Type II Wishart forms a conjugate family of priors for the covariance parameter of the graphical Gaussian model and is strong directed hyper Markov for every direction given to the graph by a perfect order of its cliques, while the Type I Wishart is weak hyper Markov. Moreover, the inverse Type II Wishart as a conjugate family presents the advantage of having a multidimensional shape parameter, thus offering flexibility in the choice of a prior.
    Comment: Published at http://dx.doi.org/10.1214/009053606000001235 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
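
    The two cones can be made concrete with a small sketch. The example graph and helper names below are our own: we take the decomposable path graph G on three vertices with edges {1,2} and {2,3}, so edge {1,3} is missing; membership in P_G requires positive definiteness plus the zero pattern, while membership in Q_G only constrains the clique submatrices.

```python
import numpy as np

# Toy illustration (graph and helper names are ours): path graph 1-2-3,
# with edge {1,3} missing.
def in_P_G(K, missing=((0, 2),), tol=1e-10):
    """K is in P_G: symmetric positive definite with zeros at missing edges."""
    pd = np.all(np.linalg.eigvalsh((K + K.T) / 2) > tol)
    zeros = all(abs(K[i, j]) < tol for i, j in missing)
    return bool(pd and zeros)

def in_Q_G(S, cliques=((0, 1), (1, 2)), tol=1e-10):
    """S is in Q_G: every clique submatrix is positive definite; entries
    outside the cliques are unconstrained ('incomplete' matrices)."""
    return all(np.all(np.linalg.eigvalsh(S[np.ix_(c, c)]) > tol) for c in cliques)

K = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])   # lies in P_G for this graph
```

    Since the inverse of a matrix in P_G is positive definite, its clique submatrices are positive definite as well, so it lies in Q_G once the non-clique entries are discarded.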

    A conjugate prior for discrete hierarchical log-linear models

    In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probabilities parameters is a major challenge. In this paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-linear models, which includes the class of graphical models. These priors are defined as the Diaconis-Ylvisaker conjugate priors on the log-linear parameters subject to "baseline constraints" under multinomial sampling. We also derive the induced prior on the cell probabilities and show that the induced prior is a generalization of the hyper Dirichlet prior. We show that this prior has several desirable properties and illustrate its usefulness by identifying the most probable decomposable, graphical and hierarchical log-linear models for a six-way contingency table.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS669 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
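
    For the saturated model the induced prior on cell probabilities reduces to an ordinary Dirichlet, so the conjugate update under multinomial sampling can be sketched directly; the table dimensions and prior weights below are our own toy choices, not values from the paper.

```python
import numpy as np

# Saturated-model special case: a Dirichlet prior on the cell probabilities
# of a flattened 2x2 table is conjugate to multinomial sampling, so the
# posterior just adds the observed counts to the prior weights.
alpha_prior = np.array([1.0, 1.0, 1.0, 1.0])   # uniform prior weights
counts = np.array([10, 3, 7, 5])               # observed cell counts
alpha_post = alpha_prior + counts              # conjugate update
post_mean = alpha_post / alpha_post.sum()      # posterior mean probabilities
```

    The paper's contribution is the generalization of this conjugacy to arbitrary hierarchical log-linear models, where the update is on the log-linear parameters rather than the raw cell probabilities.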

    Moments of minors of Wishart matrices

    For a random matrix following a Wishart distribution, we derive formulas for the expectation and the covariance matrix of compound matrices. The compound matrix of order m is populated by all m × m minors of the Wishart matrix. Our results yield first and second moments of the minors of the sample covariance matrix for multivariate normal observations. This work is motivated by the fact that such minors arise in the expression of constraints on the covariance matrix in many classical multivariate problems.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS522 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
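
    The compound-matrix construction itself is easy to sketch. The helper below (names and example matrices are ours) builds the order-m compound by enumerating m-subsets of rows and columns in lexicographic order; rather than the paper's Wishart moment formulas, the check used here is the deterministic multiplicativity property C_m(AB) = C_m(A) C_m(B), which follows from the Cauchy-Binet formula.

```python
import numpy as np
from itertools import combinations

def compound(W, m):
    """Order-m compound matrix: entries are the m x m minors of W, with
    rows and columns indexed by m-subsets in lexicographic order."""
    p = W.shape[0]
    idx = list(combinations(range(p), m))
    C = np.empty((len(idx), len(idx)))
    for a, rows in enumerate(idx):
        for b, cols in enumerate(idx):
            C[a, b] = np.linalg.det(W[np.ix_(rows, cols)])
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 1.0]])
```

    Applying `compound` to a sample Wishart matrix gives exactly the array of minors whose first and second moments the paper computes.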

    Flexible covariance estimation in graphical Gaussian models

    In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph G. Working with the W_{P_G} family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278-1323] we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The W_{P_G} family includes the classical inverse of the hyper inverse Wishart but has many more shape parameters, thus allowing for flexibility in differentially shrinking various parts of the covariance matrix. Moreover, using this family avoids recourse to MCMC, often infeasible in high-dimensional problems. We illustrate the performance of our estimators through a collection of numerical examples where we explore frequentist risk properties and the efficacy of graphs in the estimation of high-dimensional covariance structures.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS619 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
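
    As a simpler stand-in for the W_{P_G} family, which carries many shape parameters, the sketch below uses standard inverse-Wishart conjugacy for a full (unconstrained) covariance: under squared-error loss the Bayes estimator is the posterior mean, available in closed form without MCMC. The prior hyperparameters and simulated data are our own choices.

```python
import numpy as np

# Closed-form posterior-mean covariance estimate with an inverse-Wishart
# prior (a single-shape-parameter stand-in for the richer W_{P_G} family).
rng = np.random.default_rng(0)
p, n = 4, 50
Sigma_true = np.diag([1.0, 2.0, 3.0, 4.0])
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = X.T @ X                               # scatter matrix (mean known to be 0)

nu, Psi = p + 2.0, np.eye(p)              # prior: Sigma ~ IW(nu, Psi)
# Posterior is IW(nu + n, Psi + S); under squared-error loss the Bayes
# estimator is the posterior mean:
Sigma_hat = (Psi + S) / (nu + n - p - 1)
```

    The graphical W_{P_G} estimators of the paper refine this idea by shrinking different cliques and separators of the graph by different amounts.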

    An Expectation Formula for the Multivariate Dirichlet Distribution

    Suppose that the random vector (X₁, …, X_q) follows a Dirichlet distribution on ℝ₊^q with parameter (p₁, …, p_q) ∈ ℝ₊^q. For f₁, …, f_q > 0, it is well known that E[(f₁X₁ + … + f_qX_q)^{−(p₁+…+p_q)}] = f₁^{−p₁} ⋯ f_q^{−p_q}. In this paper, we generalize this expectation formula to the singular and non-singular multivariate Dirichlet distributions as follows. Let Ω_r denote the cone of all r × r positive-definite real symmetric matrices. For x ∈ Ω_r and 1 ≤ j ≤ r, let det_j x denote the jth principal minor of x. For s = (s₁, …, s_r) ∈ ℝ^r, the generalized power function of x ∈ Ω_r is the function Δ_s(x) = (det₁ x)^{s₁−s₂} (det₂ x)^{s₂−s₃} ⋯ (det_{r−1} x)^{s_{r−1}−s_r} (det_r x)^{s_r}; further, for any t ∈ ℝ, we denote by s + t the vector (s₁ + t, …, s_r + t). Suppose X₁, …, X_q ∈ Ω_r are random matrices such that (X₁, …, X_q) follows a multivariate Dirichlet distribution with parameters p₁, …, p_q. Then we evaluate the expectation E[Δ_{s₁}(X₁) ⋯ Δ_{s_q}(X_q) Δ_{s₁+…+s_q+p}((a + f₁X₁ + … + f_qX_q)^{−1})], where a ∈ Ω_r, p = p₁ + … + p_q, f₁, …, f_q > 0, and s₁, …, s_q each belong to an appropriate subset of ℝ₊^r. The result obtained is parallel to that given above for the univariate case, and remains valid even if some of the X_j's are singular. Our derivation utilizes the framework of symmetric cones, so that our results are valid for multivariate Dirichlet distributions on all symmetric cones.
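
    The classical scalar formula quoted at the start of the abstract is easy to check by simulation; the parameter values below are our own choices.

```python
import numpy as np

# Monte Carlo check of the scalar (r = 1) formula:
#   E[(f1 X1 + ... + fq Xq)^(-(p1+...+pq))] = f1^(-p1) ... fq^(-pq)
# for (X1, ..., Xq) ~ Dirichlet(p1, ..., pq).
rng = np.random.default_rng(42)
p = np.array([2.0, 3.0, 4.0])
f = np.array([1.5, 2.0, 0.5])
X = rng.dirichlet(p, size=400_000)       # rows are Dirichlet samples
mc = np.mean((X @ f) ** (-p.sum()))      # Monte Carlo estimate
exact = np.prod(f ** (-p))               # closed form: 16/18 ~ 0.8889 here
```

    Since f·X is bounded away from zero (between min f and max f on the simplex), the Monte Carlo average converges quickly.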

    Model selection in the space of Gaussian models invariant by symmetry

    We consider multivariate centred Gaussian models for the random variable Z = (Z₁, …, Z_p), invariant under the action of a subgroup of the group of permutations on {1, …, p}. Using the representation theory of the symmetric group over the field of reals, we derive the distribution of the maximum likelihood estimate of the covariance parameter Σ and also the analytic expression of the normalizing constant of the Diaconis-Ylvisaker conjugate prior for the precision parameter K = Σ⁻¹. We can thus perform Bayesian model selection in the class of complete Gaussian models invariant by the action of a subgroup of the symmetric group, which we could also call complete RCOP models. We illustrate our results with a toy example of dimension 4 and several examples for selection within cyclic groups, including a high-dimensional example with p = 100.
    Comment: 34 pages, 4 figures, 5 tables
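
    One standard way to enforce such a permutation symmetry, sketched below with our own toy group and data, is to project the sample covariance onto the invariant subspace by averaging over the group orbit, S_inv = (1/|Γ|) Σ_g P_g S P_gᵀ; the result is again invariant under every element of the group.

```python
import numpy as np

# Toy illustration: enforce invariance under the cyclic group C_4 acting
# on p = 4 coordinates by orbit-averaging the sample covariance.
p = 4
shift = np.roll(np.eye(p), 1, axis=0)    # permutation matrix generating C_4
group = [np.linalg.matrix_power(shift, k) for k in range(p)]

rng = np.random.default_rng(1)
X = rng.standard_normal((60, p))         # simulated centred observations
S = X.T @ X / X.shape[0]                 # sample covariance (known zero mean)
S_inv = sum(P @ S @ P.T for P in group) / len(group)
```

    Because the group is closed under composition, conjugating the average by the generator merely permutes the summands, which is why the projected matrix is exactly invariant.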