
    Algebraic Bayesian analysis of contingency tables with possibly zero-probability cells

    In this paper we consider a Bayesian analysis of contingency tables allowing for the possibility that cells may have probability zero. In this sense we depart from standard log-linear modeling, which implicitly assumes a positivity constraint. Our approach leads us to consider mixture models for contingency tables, where the components of the mixture, which we call model-instances, have distinct support. We rely on ideas from polynomial algebra in order to identify the various model instances. We also provide a method to assign prior probabilities to each instance of the model, as well as methods for constructing priors on the parameter space of each instance. We illustrate our methodology through a 5 × 2 table involving two structural zeros, as well as a zero count. The results we obtain show that our analysis may lead to conclusions that are substantively different from those that would be reached in a standard framework, wherein the possibility of zero-probability cells is not explicitly accounted for.
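    A schematic rendering of the mixture structure described above, in generic notation not taken from the paper: write n for the vector of cell counts and let each model-instance j have support S_j (a subset of the cells) and prior probability w_j; the sampling distribution is then
    \[
      p(n) \;=\; \sum_{j} w_j\, p_j(n \mid \theta_j), \qquad
      \theta_{j,c} > 0 \ \text{for } c \in S_j, \qquad
      \theta_{j,c} = 0 \ \text{for } c \notin S_j,
    \]
    so that cells outside S_j receive probability exactly zero under instance j, rather than the strictly positive probability a standard log-linear model would impose.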

    Bayesian model comparison based on expected posterior priors for discrete decomposable graphical models

    The implementation of the Bayesian paradigm for model comparison can be problematic. In particular, prior distributions on the parameter space of each candidate model require special care. While it is well known that improper priors cannot be used routinely for Bayesian model comparison, we claim that, in general, the use of conventional priors (proper or improper) for model comparison should be regarded as suspicious, especially when comparing models having different dimensions. The basic idea is that priors should not be assigned separately under each model; rather, they should be related across models in order to acquire some degree of compatibility, and thus allow fairer and more robust comparisons. In this connection, the Expected Posterior Prior (EPP) methodology represents a useful tool. In this paper we develop a procedure based on EPPs to perform Bayesian model comparison for discrete undirected decomposable graphical models, although our method could also be adapted to Directed Acyclic Graph models. We present two possible approaches. The first, based on imaginary data, requires singling out a base model; it is conceptually appealing and is also attractive for the communication of results in terms of plausible ranges for posterior quantities of interest. The second approach makes use of training samples from the actual data to construct the EPP. It is universally applicable, but has limited flexibility due to its inherent double use of the data. The methodology is illustrated through the analysis of a 2 × 3 × 4 contingency table.
    Keywords: Bayes factor; Clique; Conjugate family; Contingency table; Decomposable model; Imaginary data; Importance sampling; Robustness; Training sample.
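    For reference, the expected posterior prior under a model M_k has the standard form (generic notation, not specific to the graphical-model setting of the paper)
    \[
      \pi^{\mathrm{EPP}}_k(\theta_k) \;=\; \int \pi^{N}_k(\theta_k \mid y^{*})\, m^{*}(y^{*})\, dy^{*},
    \]
    where \pi^{N}_k is a default (possibly improper) prior under M_k, y^{*} denotes imaginary or training data, and m^{*} is a predictive distribution common to all models. Taking m^{*} from a base model corresponds to the imaginary-data approach described above; taking it to be the empirical distribution of training samples drawn from the observed data corresponds to the second approach.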

    Moment Priors for Bayesian Model Choice with Applications to Directed Acyclic Graphs

    We propose a new method for the objective comparison of two nested models based on non-local priors. More specifically, starting with a default prior under each of the two models, we construct a moment prior under the larger model and then use the fractional Bayes factor for the comparison. Non-local priors have recently been introduced to obtain a better separation between nested models, thus accelerating the learning behaviour, relative to currently used local priors, when the smaller model holds. Although the argument showing the superior performance of non-local priors is asymptotic, the improvement they produce is already apparent for small to moderate sample sizes, which makes them a useful and practical tool. As a by-product, it turns out that routinely used objective methods, such as ordinary fractional Bayes factors, are alarmingly slow in learning that the smaller model holds. On the downside, when the larger model holds, non-local priors exhibit a weaker discriminatory power against sampling distributions close to the smaller model. However, this drawback becomes rapidly negligible as the sample size grows, because the learning rate of the Bayes factor under the larger model is exponentially fast, whether one uses local or non-local priors. We apply our methodology to directed acyclic graph models having a Gaussian distribution. Because of the recursive nature of the joint density, and the assumption of global parameter independence embodied in our prior, calculations need only be performed for individual vertices admitting a distinct parent structure under the two graphs; additionally, we obtain closed-form expressions, as in the ordinary conjugate case. We provide illustrations of our method for a simple three-variable case, as well as for a more elaborate seven-variable situation. Although we concentrate on pairwise comparisons of nested models, our procedure can be implemented to carry out a search over the space of all models.
    Keywords: Fractional Bayes factor; Gaussian graphical model; Non-local prior; Objective Bayes network; Stochastic search; Structural learning.
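    As a hedged illustration of the moment-prior construction in the scalar case (generic notation, not the paper's): given a default local prior \pi_D(\theta) under the larger model and a null value \theta_0 identifying the smaller model, a moment prior of order h is
    \[
      \pi_M(\theta) \;\propto\; (\theta - \theta_0)^{2h}\, \pi_D(\theta).
    \]
    Because \pi_M vanishes at \theta_0, it places negligible mass near the smaller model, which is what accelerates learning when the smaller model holds; the comparison itself is then carried out with a fractional Bayes factor, as stated above.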

    Objective Bayes Factors for Gaussian Directed Acyclic Graphical Models

    We propose an objective Bayesian method for the comparison of all Gaussian directed acyclic graphical models defined on a given set of variables. The method, which is based on the notion of fractional Bayes factor, requires a single default (typically improper) prior on the space of unconstrained covariance matrices, together with a prior sample size hyper-parameter, which can be set to its minimal value. We show that our approach produces genuine Bayes factors. The implied prior on the concentration matrix of any complete graph is a data-dependent Wishart distribution, and this in turn guarantees that Markov equivalent graphs are scored with the same marginal likelihood. We specialize our results to the smaller class of Gaussian decomposable undirected graphical models, and show that in this case they coincide with those recently obtained using limiting versions of hyper-inverse Wishart distributions as priors on the graph-constrained covariance matrices.
    Keywords: Bayes factor; Bayesian model selection; Directed acyclic graph; Exponential family; Fractional Bayes factor; Gaussian graphical model; Objective Bayes; Standard conjugate prior; Structural learning.
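    For context, the fractional Bayes factor of O'Hagan, on which the method rests, compares each marginal likelihood with its counterpart computed from a fraction b of the likelihood (generic notation; b is governed by the prior sample size hyper-parameter mentioned above):
    \[
      B^{F}_{10}(b) \;=\;
      \frac{\int f_1(x \mid \theta_1)\,\pi_1(\theta_1)\, d\theta_1}
           {\int f_1(x \mid \theta_1)^{b}\,\pi_1(\theta_1)\, d\theta_1}
      \Bigg/
      \frac{\int f_0(x \mid \theta_0)\,\pi_0(\theta_0)\, d\theta_0}
           {\int f_0(x \mid \theta_0)^{b}\,\pi_0(\theta_0)\, d\theta_0}.
    \]
    The fractional updating is what allows the single default prior to be improper while still yielding well-defined Bayes factors.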

    Testing Hardy-Weinberg Equilibrium: an Objective Bayesian Analysis

    We analyze the general (multiallelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. We argue that for small or moderate sample sizes the answer is rather sensitive to the prior chosen, and this suggests carrying out a sensitivity analysis with respect to the prior. This goal is achieved through the identification of a class of priors specifically designed for this testing problem. In this paper we consider the class of intrinsic priors under the full model, indexed by a tuning quantity, the training sample size. These priors are objective, satisfy Savage’s continuity condition and have proved to behave extremely well for many statistical testing problems. We compute the posterior probability of the Hardy-Weinberg equilibrium model for the class of intrinsic priors, assess robustness over the range of plausible answers, and examine the stability of the decision in favor of either hypothesis.
    Keywords: Bayes factor; Hardy-Weinberg equilibrium; Intrinsic prior; Model posterior probability; Robustness.
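    For reference, in the multiallelic case with r alleles of frequencies p_1, ..., p_r, the Hardy-Weinberg equilibrium model constrains the genotype probabilities to
    \[
      p_{ii} = p_i^{2}, \qquad p_{ij} = 2\, p_i\, p_j \quad (i < j),
    \]
    whereas the full model lets the genotype probabilities vary freely on the simplex. The posterior probability of the equilibrium model then follows from the Bayes factor in the usual way, P(M_0 \mid \text{data}) = \{1 + [P(M_1)/P(M_0)]\, B_{10}\}^{-1} (notation ours, not the paper's).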