Mixtures of g-priors in Generalized Linear Models
Mixtures of Zellner's g-priors have been studied extensively in linear models
and have been shown to have numerous desirable properties for Bayesian variable
selection and model averaging. Several extensions of g-priors to Generalized
Linear Models (GLMs) have been proposed in the literature; however, the choice
of prior distribution of g and resulting properties for inference have received
considerably less attention. In this paper, we unify mixtures of g-priors in
GLMs by assigning the truncated Compound Confluent Hypergeometric (tCCH)
distribution to 1/(1 + g), which encompasses as special cases several mixtures
of g-priors in the literature, such as the hyper-g, Beta-prime, truncated
Gamma, incomplete inverse-Gamma, benchmark, robust, hyper-g/n, and intrinsic
priors. Through an integrated Laplace approximation, the posterior distribution
of 1/(1 + g) is in turn a tCCH distribution, and approximate marginal
likelihoods are thus available analytically, leading to "Compound
Hypergeometric Information Criteria" for model selection. We discuss the local
geometric properties of the g-prior in GLMs and show how the desiderata for
model selection proposed by Bayarri et al., such as asymptotic model selection
consistency, intrinsic consistency, and measurement invariance, may be used to
justify the prior and specific choices of the hyperparameters. We illustrate
inference using these priors and contrast them to other approaches via
simulation and real data examples. The methodology is implemented in the R
package BAS and is freely available on CRAN.
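As a usage illustration only (not taken from the paper), here is a hedged R sketch of fitting a GLM under a mixture of g-priors with BAS; bas.glm and prior constructors such as hyper.g() are documented in BAS, but exact argument names may differ across package versions, and the Pima.tr data from MASS is just an assumed example.

## Hedged sketch: Bayesian variable selection in a logistic GLM under a
## mixture of g-priors via the BAS package (argument names follow the CRAN
## documentation of BAS; they may vary across versions).
library(BAS)
data(Pima.tr, package = "MASS")  # assumed example data: diabetes status

fit <- bas.glm(type ~ .,                         # binary response, all predictors
               data       = Pima.tr,
               family     = binomial(),          # logistic regression
               betaprior  = hyper.g(alpha = 3),  # hyper-g mixture of g-priors
               modelprior = uniform())           # uniform prior over models

summary(fit)  # approximate posterior model and inclusion probabilities

Other priors unified by the tCCH family, such as robust() or intrinsic(), can be swapped in through the same betaprior argument where the installed version provides them.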
Meta-analysis of functional neuroimaging data using Bayesian nonparametric binary regression
In this work we perform a meta-analysis of neuroimaging data, consisting of
locations of peak activations identified in 162 separate studies on emotion.
Neuroimaging meta-analyses are typically performed using kernel-based methods.
However, these methods require the width of the kernel to be set a priori and
to be constant across the brain. To address these issues, we propose a fully
Bayesian nonparametric binary regression method to perform neuroimaging
meta-analyses. In our method, each location (or voxel) has a probability of
being a peak activation, and the corresponding probability function is based on
a spatially adaptive Gaussian Markov random field (GMRF). We also include
parameters in the model to robustify the procedure against miscoding of the
voxel response. Posterior inference is implemented using efficient MCMC
algorithms extended from those introduced in Holmes and Held [Bayesian Anal. 1
(2006) 145--168]. Our method allows the probability function to be locally
adaptive with respect to the covariates, that is, to be smooth in one region of
the covariate space and wiggly or even discontinuous in another. Posterior
miscoding probabilities for each of the identified voxels can also be obtained,
identifying voxels that may have been falsely classified as being activated.
Simulation studies and application to the emotion neuroimaging data indicate
that our method is superior to standard kernel-based methods.
Comment: Published at http://dx.doi.org/10.1214/11-AOAS523 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
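To convey the auxiliary-variable MCMC idea the paper builds on, here is a minimal R sketch of Albert-Chib-style Gibbs sampling for Bayesian probit regression, the simpler precursor that Holmes and Held (2006) refine. This is an assumed illustration, not the paper's spatially adaptive GMRF model; prior_prec is a hypothetical ridge-type prior precision.

## Minimal sketch (not the paper's GMRF model): auxiliary-variable Gibbs
## sampler for Bayesian probit regression with a N(0, prior_prec^{-1} I)
## prior on the coefficients.
probit_gibbs <- function(X, y, n_iter = 2000, prior_prec = 1e-2) {
  p <- ncol(X)
  V <- solve(crossprod(X) + diag(prior_prec, p))  # cov of beta | z
  R <- chol(V)                                    # for multivariate normal draws
  beta <- rep(0, p)
  draws <- matrix(NA_real_, n_iter, p)
  for (t in seq_len(n_iter)) {
    eta <- drop(X %*% beta)
    ## Latent z_i ~ N(eta_i, 1), truncated to (0, Inf) if y_i = 1 and to
    ## (-Inf, 0) if y_i = 0, sampled by inverse-CDF.
    cut <- pnorm(0, eta, 1)
    u <- runif(length(y), ifelse(y == 1, cut, 0), ifelse(y == 1, 1, cut))
    z <- qnorm(u, eta, 1)
    ## Conjugate update: beta | z ~ N(V X'z, V).
    beta <- drop(V %*% crossprod(X, z) + t(R) %*% rnorm(p))
    draws[t, ] <- beta
  }
  draws
}

A call like probit_gibbs(cbind(1, x), y) returns posterior draws of the coefficients; in the paper's setting, the GMRF smoothness parameters and the miscoding indicators would enter as additional Gibbs blocks.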
Testing hypotheses via a mixture estimation model
We consider a novel paradigm for Bayesian testing of hypotheses and Bayesian
model comparison. Our alternative to the traditional construction of posterior
probabilities that a given hypothesis is true or that the data originates from
a specific model is to consider the models under comparison as components of a
mixture model. We therefore replace the original testing problem with an
estimation problem that focuses on the probability weight of a given model
within the mixture. We analyze the sensitivity of the resulting posterior
distribution of the weights to various prior modeling choices. We stress
that a major appeal of this novel perspective is that generic improper
priors are acceptable without jeopardizing convergence. Among other
features, this allows for a resolution of the Lindley-Jeffreys paradox. When
using a reference Beta B(a,a) prior on the mixture weights, we note that the
sensitivity of the posterior estimates of the weights to the choice of a
vanishes as the sample size increases, and we advocate the default choice
a = 0.5, derived from Rousseau and Mengersen (2011). Another feature of this easily
implemented alternative to the classical Bayesian solution is that the speeds
of convergence of the posterior mean of the weight and of the corresponding
posterior probability are quite similar.
Comment: 25 pages, 6 figures, 2 tables
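As an assumed toy illustration of this paradigm (not one of the paper's examples): testing N(0,1) against N(mu,1) with mu known can be recast as estimating the weight alpha of the first component in the mixture alpha N(0,1) + (1 - alpha) N(mu,1), with the Beta B(a,a) prior on alpha and the default a = 0.5, via a short R Gibbs sampler.

## Hedged toy sketch of testing-as-mixture-estimation: estimate the weight
## alpha of M0: N(0,1) against M1: N(mu,1), with alpha ~ Beta(a, a).
mixture_test <- function(x, mu = 1, a = 0.5, n_iter = 5000) {
  n <- length(x)
  alpha <- 0.5
  draws <- numeric(n_iter)
  for (t in seq_len(n_iter)) {
    p0 <- alpha * dnorm(x, 0, 1)         # component M0: N(0, 1)
    p1 <- (1 - alpha) * dnorm(x, mu, 1)  # component M1: N(mu, 1)
    z0 <- rbinom(n, 1, p0 / (p0 + p1))   # latent allocation to M0
    alpha <- rbeta(1, a + sum(z0), a + n - sum(z0))  # conjugate Beta update
    draws[t] <- alpha
  }
  draws
}

set.seed(1)
post <- mixture_test(rnorm(100))  # data generated under M0
mean(post)                        # posterior mean weight of M0, drifting toward 1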