Robust Modeling Using Non-Elliptically Contoured Multivariate t Distributions
Models based on multivariate t distributions are widely applied to analyze
data with heavy tails. However, all the marginal distributions of the
multivariate t distributions are restricted to have the same degrees of
freedom, making these models unable to describe different marginal
heavy-tailedness. We generalize the traditional multivariate t distributions to
non-elliptically contoured multivariate t distributions, allowing for different
marginal degrees of freedom. We apply the non-elliptically contoured
multivariate t distributions to three widely-used models: the Heckman selection
model with different degrees of freedom for selection and outcome equations,
the multivariate Robit model with different degrees of freedom for marginal
responses, and the linear mixed-effects model with different degrees of freedom
for random effects and within-subject errors. Based on the Normal mixture
representation of our t distribution, we propose efficient Bayesian inferential
procedures for the model parameters based on data augmentation and parameter
expansion. We show via simulation studies and real examples that the
conclusions are sensitive to the existence of different marginal
heavy-tailedness.
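The normal scale-mixture idea above can be sketched in a few lines: a classical t variate is a Gaussian divided by the square root of an independent Gamma(ν/2, ν/2) weight, and giving each coordinate its *own* mixing weight yields a non-elliptically contoured t with different marginal degrees of freedom. This is a minimal illustrative sketch of that construction, not the paper's exact model; the function name and parametrization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noncontoured_t(mean, cov, dfs, size, rng):
    """Draw from a multivariate t whose j-th marginal has dfs[j] degrees
    of freedom: a shared correlated Gaussian is divided coordinate-wise by
    the square root of independent Gamma(df/2, rate=df/2) mixing weights.
    (Illustrative sketch only -- the paper's construction may differ.)"""
    d = len(mean)
    z = rng.multivariate_normal(np.zeros(d), cov, size=size)
    dfs = np.asarray(dfs, dtype=float)
    # NumPy's gamma takes (shape, scale); rate df/2 means scale 2/df.
    w = rng.gamma(dfs / 2.0, 2.0 / dfs, size=(size, d))
    return np.asarray(mean) + z / np.sqrt(w)

# Marginal 0 has 3 df (heavy tails), marginal 1 has 30 df (near-Gaussian).
x = noncontoured_t(np.zeros(2), np.array([[1.0, 0.5], [0.5, 1.0]]),
                   dfs=[3.0, 30.0], size=100_000, rng=rng)
```

With equal `dfs` this reduces (up to the per-coordinate versus shared weight distinction) to the usual elliptical t; with unequal `dfs` the extreme quantiles of the first marginal are visibly larger than those of the second, which is exactly the differing marginal heavy-tailedness the abstract describes.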
Robust Bayesian inference via coarsening
The standard approach to Bayesian inference is based on the assumption that
the distribution of the data belongs to the chosen model class. However, even a
small violation of this assumption can have a large impact on the outcome of a
Bayesian procedure. We introduce a simple, coherent approach to Bayesian
inference that improves robustness to perturbations from the model: rather than
condition on the data exactly, one conditions on a neighborhood of the
empirical distribution. When using neighborhoods based on relative entropy
estimates, the resulting "coarsened" posterior can be approximated by simply
tempering the likelihood---that is, by raising it to a fractional power---thus,
inference is often easily implemented with standard methods, and one can even
obtain analytical solutions when using conjugate priors. Some theoretical
properties are derived, and we illustrate the approach with real and simulated
data, using mixture models, autoregressive models of unknown order, and
variable selection in linear regression.
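The tempering observation above is easy to make concrete with a conjugate prior: raising a Bernoulli likelihood to a fractional power ζ preserves Beta conjugacy, simply discounting the observed counts by ζ. The sketch below is an assumed minimal example (function name and the choice to treat ζ as a free tuning parameter are mine); in the coarsening framework ζ would be tied to the size of the relative-entropy neighborhood.

```python
import numpy as np

def coarsened_beta_posterior(x, a=1.0, b=1.0, zeta=0.5):
    """Tempered ("coarsened") posterior for a Bernoulli success rate.
    Raising the likelihood to the power zeta in (0, 1] keeps the Beta
    family conjugate: the sufficient statistics are scaled by zeta.
    Returns the updated Beta(a', b') parameters."""
    x = np.asarray(x)
    s, n = x.sum(), len(x)
    return a + zeta * s, b + zeta * (n - s)

# zeta = 1 recovers the standard posterior; zeta < 1 downweights the data,
# widening the posterior and buffering it against model misspecification.
standard = coarsened_beta_posterior([1, 1, 0, 1], zeta=1.0)   # Beta(4, 2)
coarse = coarsened_beta_posterior([1, 1, 0, 1], zeta=0.5)     # Beta(2.5, 1.5)
```

The same trick applies to any exponential-family/conjugate pair, which is why the abstract notes that coarsened inference often admits analytical solutions.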
Prior distributions for objective Bayesian analysis
We provide a review of prior distributions for objective Bayesian analysis. We start by examining some foundational issues and then organize our exposition into priors for: i) estimation or prediction; ii) model selection; iii) high-dimensional models. With regard to i), we present some basic notions, and then move to more recent contributions on discrete parameter spaces, hierarchical models, nonparametric models, and penalizing-complexity priors. Point ii) is the focus of this paper: it discusses principles for objective Bayesian model comparison, and singles out some major concepts for building priors, which are subsequently illustrated in some detail for the classic problem of variable selection in normal linear models. We also present some recent contributions in the area of objective priors on model space. With regard to point iii), we only provide a short summary of some default priors for high-dimensional models, a rapidly growing area of research.