Robust Bayesian Regression with Synthetic Posterior
Although linear regression models are fundamental tools in statistical
science, the estimation results can be sensitive to outliers. While several
robust methods have been proposed in frequentist frameworks, statistical
inference is not necessarily straightforward. Here we propose a Bayesian
approach to robust inference on linear regression models using synthetic
posterior distributions based on the γ-divergence, which enables us to
naturally assess the uncertainty of the estimation through the posterior
distribution. We also consider the use of shrinkage priors for the regression
coefficients to carry out robust Bayesian variable selection and estimation
simultaneously. We develop an efficient posterior computation algorithm by
adopting the Bayesian bootstrap within Gibbs sampling. The performance of the
proposed method is illustrated through simulation studies and applications to
famous datasets.
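To make the construction concrete, here is a minimal sketch, assuming a normal linear model, a flat prior, and the γ-loss of Fujisawa and Eguchi: each posterior draw maximizes a Dirichlet-weighted γ-loss, a weighted-bootstrap scheme in the spirit of the Bayesian bootstrap. The function names and the fixed value γ = 0.3 are illustrative, and the paper's shrinkage priors and Gibbs steps are omitted.

    # Minimal sketch, not the paper's algorithm: Bayesian-bootstrap draws from a
    # γ-divergence synthetic posterior for normal linear regression.
    import numpy as np
    from scipy.optimize import minimize

    def neg_gamma_loss(params, X, y, w, gam):
        """Negative weighted γ-loss for y ~ N(X beta, sigma^2)."""
        beta, sigma = params[:-1], np.exp(params[-1])
        resid = y - X @ beta
        f_gam = (2 * np.pi * sigma**2) ** (-gam / 2) * np.exp(-gam * resid**2 / (2 * sigma**2))
        # closed-form integral of the normal density raised to the power 1 + gam
        c = (2 * np.pi * sigma**2) ** (-gam / 2) / np.sqrt(1 + gam)
        return -np.sum(w * f_gam) / (gam * c ** (gam / (1 + gam)))

    def synthetic_posterior_draws(X, y, n_draws=200, gam=0.3, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        draws = []
        for _ in range(n_draws):
            w = n * rng.dirichlet(np.ones(n))  # Bayesian-bootstrap weights
            fit = minimize(neg_gamma_loss, np.zeros(p + 1),
                           args=(X, y, w, gam), method="Nelder-Mead")
            draws.append(fit.x)
        return np.array(draws)  # columns: beta_1, ..., beta_p, log(sigma)

Each maximizer of the weighted γ-loss is one approximate posterior draw; outliers contribute only through the exponentially small factor f(y_i)^γ, which is what makes the resulting inference robust.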
On default priors for robust Bayesian estimation with divergences
This paper presents objective priors for robust Bayesian estimation against
outliers based on divergences. The minimum γ-divergence estimator is
well known to work well for estimation under heavy contamination. Robust
Bayesian methods using quasi-posterior distributions based on divergences
have also been proposed in recent years. In the objective Bayesian framework, the
selection of default prior distributions under such quasi-posterior
distributions is an important problem. In this study, we provide some
properties of reference and moment matching priors under the quasi-posterior
distribution based on the γ-divergence. In particular, we show that the
proposed priors are approximately robust under a condition on the
contamination distribution, without assuming any conditions on the contamination
ratio. Some simulation studies are also presented.
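For reference, the quasi-posterior in question has the generic Gibbs-posterior form; in standard notation (a sketch, with symbols assumed rather than taken from the paper),

    \pi_\gamma(\theta \mid y_{1:n}) \propto \pi(\theta)
      \exp\Bigl\{ \sum_{i=1}^{n} \ell_\gamma(y_i; \theta) \Bigr\},
    \qquad
    \ell_\gamma(y; \theta) = \frac{1}{\gamma}
      \frac{f(y; \theta)^{\gamma}}{\bigl( \int f(t; \theta)^{1+\gamma} \, dt \bigr)^{\gamma/(1+\gamma)}},

where \pi(\theta) is the default prior whose reference and moment matching versions are studied.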
Adaptation of the Tuning Parameter in General Bayesian Inference with Robust Divergence
We introduce a methodology for robust Bayesian estimation with robust
divergence (e.g., density power divergence or γ-divergence), indexed by
a single tuning parameter. It is well known that the posterior density induced
by robust divergence gives highly robust estimators against outliers if the
tuning parameter is chosen appropriately. In a Bayesian
framework, one way to find the optimal tuning parameter would be to use the
evidence (marginal likelihood). However, we illustrate numerically that the
evidence induced by the density power divergence does not work for selecting
the optimal tuning parameter, since a robust divergence is not regarded as a
statistical model. To overcome this problem, we treat the exponential of a robust divergence as an
unnormalized statistical model, and we estimate the tuning parameter via
minimizing the Hyvärinen score. We also provide adaptive computational methods
based on sequential Monte Carlo (SMC) samplers, which enable us to obtain the
optimal tuning parameter and samples from the posterior distribution
simultaneously. The empirical performance of the proposed method is
illustrated through simulations and an application to real data.
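As a toy illustration of the scoring idea, not the paper's SMC procedure: for a contaminated normal location model, treat exp{ℓ_β(y; θ)} as an unnormalized density in y, evaluate the Hyvärinen score 2 ∂²_y log q + (∂_y log q)² by finite differences, and choose the density power divergence parameter β that minimizes the total score at a plug-in robust fit. All function names, the MAD plug-in shortcut, and the grid of β values are assumptions made for this sketch.

    # Toy sketch: select the DPD tuning parameter β by minimizing the Hyvärinen
    # score of exp{ℓ_β(y; θ)} viewed as an unnormalized density in y.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def dpd_logq(y, mu, sigma, beta):
        """DPD loss term ℓ_β, read as the log of an unnormalized density in y."""
        f = np.exp(-(y - mu) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
        c = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)  # integral of f^(1+beta)
        return f**beta / beta - c / (1 + beta)

    def hyvarinen_score(y, mu, sigma, beta, h=1e-3):
        """Sum over observations of 2 * d2/dy2 log q + (d/dy log q)^2, by central differences."""
        lp = lambda t: dpd_logq(t, mu, sigma, beta)
        grad = (lp(y + h) - lp(y - h)) / (2 * h)
        lap = (lp(y + h) - 2 * lp(y) + lp(y - h)) / h**2
        return np.sum(2 * lap + grad**2)

    def fit_dpd_location(y, beta):
        """Plug-in robust fit: maximize the summed DPD loss over mu, sigma fixed at the MAD."""
        sigma = 1.4826 * np.median(np.abs(y - np.median(y)))
        mu = minimize_scalar(lambda m: -np.sum(dpd_logq(y, m, sigma, beta))).x
        return mu, sigma

    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 8.0)])  # 5% outliers
    betas = np.linspace(0.05, 1.0, 20)
    scores = [hyvarinen_score(y, *fit_dpd_location(y, b), b) for b in betas]
    print("selected beta:", betas[int(np.argmin(scores))])

The paper's adaptive SMC samplers instead update the tuning parameter and the posterior samples jointly; the grid-plus-plug-in loop above only mimics the selection criterion.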