Fully Bayesian Penalized Regression with a Generalized Bridge Prior
We consider penalized regression models under a unified framework. The
particular method is determined by the form of the penalty term, which is
typically chosen by cross validation. We introduce a fully Bayesian approach
that incorporates both sparse and dense settings and show how to use a type of
model averaging approach to eliminate the nuisance penalty parameters and
perform inference through the marginal posterior distribution of the regression
coefficients. We establish tail robustness of the resulting estimator as well
as conditional and marginal posterior consistency for the Bayesian model. We
develop a component-wise Markov chain Monte Carlo algorithm for sampling.
Numerical results show that the method tends to select the optimal penalty,
performs well in both variable selection and prediction, and is comparable to,
and often better than, alternative methods. Both simulated and real data
examples are provided.
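
The paper's own sampler and its model-averaging elimination of the penalty parameters are not reproduced here. Purely as an illustrative sketch of component-wise MCMC for a bridge-type penalty, the following Python code updates one coefficient at a time with a random-walk Metropolis step targeting a posterior proportional to a Gaussian likelihood times exp(-lam * sum |beta_j|^q); the function and parameter names (bridge_log_post, lam, q, step) and the fixed error variance are assumptions, not the authors' algorithm.

```python
import numpy as np

def bridge_log_post(beta, X, y, lam, q, sigma2=1.0):
    """Log posterior (up to a constant): Gaussian likelihood plus a
    bridge-type penalty -lam * sum |beta_j|^q (error variance held fixed)."""
    resid = y - X @ beta
    return -0.5 * resid @ resid / sigma2 - lam * np.sum(np.abs(beta) ** q)

def componentwise_mcmc(X, y, lam=1.0, q=1.5, n_iter=5000, step=0.1, seed=0):
    """Component-wise random-walk Metropolis: one coefficient at a time."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    cur = bridge_log_post(beta, X, y, lam, q)
    for t in range(n_iter):
        for j in range(p):
            prop = beta.copy()
            prop[j] += step * rng.standard_normal()
            new = bridge_log_post(prop, X, y, lam, q)
            if np.log(rng.random()) < new - cur:  # Metropolis accept/reject
                beta, cur = prop, new
        draws[t] = beta
    return draws
```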
Sufficient burn-in for Gibbs samplers for a hierarchical random effects model
We consider Gibbs and block Gibbs samplers for a Bayesian hierarchical
version of the one-way random effects model. Drift and minorization conditions
are established for the underlying Markov chains. The drift and minorization
are used in conjunction with results from J. S. Rosenthal [J. Amer. Statist.
Assoc. 90 (1995) 558-566] and G. O. Roberts and R. L. Tweedie [Stochastic
Process. Appl. 80 (1999) 211-229] to construct analytical upper bounds on the
distance to stationarity. These lead to upper bounds on the amount of burn-in
that is required to get the chain within a prespecified (total variation)
distance of the stationary distribution. The results are illustrated with a
numerical example.
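
As a point of reference for the kind of sampler the paper analyzes, here is a minimal Gibbs sampler for a Bayesian one-way random effects model, assuming a flat prior on the grand mean and inverse-gamma(a, b) priors on both variance components; the exact hierarchy and priors studied in the paper may differ, and the drift/minorization burn-in bounds themselves are not implemented here.

```python
import numpy as np

def gibbs_one_way(y, n_iter=5000, a=2.0, b=1.0, seed=0):
    """Gibbs sampler for y[i][j] ~ N(theta_i, sig2_e), theta_i ~ N(mu, sig2_t),
    with a flat prior on mu and InvGamma(a, b) priors on both variances
    (assumed conjugate choices, not necessarily those of the paper).
    y is a list of arrays, one array of observations per group."""
    rng = np.random.default_rng(seed)
    K = len(y)
    n_i = np.array([len(yi) for yi in y])
    ybar = np.array([np.mean(yi) for yi in y])
    N = n_i.sum()
    theta, mu, sig2_t, sig2_e = ybar.copy(), ybar.mean(), 1.0, 1.0
    out = []
    for _ in range(n_iter):
        # group means: normal full conditional
        prec = n_i / sig2_e + 1.0 / sig2_t
        mean = (n_i * ybar / sig2_e + mu / sig2_t) / prec
        theta = mean + rng.standard_normal(K) / np.sqrt(prec)
        # grand mean: normal full conditional under a flat prior
        mu = rng.normal(theta.mean(), np.sqrt(sig2_t / K))
        # variance components: inverse-gamma full conditionals
        sig2_t = 1.0 / rng.gamma(a + K / 2, 1.0 / (b + 0.5 * np.sum((theta - mu) ** 2)))
        sse = sum(np.sum((np.asarray(yi) - th) ** 2) for yi, th in zip(y, theta))
        sig2_e = 1.0 / rng.gamma(a + N / 2, 1.0 / (b + 0.5 * sse))
        out.append((mu, sig2_t, sig2_e))
    return np.array(out)
```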
Batch means and spectral variance estimators in Markov chain Monte Carlo
Calculating a Monte Carlo standard error (MCSE) is an important step in the
statistical analysis of the simulation output obtained from a Markov chain
Monte Carlo experiment. An MCSE is usually based on an estimate of the variance
of the asymptotic normal distribution. We consider spectral and batch means
methods for estimating this variance. In particular, we establish conditions
which guarantee that these estimators are strongly consistent as the simulation
effort increases. In addition, for the batch means and overlapping batch means
methods we establish conditions ensuring consistency in the mean-square sense
which in turn allows us to calculate the optimal batch size up to a constant of
proportionality. Finally, we examine the empirical finite-sample properties of
spectral variance and batch means estimators and provide recommendations for
practitioners.
Comment: Published at http://dx.doi.org/10.1214/09-AOS735 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
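
For readers who want the basic construction, the sketch below computes a non-overlapping batch means estimate of the MCSE of a sample mean in Python; the default batch size floor(sqrt(n)) is a common heuristic, not the optimal choice whose rate is derived in the paper.

```python
import numpy as np

def batch_means_mcse(x, batch_size=None):
    """Non-overlapping batch means estimate of the Monte Carlo standard error
    of the sample mean of a scalar MCMC output sequence x.
    batch_size defaults to floor(sqrt(n)), a common (not optimal) choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    b = batch_size or int(np.sqrt(n))
    a = n // b                                   # number of full batches
    means = x[:a * b].reshape(a, b).mean(axis=1)  # batch means
    # Estimate of the asymptotic variance sigma^2 in the Markov chain CLT
    var_hat = b * np.sum((means - means.mean()) ** 2) / (a - 1)
    return np.sqrt(var_hat / n)                  # MCSE of the overall sample mean
```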