Efficient computational strategies for doubly intractable problems with applications to Bayesian social networks
Powerful ideas that have recently appeared in the literature are adjusted and combined to design improved samplers for Bayesian exponential random graph models. Different forms of adaptive Metropolis-Hastings proposals (vertical, horizontal and rectangular) are tested and combined with the delayed rejection (DR) strategy with the aim of reducing the variance of the resulting Markov chain Monte Carlo estimators for a given computational time. In the examples treated in this paper the best combination, namely horizontal adaptation with delayed rejection, leads to a variance reduction that varies between 92% and 144% relative to the adaptive direction sampling approximate exchange algorithm of Caimo and Friel (2011). These results correspond to an increase in performance ranging from 10% to 94% once simulation time is taken into account. The highest improvements are obtained when highly correlated posterior distributions are considered.
Comment: 23 pages, 8 figures. Accepted to appear in Statistics and Computing
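The delayed rejection idea used above can be illustrated on a toy univariate target (this is a generic symmetric-proposal sketch in the Tierney and Mira form, not the paper's ERGM sampler; the target, scales and function names are illustrative assumptions):

```python
import numpy as np

def dr_mh(log_pi, x0, n_iter, scale1=2.5, scale2=0.5, seed=0):
    """Two-stage delayed-rejection random-walk Metropolis (toy sketch).

    A bold first-stage move is tried; on rejection, a timid second-stage
    move is tried with an acceptance ratio that preserves detailed balance.
    """
    rng = np.random.default_rng(seed)
    # Stage-1 proposal log-density (unnormalised; constants cancel in ratios).
    log_n1 = lambda u: -0.5 * (u / scale1) ** 2
    x, lp_x = x0, log_pi(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y1 = x + scale1 * rng.standard_normal()
        lp_y1 = log_pi(y1)
        a1 = min(1.0, np.exp(lp_y1 - lp_x))
        if rng.random() < a1:
            x, lp_x = y1, lp_y1
        else:
            # Second stage: conservative proposal; the acceptance ratio
            # accounts for the rejected first-stage move.
            y2 = x + scale2 * rng.standard_normal()
            lp_y2 = log_pi(y2)
            a1_back = min(1.0, np.exp(lp_y1 - lp_y2))  # stage-1 acceptance of y1 from y2
            num = np.exp(lp_y2 + log_n1(y1 - y2)) * (1.0 - a1_back)
            den = np.exp(lp_x + log_n1(y1 - x)) * (1.0 - a1)
            if den > 0 and rng.random() < min(1.0, num / den):
                x, lp_x = y2, lp_y2
        chain[i] = x
    return chain

# Standard normal target: the chain should recover mean ~0 and sd ~1.
draws = dr_mh(lambda x: -0.5 * x * x, x0=0.0, n_iter=20000)
```

The second stage rescues moves that a plain Metropolis-Hastings chain would waste as rejections, which is the source of the variance reduction discussed above.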
Efficient estimate of Bayes factors from Reversible Jump output
We extend the Meng and Wong (1996) identity from a fixed- to a varying-dimensional setting. The identity is a very powerful tool for estimating ratios of normalizing constants and can therefore be used to evaluate Bayes factors. The extension is driven by the reversible jump algorithm, so that the output from the sampler can be used directly to estimate the required Bayes factor efficiently. Two applications, involving linear and logistic regression models, illustrate the advantages of the suggested approach over alternatives previously proposed in the literature.
Keywords: Bayes factor; Bayesian model choice; Marginal likelihood; Markov chain Monte Carlo; Reversible jump
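The fixed-dimensional identity being extended can be sketched numerically. With unnormalised densities q1, q2 and the geometric bridge function alpha = (q1*q2)^(-1/2), the Meng and Wong identity gives c1/c2 = E_p2[sqrt(q1/q2)] / E_p1[sqrt(q2/q1)]. A minimal sketch with two Gaussians of known normalizing constants (i.i.d. draws stand in for sampler output; the densities and sample sizes are illustrative assumptions):

```python
import numpy as np

def bridge_ratio(q1, q2, draws1, draws2):
    """Bridge estimate of c1/c2 with the geometric bridge alpha = (q1*q2)**(-1/2):
    c1/c2 ~= E_p2[sqrt(q1/q2)] / E_p1[sqrt(q2/q1)]."""
    num = np.mean(np.sqrt(q1(draws2) / q2(draws2)))  # expectation under p2
    den = np.mean(np.sqrt(q2(draws1) / q1(draws1)))  # expectation under p1
    return num / den

rng = np.random.default_rng(1)
q1 = lambda x: np.exp(-0.5 * x**2)   # unnormalised N(0,1), c1 = sqrt(2*pi)
q2 = lambda x: np.exp(-x**2 / 8.0)   # unnormalised N(0,4), c2 = sqrt(8*pi)
r = bridge_ratio(q1, q2,
                 rng.standard_normal(50000),        # draws from p1 = N(0,1)
                 2.0 * rng.standard_normal(50000))  # draws from p2 = N(0,4)
# true c1/c2 = sqrt(2*pi) / sqrt(8*pi) = 0.5
```

The extension discussed in the abstract replaces the i.i.d. draws above with reversible jump output, so the two expectations can range over spaces of different dimension.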
An extension of Peskun ordering to continuous time Markov chains
Peskun ordering is a partial ordering defined on the space of transition matrices of discrete-time Markov chains. If the Markov chains are reversible with respect to a common stationary distribution π, Peskun ordering implies an ordering on the asymptotic variances of the resulting Markov chain Monte Carlo estimators of integrals with respect to π. Peskun ordering is also relevant in the framework of time-invariance estimating equations, in that it provides a necessary condition for ordering the asymptotic variances of the resulting estimators. In this paper Peskun ordering is extended from discrete-time to continuous-time Markov chains.
Key words and phrases: Peskun ordering, covariance ordering, efficiency ordering, MCMC, time-invariance estimating equations, asymptotic variance, continuous-time Markov chains.
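The discrete-time ordering referred to above is the standard Peskun (1973) statement (notation here is a sketch, not quoted from the paper):

```latex
% Peskun ordering: P_1 dominates P_2 off the diagonal.
P_1 \succeq P_2 \iff P_1(x,y) \ge P_2(x,y) \quad \text{for all } x \ne y .
% If P_1, P_2 are reversible w.r.t. \pi, then for every f \in L^2(\pi)
v(f, P_1) \le v(f, P_2),
\qquad
v(f, P) = \lim_{n \to \infty} n \, \mathrm{Var}\!\left( \frac{1}{n} \sum_{i=1}^{n} f(X_i) \right),
```

i.e. a chain that moves off the current state at least as often, in every direction, yields MCMC estimators with no larger asymptotic variance.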
Parallel hierarchical sampling: a general-purpose class of multiple-chains MCMC algorithms
This paper introduces the Parallel Hierarchical Sampler (PHS), a class of Markov chain Monte Carlo algorithms using several interacting chains having the same target distribution but different mixing properties. Unlike any single-chain MCMC algorithm, upon reaching stationarity one of the PHS chains, which we call the “mother” chain, attains exact Monte Carlo sampling of the target distribution of interest. We empirically show that this translates into a dramatic improvement in the sampler’s performance with respect to single-chain MCMC algorithms. Convergence of the PHS joint transition kernel is proved and its relationships with single-chain samplers, Parallel Tempering (PT) and variable augmentation algorithms are discussed. We then provide two illustrative examples comparing the accuracy of PHS with
Bayesian estimate of credit risk via MCMC with delayed rejection
We develop a Bayesian hierarchical logistic regression model to predict the credit risk of companies classified in different sectors. Explanatory variables derived by experts from balance sheets are included. Markov chain Monte Carlo (MCMC) methods are used to estimate the proposed model. In particular, we show how the delayed rejection strategy outperforms the standard Metropolis-Hastings algorithm in terms of asymptotic efficiency of the resulting estimates. The advantages of our model over others proposed in the literature are discussed and tested via cross-validation procedures.
Keywords: Asymptotic efficiency of MCMC estimates, Credit risk, Default risk, Delayed rejection, Hierarchical logistic regression, Metropolis-Hastings algorithm
Bridge estimation of the probability density at a point
Bridge estimation, as described by Meng and Wong in 1996, is used to estimate the value taken by a probability density at a point in the state space. When the normalisation of the prior density is known, this value may be used to estimate a Bayes factor. It is shown that the multi-block Metropolis-Hastings estimators of Chib and Jeliazkov (2001) are bridge estimators. This identification leads to more efficient estimators for the quantity of interest.
Keywords: Bayes factor, Bridge estimators, Marginal likelihood, Markov chain Monte Carlo, Metropolis-Hastings algorithms
Density estimators through Zero Variance Markov Chain Monte Carlo
A Markov Chain Monte Carlo method is proposed for the pointwise evaluation of a density whose normalizing constant is not known. This method was introduced in the physics literature by Assaraf et al (2007). Conditions for unbiasedness of the estimator are derived. A central limit theorem is also proved under regularity conditions. The new idea is tested on some toy examples.
Keywords: Density estimator, Fundamental solution, MCMC simulation
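The zero-variance idea underlying this line of work can be sketched in its simplest linear control-variate form (this is a generic illustration, not the paper's pointwise density estimator; the target, sample and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard normal, whose score (gradient of the log-density)
# is u(x) = -x. The score has mean zero under the target, so f + a*u is
# an unbiased estimator of E[f] for any coefficient a.
x = rng.standard_normal(10000)  # i.i.d. draws standing in for MCMC output
f = x                           # estimand: E[x] = 0
u = -x                          # score of N(0,1)

# Variance-minimising coefficient via least squares.
a = -np.cov(f, u)[0, 1] / np.var(u)
f_zv = f + a * u

# For f(x) = x and a linear control variate the variance is removed
# (up to sampling error in a), which is where "zero variance" comes from.
```

In practice the score is available whenever the log-density is known up to its normalizing constant, which is exactly the setting of the abstract above.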
Coalescence time and second largest eigenvalue modulus in the monotone reversible case
If T is the coalescence time of the Propp and Wilson [15] perfect simulation algorithm, the aim of this paper is to show that T depends on the second largest eigenvalue modulus of the transition matrix of the underlying Markov chain. This gives a relationship between the ordering based on the speed of convergence to stationarity in total variation distance and the ordering defined in terms of speed of coalescence in perfect simulation.
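Coalescence under a monotone coupling can be simulated directly: run the chains started from the maximal and minimal states with shared randomness and record when they meet. A toy sketch (a forward coupling on a lazy reflected random walk, not the paper's analysis; the chain and its parameters are illustrative assumptions):

```python
import numpy as np

def coalescence_time(step, top, bottom, seed=0):
    """Time until the maximal and minimal trajectories, driven by common
    randomness, meet. Monotonicity of `step` in its first argument keeps
    the two trajectories ordered until they coalesce."""
    rng = np.random.default_rng(seed)
    hi, lo, t = top, bottom, 0
    while hi != lo:
        u = rng.random()                 # shared innovation
        hi, lo = step(hi, u), step(lo, u)
        t += 1
    return t

# Monotone lazy random walk on {0, ..., 20} with reflecting boundaries.
K = 20
def step(x, u):
    if u < 0.25:
        return max(x - 1, 0)
    if u > 0.75:
        return min(x + 1, K)
    return x

t = coalescence_time(step, top=K, bottom=0)
```

For such monotone chains only the two extreme trajectories need to be tracked, which is what makes the coalescence time T a tractable object to relate to the second largest eigenvalue modulus.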