Many modern applications collect highly imbalanced categorical data, with
some categories relatively rare. Bayesian hierarchical models combat data
sparsity by borrowing information, while also quantifying uncertainty. However,
posterior computation presents a fundamental barrier to routine use; no single
class of algorithms works well in all settings, and practitioners waste time
trying different types of MCMC approaches. This article was motivated by
an application to quantitative advertising in which we encountered extremely
poor computational performance for common data augmentation MCMC algorithms but
obtained excellent performance for adaptive Metropolis. To obtain a deeper
understanding of this behavior, we provide strong theoretical results on
computational complexity in an infinitely imbalanced asymptotic regime. These
results show that the computational complexity of Metropolis is logarithmic in
the sample size, while that of data augmentation is polynomial in the sample
size. The root cause of the poor
performance of data augmentation is a discrepancy between the rates at which
the target density and MCMC step sizes concentrate. In general, MCMC algorithms
that exhibit a similar discrepancy will fail in large samples, a result with
substantial practical impact.