In Variational Inference (VI), coordinate-ascent and gradient-based
approaches are two major types of algorithms for approximating
difficult-to-compute probability densities. In real-world implementations of
complex models, Monte Carlo methods are widely used to estimate expectations in
coordinate-ascent approaches and gradients in derivative-driven ones. We
discuss a Monte Carlo Co-ordinate Ascent VI (MC-CAVI) algorithm that makes use
of Markov chain Monte Carlo (MCMC) methods in the calculation of expectations
required within Co-ordinate Ascent VI (CAVI). We show that, under regularity
conditions, an MC-CAVI recursion will get arbitrarily close to a maximiser of
the evidence lower bound (ELBO) with any given high probability. In numerical
examples, the performance of the MC-CAVI algorithm is compared with that of
MCMC and -- as a representative of derivative-based VI methods -- of Black Box
VI (BBVI).
(BBVI). We discuss and demonstrate MC-CAVI's suitability for models with hard
constraints in simulated and real examples. We compare MC-CAVI's performance
with that of MCMC in an important complex model used in Nuclear Magnetic
Resonance (NMR) spectroscopy data analysis -- BBVI is nearly impossible to
employ in this setting due to the hard constraints involved in the model.
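To illustrate the core idea -- replacing the closed-form expectations inside CAVI updates with Monte Carlo estimates -- here is a minimal, hypothetical sketch on a toy Normal-Gamma model. It is not the paper's algorithm or models; the factor forms, sample size `M`, and iteration count are illustrative assumptions, and plain i.i.d. sampling from the current variational factors stands in for the MCMC estimates used in MC-CAVI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x_i ~ N(mu, 1/tau), priors mu ~ N(0, 1), tau ~ Gamma(a0, b0).
x = rng.normal(2.0, 1.0, size=200)
n, xbar = len(x), x.mean()
a0, b0 = 1.0, 1.0

# Mean-field factors q(mu) = N(m, s2), q(tau) = Gamma(a, b); rough initial values.
m, s2, a, b = 0.0, 1.0, 1.0, 1.0
M = 500  # Monte Carlo samples per expectation (an illustrative choice)

for _ in range(50):
    # Update q(mu): needs E_q[tau]; plain CAVI would use the closed form a/b,
    # here it is estimated by sampling from the current q(tau).
    E_tau = rng.gamma(a, 1.0 / b, size=M).mean()
    s2 = 1.0 / (1.0 + n * E_tau)
    m = s2 * n * E_tau * xbar
    # Update q(tau): needs E_q[(x_i - mu)^2], estimated via samples of mu.
    mu_samp = rng.normal(m, np.sqrt(s2), size=M)
    E_sq = ((x[:, None] - mu_samp[None, :]) ** 2).mean()
    a = a0 + n / 2.0
    b = b0 + 0.5 * n * E_sq

print(m, a / b)  # approximate posterior means of mu and tau
```

The coordinate updates themselves are the standard mean-field ones for this conjugate toy model; the only change is that each required expectation is replaced by a sample average, which is what makes the scheme applicable when those expectations are intractable.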