Refined $\alpha$-Divergence Variational Inference via Rejection Sampling
We present an approximate inference method, based on a synergistic
combination of R\'enyi $\alpha$-divergence variational inference (RDVI) and
rejection sampling (RS). RDVI is based on minimization of the R\'enyi
$\alpha$-divergence $D_\alpha(p\|q_\theta)$ between the true distribution $p(x)$ and a
variational approximation $q_\theta(x)$; RS draws samples from a distribution $p(x)$ using a proposal $q_\theta(x)$, s.t. $M q_\theta(x) \geq p(x)$ for all $x$. Our inference method is based on a crucial observation that
$D_\infty(p\|q_\theta)$ equals $\log M(\theta)$ where $M(\theta)$ is the optimal value
of the RS constant for a given proposal $q_\theta(x)$. This enables us to
develop a \emph{two-stage} hybrid inference algorithm. Stage-1 performs RDVI to
learn $\theta$ by minimizing an estimator of $D_\alpha(p\|q_\theta)$, and uses the
learned $\theta$ to find an (approximately) optimal constant $M(\theta)$.
Stage-2 performs RS using the constant $M(\theta)$ to improve the
approximate distribution $q_\theta$ and obtain a sample-based approximation. We
prove that this two-stage method allows us to learn considerably more accurate
approximations of the target distribution as compared to RDVI. We demonstrate
our method's efficacy via several experiments on synthetic and real datasets.

Comment: 6 pages, 1 figure
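To make the two-stage idea concrete, the sketch below runs rejection sampling on a toy 1-D target with a fixed Gaussian proposal. The target (a Gaussian mixture), the proposal, and the grid-based estimate of the constant $M$ are illustrative assumptions standing in for the paper's learned $q_\theta$ and bound; they are not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy target p: a two-component Gaussian mixture (illustrative assumption).
def p_pdf(x):
    return 0.5 * stats.norm.pdf(x, -2, 1) + 0.5 * stats.norm.pdf(x, 2, 1)

# Fixed proposal q, standing in for a learned q_theta.
q = stats.norm(0, 3)

# Stage-1 stand-in: estimate the optimal RS constant M = sup_x p(x)/q(x)
# on a grid, so that log M plays the role of D_infinity(p || q).
grid = np.linspace(-10, 10, 10001)
M = np.max(p_pdf(grid) / q.pdf(grid))

# Stage-2: rejection sampling with constant M; accepted draws follow p,
# refining the proposal-based approximation into a sample-based one.
def rejection_sample(n):
    samples = []
    while len(samples) < n:
        x = q.rvs(size=n, random_state=rng)
        u = rng.uniform(size=n)
        samples.extend(x[u < p_pdf(x) / (M * q.pdf(x))].tolist())
    return np.array(samples[:n])

xs = rejection_sample(5000)
# The mixture has mean 0 and variance 1 + 2^2 = 5; the sample moments
# of the accepted draws should be close to these values.
print(xs.mean(), xs.var())
```

The acceptance rate is $1/M$, so a proposal with smaller $D_\infty(p\|q)$ (hence smaller $M$) wastes fewer proposals, which is the motivation for learning $\theta$ in Stage-1 before running RS.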