
Rényi divergence variational inference

Abstract

This paper introduces the variational Rényi bound (VR) that extends traditional variational inference to Rényi's α-divergences. This new family of variational methods unifies a number of existing approaches, and enables a smooth interpolation from the evidence lower-bound to the log (marginal) likelihood that is controlled by the value of α that parametrises the divergence. The reparameterization trick, Monte Carlo approximation and stochastic optimisation methods are deployed to obtain a tractable and unified framework for optimisation. We further consider negative α values and propose a novel variational inference method as a new special case in the proposed framework. Experiments on Bayesian neural networks and variational auto-encoders demonstrate the wide applicability of the VR bound.

Acknowledgements: YL thanks the Schlumberger Foundation FFTF fellowship. RET thanks EPSRC grants EP/M026957/1 and EP/L000776/1.
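As an illustration only (not the authors' implementation), the following minimal Python sketch estimates the VR bound, commonly written as L_α(q; x) = 1/(1−α) · log E_q[(p(θ, x)/q(θ))^(1−α)], by Monte Carlo with a reparameterised diagonal-Gaussian q. The toy model gaussian_log_joint, the variational parameters mu and log_sigma, and the sample count K are placeholder assumptions; as α → 1 the estimator reduces to the standard evidence lower-bound.

# A minimal sketch (not the authors' code): K-sample Monte Carlo estimate of the
# VR bound with a reparameterised diagonal-Gaussian q(theta). The toy model and
# all parameter values below are illustrative assumptions.
import numpy as np

def vr_bound_estimate(log_joint, mu, log_sigma, alpha, K=50, rng=None):
    """Estimate L_alpha(q; x) = 1/(1-alpha) * log E_q[(p(x, theta)/q(theta))^(1-alpha)]."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((K, mu.size))       # reparameterization: theta = mu + sigma * eps
    theta = mu + sigma * eps
    # log q(theta) for a diagonal Gaussian, expressed through the noise eps
    log_q = -0.5 * np.sum(eps**2 + 2.0 * log_sigma + np.log(2.0 * np.pi), axis=1)
    log_w = np.array([log_joint(t) for t in theta]) - log_q   # log importance weights
    if np.isclose(alpha, 1.0):                    # alpha -> 1 recovers the ELBO
        return float(np.mean(log_w))
    a = (1.0 - alpha) * log_w                     # stabilised log-mean-exp
    return float((a.max() + np.log(np.mean(np.exp(a - a.max())))) / (1.0 - alpha))

# Toy usage: standard-normal prior over theta, unit-variance Gaussian likelihood for x_obs.
x_obs = np.array([0.5, -1.2])
def gaussian_log_joint(theta):
    log_prior = -0.5 * np.sum(theta**2 + np.log(2.0 * np.pi))
    log_lik = -0.5 * np.sum((x_obs - theta)**2 + np.log(2.0 * np.pi))
    return log_prior + log_lik

print(vr_bound_estimate(gaussian_log_joint, mu=np.zeros(2), log_sigma=np.zeros(2), alpha=0.5))

The same estimator applies unchanged to α values other than 1, including the negative α values highlighted in the abstract; in practice it would be paired with stochastic optimisation of mu and log_sigma.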
