Bounding errors of Expectation-Propagation
Expectation Propagation is a very popular algorithm for variational
inference, but comes with few theoretical guarantees. In this article, we prove
that the approximation errors made by EP can be bounded. Our bounds have an
asymptotic interpretation in the number of datapoints, which allows us to
study EP's convergence with respect to the true posterior. In particular, we
show that EP converges at a rate of $O(n^{-2})$ for the mean, up to
an order of magnitude faster than the traditional Gaussian approximation at the
mode. We also give similar asymptotic expansions for moments of order 2 to 4,
as well as excess Kullback-Leibler cost (defined as the additional KL cost
incurred by using EP rather than the ideal Gaussian approximation). All these
expansions highlight the superior convergence properties of EP. Our approach
for deriving those results is likely applicable to many similar approximate
inference methods. In addition, we introduce bounds on the moments of
log-concave distributions that may be of independent interest.

Comment: Accepted and published at NIPS 2016
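To see why the mode-based (Laplace-style) Gaussian approximation that the abstract compares against only achieves an $O(n^{-1})$ error for the mean, a minimal sketch (a hypothetical example, not from the paper) uses a Gamma$(n, n)$ posterior: its mean is exactly 1, while its mode is $(n-1)/n$, so estimating the posterior mean by the mode incurs an error of exactly $1/n$.

```python
def mode_mean_error(n):
    """Error of approximating the posterior mean by the posterior mode
    for a Gamma(shape=n, rate=n) posterior (hypothetical example).

    mean = shape / rate = 1
    mode = (shape - 1) / rate = (n - 1) / n
    """
    mean = 1.0
    mode = (n - 1) / n
    return abs(mean - mode)

# The error shrinks like 1/n, the rate of the mode-based approximation;
# the abstract's claim is that EP's mean estimate shrinks an order faster.
for n in [10, 100, 1000]:
    print(n, mode_mean_error(n))
```

Printing the errors for growing $n$ shows the $1/n$ decay; the point of the paper is that EP's estimate of the mean improves on this by roughly a factor of $n$.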