Inference by Minimizing Size, Divergence, or their Sum
We speed up marginal inference by ignoring factors that do not significantly
contribute to overall accuracy. To pick a suitable subset of factors to
ignore, we propose three schemes: minimizing the number of model factors
under a bound on the KL divergence between the pruned and full models;
minimizing the KL divergence under a bound on factor count; and minimizing
the weighted sum of KL divergence and factor count.
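As a rough formalization (the notation is ours, not the paper's): let F be
the full factor set, S ⊆ F the retained subset, p_F the full model, and p_S
the model with the factors outside S removed. The three objectives can then
be written

    \min_{S \subseteq F} |S| \quad \text{s.t.} \quad \mathrm{KL}(p_S \,\|\, p_F) \le \varepsilon
    \min_{S \subseteq F} \mathrm{KL}(p_S \,\|\, p_F) \quad \text{s.t.} \quad |S| \le k
    \min_{S \subseteq F} \mathrm{KL}(p_S \,\|\, p_F) + \lambda\,|S|

where \varepsilon, k, and \lambda are user-chosen bound and trade-off
parameters (our symbols; the divergence direction shown is one possible
convention).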
All three problems are solved using an approximation of the KL divergence
that can be calculated in terms of marginals computed on a simple seed
graph. Applied to synthetic image denoising and to three different types of
NLP parsing models, this technique performs marginal inference up to 11
times faster than loopy BP, with graph sizes reduced by up to 98%, at
comparable error in marginals and parsing accuracy. We also show that, under
the approximation to the divergence presented here, minimizing the weighted
sum of divergence and size is substantially faster than minimizing either of
the other two objectives.
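One plausible reading of why the weighted-sum objective is the cheapest to
optimize: if the divergence approximation is additive over pruned factors,
the weighted sum decomposes into independent per-factor decisions, whereas
the bounded variants need a sort or a budgeted greedy pass. The Python
sketch below is ours and purely illustrative; the scores d[f] stand in for
nonnegative per-factor divergence contributions computed from seed-graph
marginals, and none of the names come from the paper.

# Illustrative sketch only: d maps each factor id to a hypothetical
# nonnegative score approximating how much pruning that factor raises
# the KL divergence (computed once from marginals on a small seed graph).

def prune_weighted_sum(d, lam):
    """Minimize sum of pruned scores + lam * (number of kept factors).
    The objective decomposes per factor: keep f exactly when pruning it
    would cost more divergence than lam. One pass, no sorting."""
    return {f for f, score in d.items() if score > lam}

def prune_under_count_bound(d, k):
    """Minimize approximate divergence while keeping at most k factors:
    keep the k factors whose removal would be most costly (O(n log n))."""
    ranked = sorted(d, key=d.get, reverse=True)
    return set(ranked[:k])

def prune_under_divergence_bound(d, eps):
    """Minimize factor count while keeping total pruned score <= eps:
    greedily prune the cheapest factors until the budget runs out."""
    kept, budget = set(d), eps
    for f in sorted(d, key=d.get):  # cheapest-to-prune first
        if d[f] > budget:
            break
        budget -= d[f]
        kept.remove(f)
    return kept

# Example with three factors scored against a seed graph.
scores = {"f1": 0.02, "f2": 0.5, "f3": 0.01}
print(prune_weighted_sum(scores, lam=0.05))         # {'f2'}
print(prune_under_count_bound(scores, k=1))         # {'f2'}
print(prune_under_divergence_bound(scores, 0.04))   # {'f2'}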
Comment: Appears in Proceedings of the Twenty-Sixth Conference on
Uncertainty in Artificial Intelligence (UAI 2010).