Sigma Point Belief Propagation
The sigma point (SP) filter, also known as the unscented Kalman filter, is an
attractive alternative to the extended Kalman filter and the particle filter.
Here, we extend the SP filter to nonsequential Bayesian inference corresponding
to loopy factor graphs. We propose sigma point belief propagation (SPBP) as a
low-complexity approximation of the belief propagation (BP) message passing
scheme. SPBP achieves approximate marginalizations of posterior distributions
corresponding to (generally) loopy factor graphs. It is well suited for
decentralized inference because of its low communication requirements. For a
decentralized, dynamic sensor localization problem, we demonstrate that SPBP
can outperform nonparametric (particle-based) BP while requiring significantly
fewer computations and less communication.
Comment: 5 pages, 1 figure
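To make the sigma point machinery concrete, here is a minimal sketch of the unscented transform that the SP filter (and hence SPBP) builds on: a Gaussian belief, represented only by its mean and covariance, is pushed through a nonlinearity by evaluating it at 2n+1 deterministically chosen sigma points. The function name and the tuning constants alpha, beta, and kappa follow the usual unscented-transform conventions and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the unscented (sigma point) transform: propagate a
# Gaussian belief through a nonlinearity using 2n+1 deterministic points.
# alpha, beta, kappa are the usual UT tuning constants (illustrative values).
def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)       # scaled matrix square root
    # Sigma points: the mean, plus/minus the columns of L.
    sigma = np.vstack([mean, mean + L.T, mean - L.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])           # push points through f
    mean_y = wm @ y                               # reconstruct the moments
    d = y - mean_y
    cov_y = (wc[:, None] * d).T @ d
    return mean_y, cov_y

# Example: a 2-D Gaussian position pushed through a range measurement ||x||.
m, P = np.array([1.0, 2.0]), np.diag([0.1, 0.2])
print(unscented_transform(m, P, lambda x: np.array([np.linalg.norm(x)])))
```

In SPBP, messages and beliefs would be kept in this mean/covariance form, so neighboring sensors only exchange a short vector and a small matrix; this parametric representation is the source of the low communication cost noted above.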
Block Belief Propagation for Parameter Learning in Markov Random Fields
Traditional methods for training Markov random fields require running
inference over all variables to compute the likelihood gradient, so their
iteration complexity scales with the size of the graphical model. In this
paper, we propose \emph{block belief propagation learning} (BBPL), which uses
block-coordinate updates of approximate marginals to compute approximate
gradients, removing the need to run inference over the entire graphical model.
The iteration complexity of BBPL therefore does not scale with the size of the
graph. We prove that, despite these approximations, the method converges to
the same solution as that obtained with full inference per iteration, and we
empirically demonstrate its scalability improvements over standard training
methods.
Comment: Accepted to AAAI 201
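As a concrete illustration of the block-coordinate idea, the hedged sketch below trains a single shared edge weight of a toy binary chain MRF: each iteration refreshes the sum-product messages of only one block of edges, then takes a gradient step computed from the current (partly stale) beliefs. The chain model, the two-block split, and the target statistic are invented for illustration and are not the paper's experimental setup.

```python
import numpy as np

# Toy sketch of block-coordinate learning: a binary chain MRF with one
# shared edge weight w and potential phi(xi, xj) = exp(w * 1[xi == xj]).
N = 8
EDGES = [(i, i + 1) for i in range(N - 1)]
BLOCKS = [EDGES[: len(EDGES) // 2], EDGES[len(EDGES) // 2:]]

def phi(w):
    return np.exp(w * np.eye(2))

def incoming(msgs, s, exclude):
    """Product of messages into variable s, excluding the one from `exclude`."""
    inc = np.ones(2)
    for (u, v), m in msgs.items():
        if v == s and u != exclude:
            inc = inc * m
    return inc

def bp_sweep(msgs, w, edges):
    """One sum-product sweep restricted to a block of edges."""
    out = dict(msgs)
    for (i, j) in edges:
        for (s, t) in ((i, j), (j, i)):   # both message directions
            m = phi(w) @ incoming(msgs, s, exclude=t)
            out[(s, t)] = m / m.sum()
    return out

def agreement(msgs, w, i, j):
    """Approximate P(x_i == x_j) from the current edge belief."""
    b = phi(w) * np.outer(incoming(msgs, i, j), incoming(msgs, j, i))
    return np.trace(b / b.sum())

msgs = {d: np.ones(2) / 2 for e in EDGES for d in (e, e[::-1])}
emp = 0.8        # pretend 80% of neighboring pairs agree in the data
w, step = 0.0, 0.5
for it in range(60):
    msgs = bp_sweep(msgs, w, BLOCKS[it % 2])   # update only one block
    model = np.mean([agreement(msgs, w, i, j) for (i, j) in EDGES])
    w += step * (emp - model)                  # approximate gradient step
print("learned w:", round(w, 3))
```

Messages outside the active block go stale between updates; BBPL's convergence result says the coupled parameter/marginal iteration nevertheless reaches the same solution as running full inference at every step.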
Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference
We propose an approach to lifted approximate inference for first-order
probabilistic models, such as Markov logic networks. It is based on performing
exact lifted inference in a simplified first-order model, which is found by
relaxing first-order constraints, and then compensating for the relaxation.
These simplified models can be incrementally improved by carefully recovering
constraints that have been relaxed, also at the first-order level. This leads
to a spectrum of approximations, with lifted belief propagation at one end and
exact lifted inference at the other. We discuss how relaxation, compensation,
and recovery can all be performed at the first-order level, and we show
empirically that our approach substantially improves on the approximations of
both propositional solvers and lifted belief propagation.
Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI 2012)
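The control flow is easiest to see at the propositional level. In the toy sketch below, the loop-closing factor over (x0, x2) is relaxed by splitting x2 into a copy x3 and dropping the equivalence x3 == x2; compensation adds unary potentials on the two copies and rescales them until their marginals agree (a simplified stand-in for the paper's compensation scheme); recovering the equivalence restores the exact model. Brute-force enumeration stands in for exact inference, and all names are illustrative.

```python
import numpy as np
from itertools import product

# Propositional toy of the relax / compensate / recover loop on a
# 3-variable binary cycle. Enumeration stands in for exact inference.
def pair(w):
    return np.exp(w * np.eye(2))           # agreement potential

def marginals(factors, unaries, n):
    """Exact unary marginals by enumerating all 2^n states."""
    p = np.zeros((n, 2))
    for x in product((0, 1), repeat=n):
        w = 1.0
        for (i, j), f in factors.items():
            w *= f[x[i], x[j]]
        for i, u in unaries.items():
            w *= u[x[i]]
        for i in range(n):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

loopy = {(0, 1): pair(1.0), (1, 2): pair(1.0), (0, 2): pair(1.0)}

# Relax: split x2 in the loop-closing factor (0, 2) into a copy x3 and
# drop the equivalence x3 == x2, leaving a tree-structured model.
relaxed = {(0, 1): pair(1.0), (1, 2): pair(1.0), (0, 3): pair(1.0)}

# Compensate: rescale unary potentials on the two copies until they
# agree on their marginals (simplified marginal-matching rule).
unaries = {2: np.ones(2), 3: np.ones(2)}
for _ in range(200):
    p = marginals(relaxed, unaries, 4)
    unaries[2] = unaries[2] * np.sqrt(p[3] / p[2])
    unaries[3] = unaries[3] * np.sqrt(p[2] / p[3])

approx = marginals(relaxed, unaries, 4)[:3]

# Recover: reinstating the equivalence restores the original model,
# i.e. exact inference at the far end of the spectrum.
exact = marginals(loopy, {}, 3)
print("relaxed+compensated:", approx[0], " exact:", exact[0])
```

Recovering relaxed constraints one at a time, most damaging first, traces out the spectrum described above, from a BP-like approximation toward exact inference.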