Deeply Learning the Messages in Message Passing Inference
Deep structured output learning shows great promise in tasks like semantic
image segmentation. We propose a new, efficient deep structured model learning
scheme, in which we show how deep Convolutional Neural Networks (CNNs) can be
used to estimate the messages in message passing inference for structured
prediction with Conditional Random Fields (CRFs). With such CNN message
estimators, we obviate the need to learn or evaluate potential functions for
message calculation. This confers significant efficiency for learning, since
otherwise when performing structured learning for a CRF with CNN potentials it
is necessary to undertake expensive inference for every stochastic gradient
iteration. The network output dimension for message estimation is the same as
the number of classes, in contrast to the network output for general CNN
potential functions in CRFs, which is exponential in the order of the
potentials. Hence CNN message learning has fewer network parameters and is more
scalable in cases where a large number of classes is involved. We apply our
method to semantic image segmentation on the PASCAL VOC 2012 dataset. We
achieve an intersection-over-union score of 73.4 on its test set, which is the
best reported result for methods using the VOC training images alone. This
impressive performance demonstrates the effectiveness and usefulness of our CNN
message learning method.
Comment: 11 pages. Appearing in Proc. The Twenty-ninth Annual Conference on
Neural Information Processing Systems (NIPS), 2015, Montreal, Canada
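The key size argument above can be made concrete with a toy sum-product computation. A minimal sketch (not the paper's code; `K`, `unary_i`, and `pairwise` are illustrative names) showing that a factor-to-variable message in a pairwise CRF is a K-vector, one entry per class, whereas the pairwise potential table it is computed from has K**2 entries; the paper's CNN regresses the K-vector directly, skipping the table:

```python
import numpy as np

K = 5                              # number of classes (illustrative)
rng = np.random.default_rng(0)
unary_i = rng.random(K)            # exp(unary potential) at variable i
pairwise = rng.random((K, K))      # exp(pairwise potential) over (i, j)

# Classic sum-product message to variable j: marginalize the pairwise
# table against variable i's local belief.
msg_to_j = pairwise.T @ unary_i    # shape (K,): one entry per class
msg_to_j /= msg_to_j.sum()         # normalize

# The message has K entries, while the potential table needs K**2;
# for higher-order potentials the table grows exponentially in the order.
assert msg_to_j.shape == (K,)
assert pairwise.size == K ** 2
```

This is why a CNN whose output dimension equals the number of classes suffices for message estimation, while a CNN that models the potential itself needs an output exponential in the potential's order.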
Consensus Message Passing for Layered Graphical Models
Generative models provide a powerful framework for probabilistic reasoning.
However, in many domains their use has been hampered by the practical
difficulties of inference. This is particularly the case in computer vision,
where models of the imaging process tend to be large, loopy and layered. For
this reason bottom-up conditional models have traditionally dominated in such
domains. We find that widely-used, general-purpose message passing inference
algorithms such as Expectation Propagation (EP) and Variational Message Passing
(VMP) fail on the simplest of vision models. With these models in mind, we
introduce a modification to message passing that learns to exploit their
layered structure by passing 'consensus' messages that guide inference towards
good solutions. Experiments on a variety of problems show that the proposed
technique leads to significantly more accurate inference results, not only when
compared to standard EP and VMP, but also when compared to competitive
bottom-up conditional models.
Comment: Appearing in Proceedings of the 18th International Conference on
Artificial Intelligence and Statistics (AISTATS) 2015
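The guiding idea can be illustrated with a toy aggregation step. A minimal sketch, not the paper's model (the paper learns regressors to produce consensus messages; here a simple geometric mean of noisy bottom-up messages stands in for the learned aggregator, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                             # number of states
child_msgs = rng.dirichlet(np.ones(K), size=10)   # 10 noisy bottom-up messages

# 'Consensus' message: pool the children into one strong hint. A geometric
# mean (average in log space) stands in for the learned regressor.
log_consensus = np.log(child_msgs).mean(axis=0)
consensus = np.exp(log_consensus - log_consensus.max())
consensus /= consensus.sum()

# Inject the consensus message into the layered variable's belief update
# alongside its ordinary prior, steering inference toward a good solution.
prior = np.full(K, 1.0 / K)
belief = prior * consensus
belief /= belief.sum()
```

The point of the sketch is the routing, not the pooling rule: an extra aggregated message is passed into the layered model so that standard EP/VMP-style updates start from, and stay near, a good region of the posterior.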
Block Belief Propagation for Parameter Learning in Markov Random Fields
Traditional learning methods for training Markov random fields require doing
inference over all variables to compute the likelihood gradient. The iteration
complexity for those methods therefore scales with the size of the graphical
models. In this paper, we propose \emph{block belief propagation learning}
(BBPL), which uses block-coordinate updates of approximate marginals to compute
approximate gradients, removing the need to compute inference on the entire
graphical model. Thus, the iteration complexity of BBPL does not scale with the
size of the graphs. We prove that the method converges to the same solution as
that obtained by using full inference per iteration, despite these
approximations, and we empirically demonstrate its scalability improvements
over standard training methods.
Comment: Accepted to AAAI 2019
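The block-coordinate idea behind BBPL can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `refresh_block` is a placeholder for running belief propagation restricted to one block, and the model and names are assumptions. The point is that each gradient step refreshes marginals only inside one block, reusing stale marginals elsewhere, so per-iteration cost does not scale with the whole graph:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vars, K, n_blocks = 12, 3, 4
marginals = np.full((n_vars, K), 1.0 / K)      # persisted across iterations
blocks = np.array_split(np.arange(n_vars), n_blocks)

def refresh_block(block, theta):
    # Placeholder for BP on the subgraph induced by `block`: here just a
    # softmax of the current parameters, to keep the sketch self-contained.
    logits = theta[block]
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

theta = rng.normal(size=(n_vars, K))
empirical = rng.dirichlet(np.ones(K), size=n_vars)  # data statistics

for t in range(8):
    block = blocks[t % n_blocks]               # cycle over blocks
    marginals[block] = refresh_block(block, theta)
    # Approximate likelihood gradient: data statistics minus current
    # (partly stale) model marginals, as in standard MRF learning.
    grad = empirical - marginals
    theta += 0.5 * grad
```

The convergence claim in the abstract is that cycling blocks this way reaches the same solution as refreshing all marginals with full inference at every step.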