Block Belief Propagation for Parameter Learning in Markov Random Fields
Traditional learning methods for training Markov random fields require doing
inference over all variables to compute the likelihood gradient. The iteration
complexity for those methods therefore scales with the size of the graphical
models. In this paper, we propose \emph{block belief propagation learning}
(BBPL), which uses block-coordinate updates of approximate marginals to compute
approximate gradients, removing the need to run inference over the entire
graphical model. Thus, the iteration complexity of BBPL does not scale with the
size of the graphs. We prove that the method converges to the same solution as
that obtained by using full inference per iteration, despite these
approximations, and we empirically demonstrate its scalability improvements
over standard training methods.
Comment: Accepted to AAAI 2019
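As a concrete illustration of the block-coordinate idea, consider the minimal sketch below; it is not the authors' code. Only one block of approximate marginals is refreshed per learning iteration, and the partially stale beliefs still feed a moment-matching gradient. Mean-field updates stand in for belief propagation to keep the sketch self-contained, and the model, block size, and target statistic are all illustrative.

```python
import numpy as np

# Toy Ising chain: spins s_i in {-1,+1}, a shared coupling w on every edge,
# and a small fixed field h. The likelihood gradient for w is (empirical edge
# correlation) - (model edge correlation), with model correlations read off
# approximate marginals. Mean-field updates stand in for BP-style messages.
N = 60
edges = [(i, i + 1) for i in range(N - 1)]
neighbors = {i: [] for i in range(N)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

target_corr = 0.4                 # synthetic stand-in for the empirical E[s_i s_j]
blocks = [list(range(b, b + 10)) for b in range(0, N, 10)]
w, h = 0.0, 0.2                   # coupling to learn; fixed symmetry-breaking field
m = np.zeros(N)                   # approximate marginal means E[s_i] ("beliefs")

for it in range(300):
    block = blocks[it % len(blocks)]          # block-coordinate schedule
    for _ in range(5):                        # refresh beliefs ONLY in this block;
        for i in block:                       # all other beliefs stay stale
            m[i] = np.tanh(h + w * sum(m[j] for j in neighbors[i]))
    model_corr = np.mean([m[i] * m[j] for i, j in edges])
    w += 0.05 * (target_corr - model_corr)    # approximate gradient step on w

print(f"learned coupling w = {w:.3f}")        # per-iteration cost ~ |block|, not N
```

The point of the schedule is visible in the inner loop: each iteration touches only one block of beliefs, so the cost per gradient step is governed by the block size rather than the full graph.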
Towards Accurate One-Stage Object Detection with AP-Loss
One-stage object detectors are trained by optimizing classification-loss and
localization-loss simultaneously, with the former suffering severely from the
extreme foreground-background class imbalance caused by the large number of anchors.
This paper alleviates this issue by proposing a novel framework to replace the
classification task in one-stage detectors with a ranking task, and adopting
the Average-Precision loss (AP-loss) for the ranking problem. Due to its
non-differentiability and non-convexity, the AP-loss cannot be optimized
directly. For this purpose, we develop a novel optimization algorithm, which
seamlessly combines the error-driven update scheme of perceptron learning with
the backpropagation algorithm of deep networks. We verify the good convergence
properties of the proposed algorithm both theoretically and empirically.
Experimental results demonstrate that AP-loss yields notable performance
improvements over different kinds of classification losses in state-of-the-art
one-stage detectors on various benchmarks, without changing the network architectures. Code is
available at https://github.com/cccorn/AP-loss.
Comment: 13 pages, 7 figures, 4 tables, main paper + supplementary material, accepted to CVPR 2019
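To make the error-driven scheme concrete, here is a hedged numpy sketch, simplified from (not identical to) the paper's algorithm: because AP is piecewise constant in the scores, no useful analytic gradient exists, so each score is assigned a perceptron-style error that backpropagation then carries into the network. The `delta` smoothing window and the omission of positive-positive ranking terms are simplifications.

```python
import numpy as np

def ap_error_signal(scores, labels, delta=1.0):
    """Perceptron-style error signal for an AP-like ranking loss (sketch).

    AP is piecewise constant in `scores`, so instead of differentiating it,
    each score receives an error-driven update that is handed to ordinary
    backpropagation as d(loss)/d(score). Simplified from the paper's scheme.
    """
    pos = np.where(labels == 1)[0]
    neg = np.where(labels == 0)[0]
    grad = np.zeros_like(scores, dtype=float)
    for i in pos:
        x = scores[neg] - scores[i]                      # pairwise score gaps
        viol = np.clip((x + delta) / (2 * delta), 0, 1)  # smoothed step function
        rank = 1.0 + viol.sum()                          # soft rank of positive i
        grad[i] -= viol.sum() / rank                     # pull the positive up
        grad[neg] += viol / rank                         # push violating negatives down
    return grad / max(len(pos), 1)

# Toy check: the highest-scoring negative gets the largest downward pressure.
scores = np.array([2.0, 1.5, 0.2, 3.0])
labels = np.array([1, 0, 0, 0])
print(ap_error_signal(scores, labels))
```

The returned vector plugs into the chain rule exactly where an analytic derivative of the loss with respect to the scores would normally go, which is how the perceptron-style update and backpropagation are combined.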
Learning to Approximate a Bregman Divergence
Bregman divergences generalize measures such as the squared Euclidean
distance and the KL divergence, and arise throughout many areas of machine
learning. In this paper, we focus on the problem of approximating an arbitrary
Bregman divergence from supervision, and we provide a well-principled approach
to analyzing such approximations. We develop a formulation and algorithm for
learning arbitrary Bregman divergences based on approximating their underlying
convex generating function via a piecewise linear function. We provide
theoretical approximation bounds using our parameterization and show that the
generalization error for metric learning using our framework
matches the known generalization error in the strictly less general Mahalanobis
metric learning setting. We further demonstrate empirically that our method
performs well in comparison to existing metric learning methods, particularly
for clustering and ranking problems.
Comment: 19 pages, 4 figures
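For context, a Bregman divergence is D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> for a convex generator phi, and the paper parameterizes phi as a maximum of affine pieces. The sketch below illustrates that construction; it is not the authors' implementation, and in the actual method the pieces (A, b) are learned from supervision rather than hand-built as here.

```python
import numpy as np

def bregman_pwl(x, y, A, b):
    """D_phi(x, y) for a piecewise-linear convex generator
    phi(z) = max_k (A[k] @ z + b[k]); A[k*] is a subgradient at y."""
    phi = lambda z: np.max(A @ z + b)
    k_star = np.argmax(A @ y + b)          # affine piece active at y
    return phi(x) - phi(y) - A[k_star] @ (x - y)

# Illustration: tangent lines of z^2 as the affine pieces, so phi ~ z^2 and
# D_phi recovers the squared Euclidean distance on the grid's range.
ts = np.linspace(-3, 3, 61)
A, b = 2.0 * ts[:, None], -ts ** 2         # line at t: 2t*z - t^2
x, y = np.array([1.0]), np.array([2.5])
print(bregman_pwl(x, y, A, b))             # 2.25 = (1.0 - 2.5) ** 2
```

With enough affine pieces the max recovers the generator arbitrarily well on a bounded domain, which is what makes the piecewise-linear parameterization expressive enough to approximate an arbitrary Bregman divergence.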
Measuring efficiency with neural networks. An application to the public sector
In this note we propose artificial neural networks for measuring efficiency as a complementary tool to the common techniques of the efficiency literature. In an application to the public sector, we find that the neural network yields more robust results for ranking decision-making units.
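The note does not spell out its estimator, so the following is only a plausible sketch of a common NN-based efficiency measure, with synthetic data and every modeling choice (network size, log transform, ratio score) assumed rather than taken from the paper: fit a flexible input-output mapping, then score each decision-making unit (DMU) against the fitted practice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 2))     # synthetic inputs (e.g. staff, budget)
y = X[:, 0] ** 0.4 * X[:, 1] ** 0.5 * rng.uniform(0.6, 1.0, 100)  # output with inefficiency

# Fit the network as a flexible proxy for observed practice, then score each
# DMU by observed/predicted output. (A true frontier estimator would shift
# the fit to envelop the data; this ratio is only relative to average
# practice, which is still enough to produce a ranking.)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, np.log(y))                     # log output keeps the toy target well-scaled
efficiency = y / np.exp(net.predict(X))
ranking = np.argsort(-efficiency)         # most efficient DMUs first
```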