Belief propagation for optimal edge cover in the random complete graph
We apply the objective method of Aldous to the problem of finding the
minimum-cost edge cover of the complete graph with random independent and
identically distributed edge costs. The limit, as the number of vertices goes
to infinity, of the expected minimum cost for this problem is known via a
combinatorial approach of Hessler and Wästlund. We provide a proof of this
result using the machinery of the objective method and local weak convergence,
which was used to prove the limit of the random assignment problem.
A proof via the objective method is useful because it provides us with more
information on the nature of the edges incident on a typical root in the
minimum-cost edge cover. We further show that a belief propagation algorithm
converges asymptotically to the optimal solution. This can be applied in a
computational linguistics problem of semantic projection. The belief
propagation algorithm yields a near-optimal solution with lower complexity
than the best known algorithms designed for optimality in worst-case settings.
Comment: Published at http://dx.doi.org/10.1214/13-AAP981 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
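To make the objective concrete, here is a minimal brute-force sketch of the minimum-cost edge cover on a small complete graph (in Python, with made-up fixed edge costs; the paper treats i.i.d. random costs and an asymptotically convergent BP algorithm, which this does not implement):

```python
from itertools import combinations

def min_cost_edge_cover(n, cost):
    """Brute-force minimum-cost edge cover of the complete graph K_n.

    cost maps frozenset({u, v}) -> edge cost. Every vertex must be
    incident to at least one chosen edge. Exponential time; only for
    illustrating the objective that belief propagation optimizes."""
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    best_cost, best_cover = float("inf"), None
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            if len(set().union(*subset)) == n:  # all vertices covered
                c = sum(cost[e] for e in subset)
                if c < best_cost:
                    best_cost, best_cover = c, subset
    return best_cost, best_cover

# Hypothetical fixed costs on K_4 (stand-in for random i.i.d. costs).
cost = {frozenset({0, 1}): 1.0, frozenset({0, 2}): 2.0,
        frozenset({0, 3}): 5.0, frozenset({1, 2}): 4.0,
        frozenset({1, 3}): 3.0, frozenset({2, 3}): 1.5}
best_cost, best_cover = min_cost_edge_cover(4, cost)
```

Here the optimal cover is the matching {0,1}, {2,3} at cost 2.5; in general an edge cover need not be a matching, which the enumeration over all subset sizes accounts for.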
Optimal Inference in Crowdsourced Classification via Belief Propagation
Crowdsourcing systems are popular for solving large-scale labelling tasks
with low-paid workers. We study the problem of recovering the true labels from
the possibly erroneous crowdsourced labels under the popular Dawid-Skene model.
To address this inference problem, several algorithms have recently been
proposed, but the best known guarantee is still significantly larger than the
fundamental limit. We close this gap by introducing a tighter lower bound on
the fundamental limit and proving that Belief Propagation (BP) exactly matches
this lower bound. The guaranteed optimality of BP is the strongest in the sense
that it is information-theoretically impossible for any other algorithm to
correctly label a larger fraction of the tasks. Experimental results suggest
that BP is close to optimal for all regimes considered and improves upon
competing state-of-the-art algorithms.
Comment: This article is partially based on preliminary results published in
the proceedings of the 33rd International Conference on Machine Learning
(ICML 2016).
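To make the inference setting concrete, the following sketch runs a simple iterative weighted-vote scheme for a one-coin simplification of the Dawid-Skene model (this is not the paper's BP algorithm, and the worker names and data are illustrative):

```python
import math

def iterative_labels(answers, n_iter=20):
    """Estimate binary labels in {-1, +1} from crowdsourced answers.

    answers: list of (task, worker, answer) triples. Alternates a
    reliability-weighted vote per task with re-estimating each worker's
    reliability from agreement with the current labels."""
    tasks = sorted({t for t, _, _ in answers})
    workers = sorted({w for _, w, _ in answers})
    rel = {w: 0.6 for w in workers}       # assumed initial reliability
    labels = {}
    for _ in range(n_iter):
        for t in tasks:
            score = sum(math.log(rel[w] / (1 - rel[w])) * a
                        for tt, w, a in answers if tt == t)
            labels[t] = 1 if score >= 0 else -1
        for w in workers:
            agree = [labels[t] == a for t, ww, a in answers if ww == w]
            rel[w] = min(max(sum(agree) / len(agree), 0.05), 0.95)
    return labels, rel

# Toy data: workers A and B answer correctly, C answers adversarially.
answers = [(0, 'A', 1), (1, 'A', 1), (2, 'A', -1),
           (0, 'B', 1), (1, 'B', 1), (2, 'B', -1),
           (0, 'C', -1), (1, 'C', -1), (2, 'C', 1)]
labels, rel = iterative_labels(answers)
```

The scheme recovers the majority labels and drives worker C's estimated reliability far below A's and B's, illustrating why learning worker quality beats plain majority vote.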
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
This article presents a new search algorithm for the NP-hard problem of
optimizing functions of binary variables that decompose according to a
graphical model. It can be applied to models of any order and structure. The
main novelty is a technique to constrain the search space based on the topology
of the model. When pursued to the full search depth, the algorithm is
guaranteed to converge to a global optimum, passing through a series of
monotonically improving local optima that are guaranteed to be optimal within a
given and increasing Hamming distance. For a search depth of 1, it specializes
to Iterated Conditional Modes. Between these extremes, a useful tradeoff
between approximation quality and runtime is established. Experiments on models
derived from both illustrative and real problems show that approximations found
with limited search depth match or improve those obtained by state-of-the-art
methods based on message passing and linear programming.
Comment: C++ source code available from
http://hci.iwr.uni-heidelberg.de/software.ph
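A minimal sketch of the search scheme, without the topology-based pruning of the subset space that is the paper's main contribution, might look like this (all names here are illustrative):

```python
from itertools import combinations

def lazy_flip(energy, x, max_depth):
    """Depth-limited exhaustive flipping for binary variables.

    Repeatedly searches for a subset of at most max_depth variables
    whose joint flip strictly lowers the energy; terminates at a local
    optimum within Hamming distance max_depth. With max_depth=1 this is
    Iterated Conditional Modes. (Sketch only: the actual Lazy Flipper
    prunes candidate subsets using the graphical model's topology.)"""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for depth in range(1, max_depth + 1):
            for idx in combinations(range(len(x)), depth):
                y = list(x)
                for i in idx:
                    y[i] = 1 - y[i]           # flip the chosen subset
                if energy(y) < energy(x):
                    x, improved = y, True
                    break
            if improved:
                break
    return x

# Illustrative separable energy: Hamming distance to a target vector,
# for which even depth 1 reaches the global optimum.
target = [1, 0, 1, 1]
energy = lambda x: sum(a != b for a, b in zip(x, target))
result = lazy_flip(energy, [0, 0, 0, 0], max_depth=1)
```

Raising `max_depth` trades runtime for the guaranteed Hamming radius of local optimality described above.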
Analysis of the Min-Sum Algorithm for Packing and Covering Problems via Linear Programming
Message-passing algorithms based on belief-propagation (BP) are successfully
used in many applications including decoding error correcting codes and solving
constraint satisfaction and inference problems. BP-based algorithms operate
over graph representations, called factor graphs, that are used to model the
input. Although in many cases BP-based algorithms exhibit impressive empirical
results, not much has been proved when the factor graphs have cycles.
This work deals with packing and covering integer programs in which the
constraint matrix is zero-one, the constraint vector is integral, and the
variables are subject to box constraints. We study the performance of the
min-sum algorithm when applied to the corresponding factor graph models of
packing and covering LPs.
We compare the solutions computed by the min-sum algorithm for packing and
covering problems to the optimal solutions of the corresponding linear
programming (LP) relaxations. In particular, we prove that if the LP has an
optimal fractional solution, then for each fractional component, the min-sum
algorithm either computes multiple solutions or the solution oscillates below
and above the fraction. This implies that the min-sum algorithm computes the
optimal integral solution only if the LP has a unique optimal solution that is
integral.
The converse is not true in general. For a special case of packing and
covering problems, we prove that if the LP has a unique optimal solution that
is integral and on the boundary of the box constraints, then the min-sum
algorithm computes the optimal solution in pseudo-polynomial time.
Our results unify and extend recent results for the maximum weight matching
problem by [Sanghavi et al., 2011] and [Bayati et al., 2011] and for the
maximum weight independent set problem [Sanghavi et al., 2009].
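The fractional-optimum case in the result above can be seen on a tiny covering program. For vertex cover on a triangle, the LP relaxation's optimum is fractional and strictly below the integral optimum, so by the theorem min-sum cannot settle on the optimal integral solution there; this sketch verifies the two optima by enumeration (no min-sum code, for brevity):

```python
from itertools import product

# Vertex cover on a triangle: a covering IP with a zero-one constraint
# matrix. Minimize x_0 + x_1 + x_2 subject to x_u + x_v >= 1 per edge.
edges = [(0, 1), (1, 2), (2, 0)]

def is_cover(x):
    return all(x[u] + x[v] >= 1 for u, v in edges)

# Integral optimum by enumeration: any two vertices cover the triangle.
best_int = min(sum(x) for x in product([0, 1], repeat=3) if is_cover(x))

# The LP relaxation attains 1.5 at x = (1/2, 1/2, 1/2), strictly below
# the integral optimum, so the relaxation's optimum is fractional here.
frac = (0.5, 0.5, 0.5)
lp_value = sum(frac)
```

The odd cycle is the classic example of an integrality gap for covering LPs, which is why it exhibits the oscillating/multiple-solution behavior the abstract describes.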
Coarse-to-Fine Lifted MAP Inference in Computer Vision
There is a vast body of theoretical research on lifted inference in
probabilistic graphical models (PGMs). However, few demonstrations exist where
lifting is applied in conjunction with state-of-the-art applied algorithms. We
pursue the applicability of lifted inference for computer vision (CV), with the
insight that a globally optimal (MAP) labeling will likely have the same label
for two symmetric pixels. The success of our approach lies in efficiently
handling a distinct unary potential on every node (pixel), typical of CV
applications. This allows us to lift the large class of algorithms that model a
CV problem via PGM inference. We propose a generic template for coarse-to-fine
(C2F) inference in CV, which progressively refines an initial coarsely lifted
PGM for varying quality-time trade-offs. We demonstrate the performance of C2F
inference by developing lifted versions of two near state-of-the-art CV
algorithms for stereo vision and interactive image segmentation. We find that,
against flat algorithms, the lifted versions have a much superior anytime
performance, without any loss in final solution quality.
Comment: Published in IJCAI 201
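The core lifting idea can be sketched by grouping pixels whose unary potentials coincide up to a tolerance into super-nodes (an illustrative simplification; the paper's lifting also accounts for the pairwise structure and refines the grouping over time):

```python
def lift_by_unaries(unaries, decimals=2):
    """Group pixels whose unary potential vectors agree after rounding
    into super-nodes -- the coarse level of a coarse-to-fine scheme.
    Refinement would later split groups whose members disagree."""
    groups = {}
    for pixel, potential in enumerate(unaries):
        key = tuple(round(p, decimals) for p in potential)
        groups.setdefault(key, []).append(pixel)
    return list(groups.values())

# Hypothetical two-label unaries: pixels 0 and 1 are nearly symmetric,
# so the coarse model treats them as one variable.
unaries = [(0.10, 0.90), (0.101, 0.899), (0.80, 0.20)]
super_nodes = lift_by_unaries(unaries)
```

Inference on the lifted model then runs over two super-nodes instead of three pixels, which is where the anytime speedup comes from.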
Iterative Bayesian Learning for Crowdsourced Regression
Crowdsourcing platforms emerged as popular venues for purchasing human
intelligence at low cost for large volumes of tasks. As many low-paid workers
are prone to give noisy answers, a common practice is to add redundancy by
assigning multiple workers to each task and then simply average out these
answers. However, to fully harness the wisdom of the crowd, one needs to learn
the heterogeneous quality of each worker. We resolve this fundamental challenge
in crowdsourced regression tasks, i.e., tasks whose answers take continuous
values, where identifying good or bad workers is substantially harder than in
a classification setting with discrete labels. In particular, we introduce a
Bayesian iterative scheme and show that it provably achieves the optimal mean
squared error. Our evaluations on synthetic and real-world datasets support our
theoretical results and show the superiority of the proposed scheme.
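The alternating structure of such a scheme can be sketched as follows (a simplified sketch, not the paper's algorithm; worker names and data are illustrative):

```python
def iterative_regression(answers, n_iter=10):
    """Alternating scheme for crowdsourced regression.

    answers: list of (task, worker, value) triples. Alternates
    precision-weighted averaging of per-task estimates with
    re-estimating each worker's noise variance from residuals, so
    noisier workers receive less weight over time."""
    tasks = sorted({t for t, _, _ in answers})
    workers = sorted({w for _, w, _ in answers})
    var = {w: 1.0 for w in workers}    # assumed initial noise variance
    est = {}
    for _ in range(n_iter):
        for t in tasks:
            obs = [(w, y) for tt, w, y in answers if tt == t]
            wsum = sum(1.0 / var[w] for w, _ in obs)
            est[t] = sum(y / var[w] for w, y in obs) / wsum
        for w in workers:
            res = [(y - est[t]) ** 2 for t, ww, y in answers if ww == w]
            var[w] = max(sum(res) / len(res), 1e-6)
    return est, var

# Toy data: workers g1, g2 answer exactly; worker b is off by +1.
answers = [(t, w, float(t + 1)) for t in range(3) for w in ('g1', 'g2')]
answers += [(t, 'b', float(t + 2)) for t in range(3)]
est, var = iterative_regression(answers)
```

After a few iterations the estimates converge to the accurate workers' answers while worker b's estimated variance stays large, illustrating how learning heterogeneous worker quality beats plain averaging.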