Belief Propagation for Linear Programming
Belief Propagation (BP) is a popular, distributed heuristic for performing
MAP computations in Graphical Models. BP can be interpreted, from a variational
perspective, as minimizing the Bethe Free Energy (BFE). BP can also be used to
solve a special class of Linear Programming (LP) problems. For this class of
problems, MAP inference can be stated as an integer LP with an LP relaxation
that coincides with minimization of the BFE at ``zero temperature''. We
generalize these prior results and establish a tight characterization of the LP
problems that can be formulated as an equivalent LP relaxation of MAP
inference. Moreover, we suggest an efficient, iterative annealing BP algorithm
for solving this broader class of LP problems. We demonstrate the algorithm's
performance on a set of weighted matching problems by using it as a cutting
plane method to solve a sequence of LPs tightened by adding ``blossom''
inequalities.
Comment: To appear in ISIT 201
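As background for how BP performs MAP computation: on a tree-structured model, min-sum message passing (max-product in the log domain) recovers the exact MAP value in one sweep. A minimal self-contained sketch on a toy binary chain — the costs are illustrative and not from the paper, and the paper's annealing BP for general LPs is considerably more involved:

```python
from itertools import product

# Toy chain MRF x1 - x2 - x3 over binary states; costs are illustrative, not from the paper.
theta = {1: [0.0, 1.5], 2: [0.7, 0.0], 3: [0.2, 0.9]}     # unary costs theta_i(x_i)
pair = {(1, 2): [[0.0, 1.0], [1.0, 0.0]],                 # pairwise costs theta_ij(x_i, x_j)
        (2, 3): [[0.0, 2.0], [2.0, 0.0]]}

def min_sum_chain():
    """One forward min-sum sweep; exact MAP value on a tree/chain."""
    m12 = [min(theta[1][x1] + pair[(1, 2)][x1][x2] for x1 in (0, 1)) for x2 in (0, 1)]
    m23 = [min(theta[2][x2] + m12[x2] + pair[(2, 3)][x2][x3] for x2 in (0, 1)) for x3 in (0, 1)]
    return min(theta[3][x3] + m23[x3] for x3 in (0, 1))

def brute_force_map():
    """Exhaustive minimum of the total cost over all 2^3 configurations."""
    return min(theta[1][a] + theta[2][b] + theta[3][c]
               + pair[(1, 2)][a][b] + pair[(2, 3)][b][c]
               for a, b, c in product((0, 1), repeat=3))
```

On loopy graphs the same message updates are only a heuristic, which is what connects BP to LP relaxations of MAP in the first place.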
Phase Transitions in Semidefinite Relaxations
Statistical inference problems arising within signal processing, data mining,
and machine learning naturally give rise to hard combinatorial optimization
problems. These problems become intractable when the dimensionality of the data
is large, as is often the case for modern datasets. A popular idea is to
construct convex relaxations of these combinatorial problems, which can be
solved efficiently for large scale datasets.
Semidefinite programming (SDP) relaxations are among the most powerful
methods in this family, and are surprisingly well-suited for a broad range of
problems where data take the form of matrices or graphs. It has been observed
several times that, when the `statistical noise' is small enough, SDP
relaxations correctly detect the underlying combinatorial structures.
In this paper we develop asymptotic predictions for several `detection
thresholds,' as well as for the estimation error above these thresholds. We
study some classical SDP relaxations for statistical problems motivated by
graph synchronization and community detection in networks. We map these
optimization problems to statistical mechanics models with vector spins, and
use non-rigorous techniques from statistical mechanics to characterize the
corresponding phase transitions. Our results clarify the effectiveness of SDP
relaxations in solving high-dimensional statistical problems.
Comment: 71 pages, 24 pdf figures
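One classical relaxation in this family (a standard formulation given here as background, not quoted from the paper): for Z2 synchronization or two-group community detection with an observed symmetric matrix Y, the combinatorial problem of maximizing x^T Y x over x in {-1, +1}^n is relaxed by replacing the rank-one matrix x x^T with a positive semidefinite matrix of unit diagonal:

```latex
\begin{aligned}
\text{maximize}   \quad & \langle Y, X \rangle \\
\text{subject to} \quad & X \succeq 0, \quad X_{ii} = 1 \ \ (i = 1, \dots, n).
\end{aligned}
```

The ``vector spins'' of the statistical-mechanics mapping arise naturally here: any feasible X can be written as a Gram matrix X_{ij} = <s_i, s_j> of unit vectors s_i.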
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Directions Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
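For readers unfamiliar with the ADMM template the decoder builds on, here is a minimal scaled-form ADMM sketch on a toy splitting — a quadratic plus a box indicator — whose solution is the projection onto [0,1]^n. The problem and all names are illustrative; the paper's z-step projects onto parity polytopes rather than a box:

```python
def admm_box_projection(a, rho=1.0, iters=200):
    """Scaled-form ADMM on: minimize (1/2)||x - a||^2 + I_{[0,1]^n}(z)  s.t.  x = z.

    The solution is the Euclidean projection of a onto the box [0,1]^n; in an
    LP decoder the analogous z-step would project onto parity polytopes instead.
    """
    n = len(a)
    z = [0.0] * n
    u = [0.0] * n  # scaled dual variable
    for _ in range(iters):
        # x-update: argmin (1/2)(x - a)^2 + (rho/2)(x - z + u)^2, closed form per coordinate
        x = [(a[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: Euclidean projection onto [0,1]^n
        z = [min(1.0, max(0.0, x[i] + u[i])) for i in range(n)]
        # dual update on the residual x - z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

For example, `admm_box_projection([-0.5, 0.3, 1.7])` converges to `[0.0, 0.3, 1.0]`. The same three-step pattern — local closed-form update, projection, dual ascent — is what makes the decoder distributed.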
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of lengths more than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, unlike BP, LP decoding
exhibits no error floor. Our implementation of
LP decoding using ADMM executes as fast as our baseline sum-product BP decoder,
is fully parallelizable, and can be seen to implement a type of message-passing
with a particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory
Approximating the Permanent with Fractional Belief Propagation
We discuss schemes for exact and approximate computations of permanents, and
compare them with each other. Specifically, we analyze the Belief Propagation
(BP) approach and its Fractional Belief Propagation (FBP) generalization for
computing the permanent of a non-negative matrix. Known bounds and conjectures
are verified in experiments, and some new theoretical relations, bounds and
conjectures are proposed. The Fractional Free Energy (FFE) functional is
parameterized by a scalar parameter γ ∈ [-1, 1], where γ = -1
corresponds to the BP limit and γ = 1 corresponds to the exclusion
principle (but ignoring perfect matching constraints) Mean-Field (MF) limit.
FFE shows monotonicity and continuity with respect to γ. For every
non-negative matrix, we define its special value γ* to be the γ
for which the minimum of the γ-parameterized FFE functional is
equal to the permanent of the matrix, where the lower and upper bounds of the
γ-interval correspond to respective bounds for the permanent. Our
experimental analysis suggests that the distribution of γ* varies for
different ensembles but γ* always lies within this interval.
Moreover, for all ensembles considered the behavior of γ* is highly
distinctive, offering practical empirical guidance for estimating
permanents of non-negative matrices via the FFE approach.
Comment: 42 pages, 14 figures
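As a reference point for the ``exact computation'' side of this comparison: Ryser's inclusion-exclusion formula evaluates the permanent of an n x n matrix in O(2^n n^2) time (as coded below), far better than the naive n! sum, and gives exact values against which BP/FBP approximations can be benchmarked on small instances. A minimal sketch of both (standard formulas, not specific to the paper):

```python
import math
from itertools import permutations

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    for mask in range(1, 1 << n):           # non-empty column subsets S
        prod_rows = 1.0
        for i in range(n):                   # product over rows of the row-sum restricted to S
            prod_rows *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** (n - bin(mask).count("1")) * prod_rows
    return total

def permanent_naive(A):
    """Reference implementation: sum over all n! permutations."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For example, both functions return 463 on `[[1, 2, 3], [4, 5, 6], [7, 8, 10]]`.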
Zero-Temperature Limit of a Convergent Algorithm to Minimize the Bethe Free Energy
After the discovery that fixed points of loopy belief propagation coincide
with stationary points of the Bethe free energy, several researchers proposed
provably convergent algorithms to directly minimize the Bethe free energy.
These algorithms were formulated only for non-zero temperature (thus finding
fixed points of the sum-product algorithm) and their possible extension to zero
temperature is not obvious. We present the zero-temperature limit of the
double-loop algorithm by Heskes, which converges to a max-product fixed point. The
inner loop of this algorithm is max-sum diffusion. Under certain conditions,
the algorithm combines the complementary advantages of the max-product belief
propagation and max-sum diffusion (LP relaxation): it yields good approximation
of both ground states and max-marginals.
Comment: Research Report
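The two quantities the algorithm approximates can be made concrete by enumeration on a toy model: the ground state is the minimum-energy configuration, and the max-marginal of a variable at a value is the best energy achievable with that variable clamped. A minimal brute-force reference sketch (illustrative energies, not from the paper):

```python
from itertools import product

# Toy pairwise model on three binary variables; energies are illustrative, not from the paper.
unary = [[0.0, 1.0], [0.5, 0.0], [0.3, 0.4]]
pairwise = {(0, 1): [[0.0, 1.2], [1.2, 0.0]],
            (1, 2): [[0.0, 0.8], [0.8, 0.0]]}

def energy(x):
    """Total energy of a configuration x = (x0, x1, x2)."""
    return (sum(unary[i][x[i]] for i in range(3))
            + sum(pairwise[(i, j)][x[i]][x[j]] for (i, j) in pairwise))

configs = list(product((0, 1), repeat=3))
ground_state = min(configs, key=energy)  # MAP configuration / ground state
# Max-marginal of variable i at value v: best (minimum) energy with x_i clamped to v.
max_marginals = [[min(energy(x) for x in configs if x[i] == v) for v in (0, 1)]
                 for i in range(3)]
```

By construction, minimizing any variable's max-marginal over its two values recovers the ground-state energy; approximate algorithms are judged by how closely they reproduce these tables without enumeration.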