Belief Propagation for Linear Programming
Belief Propagation (BP) is a popular, distributed heuristic for performing
MAP computations in Graphical Models. BP can be interpreted, from a variational
perspective, as minimizing the Bethe Free Energy (BFE). BP can also be used to
solve a special class of Linear Programming (LP) problems. For this class of
problems, MAP inference can be stated as an integer LP with an LP relaxation
that coincides with minimization of the BFE at ``zero temperature''. We
generalize these prior results and establish a tight characterization of the LP
problems that can be formulated as an equivalent LP relaxation of MAP
inference. Moreover, we suggest an efficient, iterative annealing BP algorithm
for solving this broader class of LP problems. We demonstrate the algorithm's
performance on a set of weighted matching problems by using it as a cutting
plane method to solve a sequence of LPs tightened by adding ``blossom''
inequalities. Comment: To appear in ISIT 201
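The BP-for-matching connection described in this abstract can be illustrated with a minimal max-product (min-sum) sketch for maximum-weight bipartite matching, in the spirit of the classical Bayati-Shah-Sharma result that BP finds the optimum whenever it is unique. This is an illustrative sketch under that assumption, not the paper's annealing cutting-plane algorithm; all function names are ours.

```python
import itertools

def bp_max_weight_matching(w, iters=50):
    """Max-product BP for max-weight perfect bipartite matching.

    w[i][j] is the weight of pairing left node i with right node j.
    Message magnitudes need not converge, but the argmax decisions
    stabilize when the optimal matching is unique.
    """
    n = len(w)
    # m_lr[i][j]: message from left node i to right node j; m_rl the reverse.
    m_lr = [[0.0] * n for _ in range(n)]
    m_rl = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new_lr = [[w[i][j] - max(m_rl[k][i] for k in range(n) if k != j)
                   for j in range(n)] for i in range(n)]
        new_rl = [[w[i][j] - max(m_lr[l][j] for l in range(n) if l != i)
                   for i in range(n)] for j in range(n)]
        m_lr, m_rl = new_lr, new_rl
    # Each left node picks the right node with the largest incoming message.
    return [max(range(n), key=lambda j: m_rl[j][i]) for i in range(n)]

def brute_force_matching(w):
    """Exhaustive baseline: best permutation by total weight."""
    n = len(w)
    return max(itertools.permutations(range(n)),
               key=lambda p: sum(w[i][p[i]] for i in range(n)))

w = [[10.0, 1.0, 2.0],
     [1.0, 10.0, 1.0],
     [2.0, 1.0, 10.0]]
matching = bp_max_weight_matching(w)  # agrees with brute_force_matching(w) here
```

On instances with a unique optimum such as this diagonally dominant weight matrix, the BP decisions coincide with the exhaustive-search matching.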
Polynomial Linear Programming with Gaussian Belief Propagation
Interior-point methods are state-of-the-art algorithms for solving linear
programming (LP) problems with polynomial complexity. Specifically, the
Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where n
is the number of unknown variables. Karmarkar's celebrated algorithm is known
to be an instance of the log-barrier method using the Newton iteration. The
main computational overhead of this method is in inverting the Hessian matrix
of the Newton iteration. In this contribution, we propose the application of
the Gaussian belief propagation (GaBP) algorithm as part of an efficient and
distributed LP solver that exploits the sparse and symmetric structure of the
Hessian matrix and avoids the need for direct matrix inversion. This approach
shifts the computation from the realm of linear algebra to that of probabilistic
inference on graphical models, thus applying GaBP as an efficient inference
engine. Our construction is general and can be used for any interior-point
algorithm which uses the Newton method, including non-linear program solvers. Comment: 7 pages, 1 figure, appeared in the 46th Annual Allerton Conference on
Communication, Control and Computing, Allerton House, Illinois, Sept. 200
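The kernel this abstract builds on is solving a symmetric, walk-summable (e.g. diagonally dominant) linear system A x = b by Gaussian belief propagation instead of direct inversion. Below is a generic GaBP solver sketch in the standard natural-parameter formulation, not the authors' interior-point code; the names are ours.

```python
def gabp_solve(A, b, iters=100):
    """Gaussian belief propagation for A x = b, A symmetric and, e.g.,
    diagonally dominant. Returns the approximate solution vector.

    Tracks per-edge natural parameters of each message from node i to j:
    precision P[i][j] and potential h[i][j] (precision times mean).
    """
    n = len(b)
    P = [[0.0] * n for _ in range(n)]
    h = [[0.0] * n for _ in range(n)]
    nbrs = [[j for j in range(n) if j != i and A[i][j] != 0.0]
            for i in range(n)]
    for _ in range(iters):
        newP = [row[:] for row in P]
        newh = [row[:] for row in h]
        for i in range(n):
            for j in nbrs[i]:
                # Aggregate node i's own potential and all incoming
                # messages except the one from j.
                Pi = A[i][i] + sum(P[k][i] for k in nbrs[i] if k != j)
                hi = b[i] + sum(h[k][i] for k in nbrs[i] if k != j)
                newP[i][j] = -A[i][j] ** 2 / Pi
                newh[i][j] = -A[i][j] * hi / Pi
        P, h = newP, newh
    # Marginal mean of x_i: (potential sum) / (precision sum).
    return [(b[i] + sum(h[k][i] for k in nbrs[i])) /
            (A[i][i] + sum(P[k][i] for k in nbrs[i])) for i in range(n)]

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
x_true = [1.0, 2.0, 3.0]
b = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]
x = gabp_solve(A, b)
```

On this tridiagonal example the factor graph is a chain, so GaBP is exact; only the sparsity pattern of A is ever touched, which is the point of using it inside a Newton step.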
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods. Comment: 20 pages, 4 figures
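The feasibility issue this abstract addresses can be made concrete on the MRF local polytope: unary marginals mu_i and pairwise marginals mu_ij are feasible when they are nonnegative, normalized, and consistent (summing mu_ij over one variable recovers the unary marginal of the other). A naive repair, sketched below, renormalizes the unaries and takes outer products for the pairwise terms, which satisfies the constraints by construction; this is a baseline for illustration, not the projection-like method proposed in the paper.

```python
def repair_local_polytope(unaries, edges):
    """Turn arbitrary positive unary scores into a feasible point of the
    MRF local polytope.

    unaries: {node: [score per label]}; edges: list of (node, node) pairs.
    Returns normalized unary marginals and outer-product pairwise
    marginals, which satisfy the marginalization constraints exactly.
    """
    mu = {}
    for v, scores in unaries.items():
        z = sum(scores)
        mu[v] = [s / z for s in scores]
    mu_pair = {}
    for (u, v) in edges:
        mu_pair[(u, v)] = [[mu[u][a] * mu[v][b]
                            for b in range(len(mu[v]))]
                           for a in range(len(mu[u]))]
    return mu, mu_pair

# Infeasible (unnormalized) estimates, e.g. as produced from dual subgradients.
unaries = {0: [2.0, 1.0], 1: [1.0, 3.0]}
mu, mu_pair = repair_local_polytope(unaries, [(0, 1)])
```

The paper's contribution is a repair whose distance-to-optimum behavior resembles a Euclidean projection; the outer-product construction above only shows what the feasibility constraints require.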
Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization
Generalized Linear Models (GLMs), where a random vector is
observed through a noisy, possibly nonlinear, function of a linear transform
of that vector, arise in a range of applications in nonlinear
filtering and regression. Approximate Message Passing (AMP) methods, based on
loopy belief propagation, are a promising class of approaches for approximate
inference in these models. AMP methods are computationally simple, general, and
admit precise analyses with testable conditions for optimality for large i.i.d.
transforms. However, the algorithms can easily diverge for general
transforms. This paper presents a convergent approach to the generalized AMP
(GAMP) algorithm based on direct minimization of a large-system limit
approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a
double-loop procedure, where the outer loop successively linearizes the LSL-BFE
and the inner loop minimizes the linearized LSL-BFE using the Alternating
Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP,
is similar in structure to the original GAMP method, but with an additional
least-squares minimization. It is shown that for strictly convex, smooth
penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the
LSL-BFE, thus providing a convergent alternative to GAMP that is stable under
arbitrary transforms. Simulations are also presented that demonstrate the
robustness of the method for non-convex penalties as well.
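The inner ADMM loop of such a double-loop scheme can be illustrated on the smallest possible example: a one-dimensional l1-regularized quadratic, split as f(x) + g(z) subject to x = z. This is a generic scaled-form ADMM sketch, not ADMM-GAMP itself; the toy problem and names are ours.

```python
def admm_scalar_lasso(a, c, lam, rho=1.0, iters=100):
    """Minimize (a/2)*(x - c)**2 + lam*|x| via scaled-form ADMM.

    Splitting: f(x) = (a/2)*(x - c)**2, g(z) = lam*|z|, constraint x = z.
    The closed-form optimum is sign(c) * max(|c| - lam/a, 0).
    """
    x = z = u = 0.0
    for _ in range(iters):
        x = (a * c + rho * (z - u)) / (a + rho)  # prox of the quadratic
        v = x + u
        z = max(abs(v) - lam / rho, 0.0) * (1.0 if v >= 0 else -1.0)  # soft-threshold
        u += x - z  # scaled dual update: running sum of the residual
    return z

x_hat = admm_scalar_lasso(a=2.0, c=3.0, lam=1.0)  # optimum is 3 - 1/2 = 2.5
```

ADMM-GAMP wraps such an inner solve in an outer loop that successively re-linearizes the LSL-BFE objective; the sketch above only shows the ADMM mechanics of alternating two proximal steps with a dual update.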