
    Polynomial Linear Programming with Gaussian Belief Propagation

    Interior-point methods are state-of-the-art algorithms for solving linear programming (LP) problems with polynomial complexity. Specifically, the Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where n is the number of unknown variables. Karmarkar's celebrated algorithm is known to be an instance of the log-barrier method using the Newton iteration. The main computational overhead of this method lies in inverting the Hessian matrix of the Newton iteration. In this contribution, we propose the application of the Gaussian belief propagation (GaBP) algorithm as part of an efficient and distributed LP solver that exploits the sparse and symmetric structure of the Hessian matrix and avoids the need for direct matrix inversion. This approach shifts the computation from the realm of linear algebra to that of probabilistic inference on graphical models, applying GaBP as an efficient inference engine. Our construction is general and can be used for any interior-point algorithm that uses the Newton method, including non-linear program solvers.
    Comment: 7 pages, 1 figure, appeared in the 46th Annual Allerton Conference on Communication, Control and Computing, Allerton House, Illinois, Sept. 2008
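    The key step is replacing the direct Hessian inversion inside each Newton iteration with iterative message passing. Below is a minimal, illustrative Python sketch of synchronous GaBP for a sparse symmetric system A x = b (the form of a Newton step), assuming diagonal dominance for convergence; it is not the paper's distributed implementation, and the function name is hypothetical.

        # Minimal Gaussian belief propagation (GaBP) sketch for A x = b,
        # with A symmetric and diagonally dominant (a convergence condition).
        import numpy as np

        def gabp_solve(A, b, iters=100, tol=1e-9):
            n = len(b)
            nbrs = [[j for j in range(n) if j != i and A[i, j] != 0.0] for i in range(n)]
            P = np.zeros((n, n))   # P[i, j]: precision of the message i -> j
            Mu = np.zeros((n, n))  # Mu[i, j]: mean of the message i -> j
            for _ in range(iters):
                P_new, Mu_new = np.zeros_like(P), np.zeros_like(Mu)
                for i in range(n):
                    for j in nbrs[i]:
                        # combine local evidence with all incoming messages except j's
                        P_ex = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                        mu_ex = (b[i] + sum(P[k, i] * Mu[k, i] for k in nbrs[i] if k != j)) / P_ex
                        P_new[i, j] = -A[i, j] ** 2 / P_ex
                        Mu_new[i, j] = P_ex * mu_ex / A[i, j]
                done = np.allclose(Mu, Mu_new, atol=tol) and np.allclose(P, P_new, atol=tol)
                P, Mu = P_new, Mu_new
                if done:
                    break
            # the marginal means are the entries of the solution vector x
            Pi = np.array([A[i, i] + sum(P[k, i] for k in nbrs[i]) for i in range(n)])
            return np.array([(b[i] + sum(P[k, i] * Mu[k, i] for k in nbrs[i])) / Pi[i]
                             for i in range(n)])

        # Toy check on a small diagonally dominant system (a stand-in for a Newton step).
        A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(gabp_solve(A, b), np.linalg.solve(A, b))

    Each node only exchanges messages with its neighbors in the sparsity graph of A, which is what makes the scheme naturally distributable.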

    Learning Deep Structured Models

    Many problems in real-world applications involve predicting several random variables that are statistically related. Markov random fields (MRFs) are a powerful mathematical tool for encoding such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that learns structured models jointly with the deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm on the tasks of predicting words from noisy images and multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains.
    Comment: 11 pages including references
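    To illustrate what "blending learning and inference" means, here is a toy Python sketch (hypothetical, not the paper's GPU implementation): unary MRF potentials come from a learned feature layer, pairwise potentials from a matrix W, and each gradient step uses exact marginals, computed by enumeration over two output variables as a stand-in for approximate inference, so the features and the MRF parameters are updated jointly.

        import numpy as np

        rng = np.random.default_rng(0)
        K, d = 3, 5                                  # labels per output, input dim
        U = 0.01 * rng.standard_normal((K, d))       # feature layer -> unary potentials
        W = np.zeros((K, K))                         # pairwise MRF potentials

        def log_potentials(x):
            u = U @ x                                # unary scores, shared by both outputs
            return u[:, None] + u[None, :] + W       # S[a, b] = th1(a) + th2(b) + W[a, b]

        def marginals(S):                            # exact inference over K^2 joint states
            p = np.exp(S - S.max())
            return p / p.sum()

        def step(x, y1, y2, lr=0.1):                 # joint ascent on log p(y1, y2 | x)
            global U, W
            p = marginals(log_potentials(x))
            gW = -p; gW[y1, y2] += 1.0               # empirical minus model pairwise stats
            g1 = -p.sum(1); g1[y1] += 1.0            # same for each unary marginal
            g2 = -p.sum(0); g2[y2] += 1.0
            W += lr * gW
            U += lr * np.outer(g1 + g2, x)           # backpropagate into the feature layer

        Vtrue = rng.standard_normal((K, d))          # synthetic, perfectly correlated outputs
        for _ in range(2000):
            x = rng.standard_normal(d)
            y = int(np.argmax(Vtrue @ x))
            step(x, y, y)                            # y2 copies y1
        print(np.round(W, 2))                        # W should develop a strong diagonal

    Because the training data always has y1 = y2, the jointly learned pairwise term W picks up the output dependency while U learns the features, the same division of labor the paper exploits at scale.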

    Exactness of Belief Propagation for Some Graphical Models with Loops

    It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by the iterative Belief Propagation (BP) algorithm, which converges to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph the functional may have multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal Maximum-Likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed in \cite{05KW} and \cite{08BSS} in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient Linear Programming (LP) algorithm with a Totally Unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, g-BP, finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature g-BP outputs the ML solution. Our consideration is based on an equivalence established between the gapless Linear Programming (LP) relaxation of the graphical model in the T → 0 limit and the respective LP version of the Bethe free energy minimization.
    Comment: 12 pages, 1 figure, submitted to JSTAT
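    For reference, the functional involved takes the following standard textbook form for a pairwise model (generic notation, not copied from the paper; d_i denotes the degree of node i):

        F_{\mathrm{Bethe}}(\{b\}) =
            \sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\,
                \bigl[E_{ij}(x_i,x_j) + T\ln b_{ij}(x_i,x_j)\bigr]
          + \sum_{i\in V}\sum_{x_i} b_i(x_i)\,
                \bigl[E_i(x_i) - T\,(d_i-1)\ln b_i(x_i)\bigr]

    As T → 0 the entropy terms vanish, and minimizing F_{Bethe} subject to the local consistency constraints reduces to an LP over the beliefs; when the constraint matrix is TUM, every vertex of the feasible polytope is integral, which is why the relaxation is gapless and the g-BP minimum coincides with the ML solution.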

    On the exactness of the cavity method for Weighted b-Matchings on Arbitrary Graphs and its Relation to Linear Programs

    We consider the general problem of finding the minimum-weight b-matching on arbitrary graphs. We prove that, whenever the linear programming relaxation of the problem has no fractional solutions, the cavity or belief propagation equations converge to the correct solution for both synchronous and asynchronous updating.
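    The paper's condition can be checked directly on a given instance. Below is a small, hypothetical Python example (graph and weights made up) that solves the LP relaxation of a minimum-weight b-matching with scipy and tests whether the optimum is integral, i.e. whether the regime in which BP is proven correct applies:

        # LP relaxation of min-weight b-matching: minimize w . x subject to
        # sum_{e ~ i} x_e = b_i and 0 <= x_e <= 1, then check integrality.
        import numpy as np
        from scipy.optimize import linprog

        edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # K4
        w = np.array([2.0, 3.0, 1.0, 5.0, 4.0, 2.5])              # edge weights
        b = np.array([1.0, 1.0, 1.0, 1.0])                        # degree targets b_i

        A_eq = np.zeros((len(b), len(edges)))                     # node-edge incidence
        for e, (i, j) in enumerate(edges):
            A_eq[i, e] = A_eq[j, e] = 1.0

        res = linprog(w, A_eq=A_eq, b_eq=b, bounds=[(0, 1)] * len(edges), method="highs")
        x = res.x
        print("LP optimum:", x)
        print("integral (BP provably correct per the paper):",
              bool(np.all(np.isclose(x, np.round(x)))))

    On instances where the printout reports a fractional optimum (e.g. half-integral cycles on larger graphs), the theorem gives no guarantee, which is exactly the boundary the abstract draws.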

    A mean field method with correlations determined by linear response

    We introduce a new mean-field approximation based on the reconciliation of maximum entropy and linear response for correlations in the cluster variation method. Within a general formalism that includes previous mean-field methods, we derive formulas improving upon, e.g., the Bethe approximation and the Sessak-Monasson result at high temperature. Applying the method to direct and inverse Ising problems, we find improvements over standard implementations.
    Comment: 15 pages, 8 figures, 9 appendices; significant expansion on versions v1 and v2
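    To make the linear-response idea concrete, here is a hedged Python sketch on a small Ising model using the simplest (naive) mean-field equations rather than the paper's cluster-variation formalism: the susceptibility chi_ij = dm_i/dh_j, obtained by differentiating the fixed-point equations, supplies the correlations that the mean-field magnetizations alone would miss.

        # Naive mean field + linear response on a random Ising model.
        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 6, 0.5
        J = rng.standard_normal((n, n)) / np.sqrt(n)
        J = (J + J.T) / 2                         # symmetric couplings
        np.fill_diagonal(J, 0.0)
        h = 0.1 * rng.standard_normal(n)

        m = np.zeros(n)
        for _ in range(500):                      # damped fixed-point iteration for
            m = 0.5 * m + 0.5 * np.tanh(beta * (h + J @ m))   # m_i = tanh(beta(h_i + sum_j J_ij m_j))

        # Differentiating the fixed point w.r.t. h gives
        # chi = beta (I - beta D J)^{-1} D, with D = diag(1 - m_i^2).
        D = np.diag(1.0 - m ** 2)
        chi = beta * np.linalg.solve(np.eye(n) - beta * D @ J, D)
        C = chi / beta                            # fluctuation-dissipation: C = chi / beta
        print("magnetizations:", m)
        print("connected correlations:\n", C)

    The paper's construction refines this baseline within the cluster variation method; the sketch only shows the linear-response mechanism itself.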