
    Gaussian Belief with dynamic data and in dynamic network

    In this paper we analyse Belief Propagation over a Gaussian model in a dynamic environment. Recently, this has been proposed as a method to average local measurement values by a distributed protocol ("Consensus Propagation", Moallemi & Van Roy, 2006), where the average is available for read-out at every single node. In the case that the underlying network is constant but the values to be averaged fluctuate ("dynamic data"), convergence and accuracy are determined by the spectral properties of an associated Ruelle-Perron-Frobenius operator. For Gaussian models on Erdős-Rényi graphs, numerical computation points to a spectral gap remaining in the large-size limit, implying exceptionally good scalability. In a model where the underlying network also fluctuates ("dynamic network"), averaging is more effective than in the dynamic data case. Altogether, this implies very good performance of these methods in very large systems, and opens a new field of statistical physics of large (and dynamic) information systems. Comment: 5 pages, 7 figures
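    To make the averaging protocol concrete, below is a minimal sketch of Gaussian Belief Propagation used for distributed averaging in the spirit of Consensus Propagation: each node holds a measurement y[i], pairwise couplings of strength beta tie neighbouring estimates together, and for large beta the node beliefs approach the global average. The graph, beta, and iteration count are illustrative choices, not the paper's setup.

        import numpy as np

        # Small loopy graph: a 5-cycle with one chord.
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
        n = 5
        nbrs = [[] for _ in range(n)]
        for i, j in edges:
            nbrs[i].append(j)
            nbrs[j].append(i)

        rng = np.random.default_rng(0)
        y = rng.normal(size=n)      # local measurements to be averaged
        beta = 1e3                  # strong pairwise coupling: beliefs -> global mean

        # Gaussian BP messages from i to j: precision P and mean m.
        P = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
        m = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}

        for _ in range(500):
            newP, newm = {}, {}
            for i in range(n):
                for j in nbrs[i]:
                    # cavity belief at i, excluding the message coming back from j
                    Pc = 1.0 + sum(P[(k, i)] for k in nbrs[i] if k != j)
                    mc = (y[i] + sum(P[(k, i)] * m[(k, i)] for k in nbrs[i] if k != j)) / Pc
                    # pass it through the pairwise potential exp(-beta/2 (x_i - x_j)^2)
                    newP[(i, j)] = beta * Pc / (beta + Pc)
                    newm[(i, j)] = mc
            P, m = newP, newm

        beliefs = np.array([
            (y[i] + sum(P[(k, i)] * m[(k, i)] for k in nbrs[i]))
            / (1.0 + sum(P[(k, i)] for k in nbrs[i]))
            for i in range(n)
        ])
        print("node beliefs:", np.round(beliefs, 3))
        print("true average:", round(float(y.mean()), 3))

    The precision matrix of this toy model (identity plus beta times the graph Laplacian) is diagonally dominant, so the loopy iteration converges and the read-out means are the exact posterior means, which tend to the global average as beta grows.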

    IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules

    The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users must understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability. Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach, called MLIC, was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of state-of-the-art black-box classifiers while generating small interpretable CNF formulas, the runtime performance of MLIC lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: Is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances? In this paper, we take a step towards answering the above question in the affirmative. We propose IMLI, an incremental MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy and interpretability. Comment: 10 pages, published in the proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019).
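    As a rough illustration of reducing rule learning to a MaxSAT query, the sketch below learns a single disjunctive clause (a drastic simplification of MLIC's full CNF encoding) over a toy dataset, using the PySAT package and its RC2 MaxSAT solver, both assumed to be installed. Selector variables b_j pick features into the clause, noise variables eta_i pay a penalty lam for misclassified samples, and a unit penalty on each selected feature keeps the rule small; the data, weights, and variable layout are purely illustrative.

        from pysat.formula import WCNF
        from pysat.examples.rc2 import RC2

        # Toy data: 4 binary features, label = feature0 OR feature2 (with one noisy row).
        X = [
            [1, 0, 0, 1],
            [0, 1, 1, 0],
            [0, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 1, 0, 0],
            [0, 0, 0, 1],   # noisy positive: neither informative feature is set
        ]
        y = [1, 1, 1, 0, 1, 1]
        n_samples, n_feats = len(X), len(X[0])
        lam = 5  # relative weight of classification errors vs. rule size

        # Variable numbering: b_j = 1..n_feats (feature j is in the clause),
        #                     eta_i = n_feats+1.. (sample i allowed to be misclassified).
        b = lambda j: j + 1
        eta = lambda i: n_feats + i + 1

        wcnf = WCNF()
        for i, (row, label) in enumerate(zip(X, y)):
            active = [j for j in range(n_feats) if row[j] == 1]
            if label == 1:
                # unless eta_i is paid, some active feature must be in the clause
                wcnf.append([eta(i)] + [b(j) for j in active])
            else:
                # unless eta_i is paid, no active feature may be in the clause
                for j in active:
                    wcnf.append([eta(i), -b(j)])
            wcnf.append([-eta(i)], weight=lam)      # soft: classify sample i correctly
        for j in range(n_feats):
            wcnf.append([-b(j)], weight=1)          # soft: keep the rule small

        with RC2(wcnf) as solver:
            model = solver.compute()
        chosen = [j for j in range(n_feats) if model[b(j) - 1] > 0]
        print("learned clause: OR of features", chosen)

    IMLI's contribution, by contrast, is to make such queries scale by training incrementally over partitions of the data so that each individual MaxSAT instance stays small.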

    Cavity Approach to the Spectral Density of Sparse Symmetric Random Matrices

    The spectral density of various ensembles of sparse symmetric random matrices is analyzed using the cavity method. We consider two cases: matrices whose associated graphs are locally tree-like, and sparse covariance matrices. We derive a closed set of equations from which the density of eigenvalues can be efficiently calculated. Within this approach, the Wigner semicircle law for Gaussian matrices and the Marcenko-Pastur law for covariance matrices are recovered easily. Our results are compared with numerical diagonalization, finding excellent agreement. Comment: 7 pages, 6 figures
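    A minimal single-instance sketch of the cavity recursion for the resolvent, in the standard form omega_{i->j} = z - sum_{k in di\j} A_ik^2 / omega_{k->i} at z = lambda - i*eps, with the density read off as rho(lambda) ~ (1/(N*pi)) sum_i Im G_ii. The ensemble, size, regulariser eps, and sweep count below are illustrative and not necessarily the paper's parametrization.

        import numpy as np

        rng = np.random.default_rng(1)
        N, c = 300, 4.0                      # matrix size, mean degree of the sparse graph
        A = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1, N):
                if rng.random() < c / N:     # Erdos-Renyi edge with Gaussian weight
                    A[i, j] = A[j, i] = rng.normal() / np.sqrt(c)

        nbrs = [list(map(int, np.flatnonzero(A[i]))) for i in range(N)]
        eps = 5e-3                           # small imaginary regulariser

        def density(lam, n_iter=100):
            z = lam - 1j * eps
            # cavity messages omega[(i, j)]: resolvent at node i with edge to j removed
            omega = {(i, j): z for i in range(N) for j in nbrs[i]}
            for _ in range(n_iter):
                omega = {(i, j): z - sum(A[i, k] ** 2 / omega[(k, i)]
                                         for k in nbrs[i] if k != j)
                         for i in range(N) for j in nbrs[i]}
            G_diag = np.array([
                1.0 / (z - sum(A[i, k] ** 2 / omega[(k, i)] for k in nbrs[i]))
                for i in range(N)
            ])
            return float(G_diag.imag.mean()) / np.pi

        for lam in (0.0, 0.5, 1.0, 1.5, 2.0):
            print(f"lambda = {lam:4.1f}   rho ~ {density(lam):.4f}")

    The same estimates can be checked against a direct numerical diagonalization of A, as the paper does for its ensembles.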

    Belief Propagation for Min-Cost Network Flow: Convergence and Correctness

    Distributed, iterative algorithms operating with minimal data structures while performing little computation per iteration are popularly known as message passing in the recent literature. Belief propagation (BP), a prototypical message-passing algorithm, has gained a lot of attention across disciplines, including communications, statistics, signal processing, and machine learning, as an attractive, scalable, general-purpose heuristic for a wide class of optimization and statistical inference problems. Despite its empirical success, the theoretical understanding of BP is far from complete. With the goal of advancing the state of the art of our understanding of BP, we study the performance of BP in the context of the capacitated minimum-cost network flow problem, a cornerstone in the development of the theory of polynomial-time algorithms for optimization problems and widely used in the practice of operations research. As the main result of this paper, we prove that BP converges to the optimal solution in pseudopolynomial time, provided that the optimal solution of the underlying network flow problem instance is unique and the problem parameters are integral. We further provide a simple modification of BP to obtain a fully polynomial-time randomized approximation scheme (FPRAS) without requiring uniqueness of the optimal solution. This is the first instance where BP is proved to have fully polynomial running time. Our results thus provide a theoretical justification for the viability of BP as an attractive method to solve an important class of optimization problems.
    Funding: National Science Foundation (U.S.) CAREER Project (CNS-0546590); Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship; National Science Foundation (U.S.) EMT Project (CCF-0829893); National Science Foundation (U.S.) (CMMI-0726733).
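    For concreteness, here is a tiny capacitated min-cost network flow instance with integral costs and capacities, the regime to which the convergence result applies. It is solved below with networkx's standard min_cost_flow routine (assumed available) rather than with BP, purely to pin down the optimization problem being analysed; the specific graph and numbers are illustrative.

        import networkx as nx

        # A small capacitated min-cost flow instance with integral costs and capacities.
        G = nx.DiGraph()
        G.add_node("s", demand=-4)          # supply 4 units
        G.add_node("t", demand=4)           # demand 4 units
        G.add_edge("s", "a", capacity=3, weight=2)
        G.add_edge("s", "b", capacity=2, weight=4)
        G.add_edge("a", "b", capacity=2, weight=1)
        G.add_edge("a", "t", capacity=2, weight=6)
        G.add_edge("b", "t", capacity=4, weight=3)

        flow = nx.min_cost_flow(G)          # classic combinatorial solver, not BP
        print(flow)
        print("cost:", nx.cost_of_flow(G, flow))

    The paper's result says that min-sum BP run on the factor graph of such an instance reaches this same optimum in pseudopolynomial time whenever the optimum is unique.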

    Fermions and Loops on Graphs. I. Loop Calculus for Determinant

    This paper is the first in a series devoted to evaluation of the partition function in statistical models on graphs with loops in terms of Berezin/fermion integrals. The paper focuses on a representation of the determinant of a square matrix in terms of a finite series, where each term corresponds to a loop on the graph. The representation is based on a fermion version of the Loop Calculus, previously introduced by the authors for graphical models with finite alphabets. Our construction contains two levels. First, we represent the determinant in terms of an integral over anti-commuting Grassmann variables, with some reparametrization/gauge freedom hidden in the formulation. Second, we show that a special choice of the gauge, called the BP (Bethe-Peierls or Belief Propagation) gauge, yields the desired loop representation. The set of gauge-fixing BP conditions is equivalent to the Gaussian BP equations, discussed in the past as efficient (linear scaling) heuristics for estimating the covariance of a sparse positive matrix. Comment: 11 pages, 1 figure; misprints corrected
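    The heuristic mentioned at the end, Gaussian BP for estimating the covariances (the diagonal of the inverse) of a sparse positive-definite matrix in roughly linear time, can be sketched as follows. This is standard Gaussian BP on a diagonally dominant test matrix, not the paper's fermionic/loop construction, and on loopy graphs the returned variances are approximations; the matrix ensemble and iteration count are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        N, c = 200, 3.0
        J = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1, N):
                if rng.random() < c / N:
                    J[i, j] = J[j, i] = 0.3 * rng.normal()
        # make J strictly diagonally dominant, hence positive-definite (GaBP then converges)
        np.fill_diagonal(J, np.abs(J).sum(axis=1) + 1.0)

        nbrs = [[j for j in range(N) if j != i and J[i, j] != 0.0] for i in range(N)]

        # precision part of the Gaussian BP messages, P[(i, j)] = message from i to j
        P = {(i, j): 0.0 for i in range(N) for j in nbrs[i]}
        for _ in range(100):
            P = {(i, j): -J[i, j] ** 2 / (J[i, i] + sum(P[(k, i)] for k in nbrs[i] if k != j))
                 for i in range(N) for j in nbrs[i]}

        # GaBP variance estimates vs. the exact diagonal of the inverse
        var_bp = np.array([1.0 / (J[i, i] + sum(P[(k, i)] for k in nbrs[i])) for i in range(N)])
        var_exact = np.diag(np.linalg.inv(J))
        print("max relative error of GaBP variances:",
              float(np.max(np.abs(var_bp - var_exact) / var_exact)))

    The update above costs a constant amount of work per edge and iteration, which is the linear-scaling behaviour the abstract alludes to.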

    Exactness of Belief Propagation for Some Graphical Models with Loops

    It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by an iterative Belief Propagation (BP) algorithm convergent to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph the functional may show multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal Maximum-Likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed in \cite{05KW} and \cite{08BSS} in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient Linear Programming (LP) algorithm with a Totally Unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, g-BP, finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature g-BP outputs the ML solution. Our consideration is based on an equivalence established between the gapless Linear Programming (LP) relaxation of the graphical model in the $T \to 0$ limit and the respective LP version of the Bethe free energy minimization. Comment: 12 pages, 1 figure, submitted to JSTAT
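    The TUM property at the heart of this argument can be illustrated numerically (SciPy assumed available): the constraint matrix of a small assignment problem is the incidence matrix of a bipartite graph, hence totally unimodular, so the plain LP relaxation already returns a 0/1 vertex solution without any integrality constraints. No BP is involved in this snippet; the instance is illustrative.

        import numpy as np
        from scipy.optimize import linprog

        # 4x4 assignment problem; its constraint matrix is totally unimodular.
        rng = np.random.default_rng(3)
        n = 4
        cost = rng.integers(1, 10, size=(n, n))

        # variable x[i, j] flattened to index i*n + j
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            for j in range(n):
                A_eq[i, i * n + j] = 1          # each left node i assigned exactly once
                A_eq[n + j, i * n + j] = 1      # each right node j assigned exactly once
        b_eq = np.ones(2 * n)

        res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
        x = res.x.reshape(n, n)
        print(np.round(x, 6))                   # a 0/1 vertex of the assignment polytope
        print("LP optimum:", res.fun, " integral:", bool(np.allclose(x, np.round(x))))

    The paper's g-BP result concerns exactly this situation: when the zero-temperature limit of the graphical model reduces to a gapless LP with a TUM constraint matrix, the Bethe free energy minimizer coincides with the integral ML solution.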