    Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models

    The cluster variation method (CVM) is a hierarchy of approximate variational techniques for discrete (Ising-like) models in equilibrium statistical mechanics, improving on the mean-field approximation and the Bethe-Peierls approximation, which can be regarded as the lowest level of the CVM. In recent years it has been applied both in statistical physics and to inference and optimization problems formulated in terms of probabilistic graphical models. The foundations of the CVM are briefly reviewed, and the relations with similar techniques are discussed. The main properties of the method are considered, with emphasis on its exactness for particular models and on its asymptotic properties. The problem of the minimization of the variational free energy, which arises in the CVM, is also addressed, and recent results about both provably convergent and message-passing algorithms are discussed. (36 pages, 17 figures)
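    As a minimal illustration of the lowest rung of this hierarchy (a sketch, not from the paper; all parameter values are arbitrary choices for the demonstration), the code below minimizes the mean-field variational free energy of the zero-field square-lattice Ising ferromagnet by iterating the self-consistency condition m = tanh(beta * z * J * m) with coordination number z = 4.

```python
# Illustrative sketch only: mean-field self-consistency for the Ising model,
# the crudest member of the CVM hierarchy. Values of J and T are arbitrary.
import numpy as np

def mean_field_magnetization(beta, J=1.0, z=4, tol=1e-12, max_iter=10_000):
    """Iterate m = tanh(beta * z * J * m) to a fixed point."""
    m = 0.5  # symmetry-broken initial guess
    for _ in range(max_iter):
        m_new = np.tanh(beta * z * J * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Mean-field predicts spontaneous magnetization below T_c = z * J (k_B = 1).
for T in (5.0, 4.5, 3.0, 2.0):
    print(f"T = {T:.1f}  m = {mean_field_magnetization(1.0 / T):.4f}")
```

    Higher CVM levels replace this single-site estimate with a variational free energy built from larger clusters (pairs, plaquettes, ...), which is where the improvements over mean field come from.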

    Loopy belief propagation and probabilistic image processing

    Estimation of hyperparameters by maximization of the marginal likelihood in probabilistic image processing is investigated using the cluster variation method. The resulting algorithms are substantially equivalent to generalized loopy belief propagation.
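    To make the connection concrete, here is a hedged sketch (not the paper's algorithm) of sum-product loopy belief propagation on a small binary-image MRF. The smoothness coupling beta and data-fidelity strength gamma play the role of the hyperparameters the paper estimates by maximizing the marginal likelihood; here they are simply fixed by hand.

```python
# Illustrative sketch only: sum-product loopy BP for denoising a binary
# image modeled as a pairwise MRF on a grid. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
truth = np.zeros((H, W), dtype=int)
truth[4:12, 4:12] = 1                                          # clean image: a square
noisy = np.where(rng.random((H, W)) < 0.2, 1 - truth, truth)   # flip 20% of pixels

beta = 1.0      # smoothness coupling (hyperparameter, fixed by hand here)
gamma = 1.2     # data-fidelity strength (hyperparameter, fixed by hand here)
pairwise = np.exp(beta * (2 * np.eye(2) - 1))                  # psi(x_i, x_j)
unary = np.empty((H, W, 2))
for v in (0, 1):
    unary[..., v] = np.exp(gamma * np.where(noisy == v, 1, -1))

def neighbors(i, j):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield (i + di, j + dj)

# messages[(src, dst)] is a length-2 vector over the states of dst
messages = {((i, j), n): np.ones(2) / 2
            for i in range(H) for j in range(W)
            for n in neighbors(i, j)}

for _ in range(30):                     # parallel message updates
    new = {}
    for (src, dst) in messages:
        belief = unary[src].copy()
        for n in neighbors(*src):
            if n != dst:
                belief *= messages[(n, src)]
        msg = pairwise.T @ belief       # sum over the source variable
        new[(src, dst)] = msg / msg.sum()
    messages = new

# Beliefs and a MAP-like estimate from the marginals
est = np.empty_like(noisy)
for i in range(H):
    for j in range(W):
        b = unary[i, j].copy()
        for n in neighbors(i, j):
            b *= messages[(n, (i, j))]
        est[i, j] = int(b.argmax())

print("noisy errors:", int((noisy != truth).sum()),
      " BP errors:", int((est != truth).sum()))
```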

    Cluster Expansion Method for Evolving Weighted Networks Having Vector-like Nodes

    The Cluster Variation Method, known in statistical mechanics and condensed matter physics, is revived for weighted bipartite networks. The decomposition of a Hamiltonian into a finite number of components, which then serve to define variable clusters, is recalled. As an illustration, we discuss the network built from data representing correlations between four macro-economic features, the so-called vector components, of 15 EU countries taken as (function) nodes. We show that statistical physics principles, like the maximum entropy criterion, point to clusters, here in a four-variable phase space: Gross Domestic Product (GDP), Final Consumption Expenditure (FCE), Gross Capital Formation (GCF) and Net Exports (NEX). It is observed that the maximum entropy corresponds to a cluster which does not explicitly include the GDP but only the other three "axes", i.e. the consumption, investment and trade components. On the other hand, the minimal entropy clustering scheme is obtained from a coupling that necessarily includes GDP and FCE. The results confirm intuitive expectations from economic theory and practice, at least as regards geographical connections. The technique can of course be applied to many other cases in the physics of socio-economic networks. (7 pages, 2 figures, 20 references, 3 tables; submitted to the FENS 07 proceedings)
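    To make the entropy-based ranking of clusters concrete, here is a hypothetical sketch: the correlation matrix below is a made-up placeholder (not the paper's data for the 15 EU countries), and the four three-variable clusters are ranked by the differential entropy of a Gaussian with the corresponding correlation submatrix, one plausible reading of the maximum/minimum entropy criterion.

```python
# Illustrative sketch only: ranking equal-size variable clusters by Gaussian
# differential entropy. The correlation matrix is a made-up placeholder,
# NOT the paper's data.
from itertools import combinations
import numpy as np

labels = ["GDP", "FCE", "GCF", "NEX"]
corr = np.array([[1.00, 0.85, 0.60, 0.30],
                 [0.85, 1.00, 0.55, 0.25],
                 [0.60, 0.55, 1.00, 0.20],
                 [0.30, 0.25, 0.20, 1.00]])   # hypothetical values

def gaussian_entropy(cluster):
    """Differential entropy (nats) of a zero-mean Gaussian with the given
    correlation submatrix: S = 0.5 * log((2*pi*e)^k * det(Sigma))."""
    idx = list(cluster)
    sigma = corr[np.ix_(idx, idx)]
    k = len(idx)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(sigma)))

clusters = list(combinations(range(4), 3))    # compare clusters of equal size
ranked = sorted(clusters, key=gaussian_entropy)
print("min-entropy cluster:", [labels[i] for i in ranked[0]])
print("max-entropy cluster:", [labels[i] for i in ranked[-1]])
```

    With this placeholder matrix the strongly coupled {GDP, FCE, GCF} triple has the lowest entropy and the triple excluding GDP the highest, qualitatively mirroring the finding reported in the abstract.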

    Loop-corrected belief propagation for lattice spin models

    Belief propagation (BP) is a message-passing method for solving probabilistic graphical models. It is very successful in treating disordered models (such as spin glasses) on random graphs. On the other hand, finite-dimensional lattice models have an abundant number of short loops, and the BP method is still far from being satisfactory in treating the complicated loop-induced correlations in these systems. Here we propose a loop-corrected BP method to take into account the effect of short loops in lattice spin models. We demonstrate, through an application to the square-lattice Ising model, that loop-corrected BP improves over the naive BP method significantly. We also implement loop-corrected BP at the coarse-grained region graph level to further boost its performance. (11 pages; minor changes with new references added; final version as published in EPJ)
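    To see why naive BP is unsatisfactory on lattices, the sketch below (an illustration, not the paper's loop-corrected scheme) iterates the homogeneous Bethe cavity fixed point for the square-lattice Ising ferromagnet. Because short loops are ignored, BP places the critical coupling at atanh(1/3) ≈ 0.3466, noticeably below Onsager's exact value ln(1 + sqrt(2))/2 ≈ 0.4407.

```python
# Illustrative sketch only: plain BP on the square-lattice Ising ferromagnet
# reduces to the Bethe cavity fixed point
#     u = atanh(tanh(beta*J) * tanh((z-1)*u)),
# with cavity fields u expressed in units where beta is absorbed.
import numpy as np

J, z = 1.0, 4

def bp_magnetization(beta, iters=5000):
    u = 0.5                                   # symmetry-broken initial cavity bias
    for _ in range(iters):
        u = np.arctanh(np.tanh(beta * J) * np.tanh((z - 1) * u))
    return np.tanh(z * u)                     # site magnetization

beta_c_bethe = np.arctanh(1.0 / (z - 1))      # BP/Bethe prediction: ~0.3466
beta_c_exact = 0.5 * np.log(1 + np.sqrt(2))   # Onsager's exact value: ~0.4407
print(f"Bethe beta_c = {beta_c_bethe:.4f}, exact beta_c = {beta_c_exact:.4f}")
for beta in (0.30, 0.36, 0.40, 0.45):
    print(f"beta = {beta:.2f}  m_BP = {bp_magnetization(beta):.4f}")
```

    The spurious magnetization BP reports in the window between the two critical points is exactly the kind of loop-induced error that loop corrections and region-graph (CVM-level) schemes aim to repair.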

    Deep learning systems as complex networks

    Thanks to the availability of large scale digital datasets and massive amounts of computational power, deep learning algorithms can learn representations of data by exploiting multiple levels of abstraction. These machine learning methods have greatly improved the state-of-the-art in many challenging cognitive tasks, such as visual object recognition, speech processing, natural language understanding and automatic translation. In particular, one class of deep learning models, known as deep belief networks, can discover intricate statistical structure in large data sets in a completely unsupervised fashion, by learning a generative model of the data using Hebbian-like learning mechanisms. Although these self-organizing systems can be conveniently formalized within the framework of statistical mechanics, their internal functioning remains opaque, because their emergent dynamics cannot be solved analytically. In this article we propose to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain some insights into the structural and functional properties of the computational graph resulting from the learning process. (20 pages, 9 figures)
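    As a toy version of this program (random placeholder weights, not a trained network, and layer sizes chosen arbitrarily), the sketch below treats one RBM layer of a deep belief network as a weighted bipartite graph and computes simple complex-network observables such as node strength and thresholded degree.

```python
# Illustrative sketch only: viewing one layer of a deep belief network as a
# weighted bipartite graph and probing simple complex-network observables.
# The RBM weights here are random placeholders, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 784, 500
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))   # hypothetical weights

# Node strength: sum of absolute edge weights incident on each unit.
strength_visible = np.abs(W).sum(axis=1)
strength_hidden = np.abs(W).sum(axis=0)

# Binarized degree after pruning weak edges (threshold is arbitrary).
threshold = 0.2
degree_hidden = (np.abs(W) > threshold).sum(axis=0)

print("mean visible strength:", strength_visible.mean().round(2))
print("mean hidden strength :", strength_hidden.mean().round(2))
print("hidden degree range  :", int(degree_hidden.min()), "-",
      int(degree_hidden.max()))
```

    On a trained network, the same observables (strength and degree distributions, and how they evolve during learning) are the kind of structural fingerprints the complex-networks toolbox is designed to expose.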