Sufficient conditions for convergence of the Sum-Product Algorithm
We derive novel conditions that guarantee convergence of the Sum-Product
algorithm (also known as Loopy Belief Propagation or simply Belief Propagation)
to a unique fixed point, irrespective of the initial messages. The
computational complexity of the conditions is polynomial in the number of
variables. In contrast with previously existing conditions, our results are
directly applicable to arbitrary factor graphs (with discrete variables) and
are shown, under some additional conditions, to remain valid even for factors
containing zeros. We compare our bounds with existing ones, numerically
and, if possible, analytically. For binary variables with pairwise
interactions, we derive sufficient conditions that take into account local
evidence (i.e., single variable factors) and the type of pair interactions
(attractive or repulsive). It is shown empirically that this bound outperforms
existing bounds.

Comment: 15 pages, 5 figures. Major changes and new results in this revised
version. Submitted to IEEE Transactions on Information Theory
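The sum-product updates whose convergence the paper analyzes can be sketched in a few lines. The following is a minimal, hypothetical example, not the paper's construction: a binary pairwise model on a single loop (a triangle), with illustrative local evidence, a weakly attractive pair potential, and an arbitrary convergence tolerance.

```python
import numpy as np

# Sketch of synchronous sum-product (loopy BP) on a small binary pairwise
# model. Graph, potentials, and tolerance are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 0)]                         # one loop (triangle)
phi = np.array([[0.6, 0.4], [0.5, 0.5], [0.7, 0.3]])     # local evidence
psi = np.array([[1.2, 0.8], [0.8, 1.2]])                 # weak attractive pair

neighbors = {i: [] for i in range(3)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# messages[(i, j)] is the message from variable i to variable j over x_j
messages = {(i, j): np.ones(2) / 2 for i in neighbors for j in neighbors[i]}

for sweep in range(200):
    new = {}
    for (i, j) in messages:
        prod = phi[i].copy()                 # local factor at i ...
        for k in neighbors[i]:
            if k != j:                       # ... times incoming messages
                prod *= messages[(k, i)]
        out = psi.T @ prod                   # sum over x_i of psi(x_i, x_j)
        new[(i, j)] = out / out.sum()        # normalize for stability
    delta = max(np.abs(new[e] - messages[e]).max() for e in messages)
    messages = new
    if delta < 1e-9:                         # reached a fixed point
        break

# approximate (BP) marginal of variable 0
b0 = phi[0] * messages[(1, 0)] * messages[(2, 0)]
b0 /= b0.sum()
```

With interactions this weak, the updates contract quickly and the messages reach a unique fixed point regardless of initialization, which is the regime the sufficient conditions characterize.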
Truncating the loop series expansion for Belief Propagation
Recently, M. Chertkov and V.Y. Chernyak derived an exact expression for the
partition sum (normalization constant) corresponding to a graphical model,
which is an expansion around the Belief Propagation solution. By adding
correction terms to the BP free energy, one for each "generalized loop" in the
factor graph, the exact partition sum is obtained. However, the typically
enormous number of generalized loops prohibits summation over all
correction terms. In this article we introduce Truncated Loop Series BP
(TLSBP), a particular way of truncating the loop series of M. Chertkov and V.Y.
Chernyak by considering generalized loops as compositions of simple loops. We
analyze the performance of TLSBP in different scenarios, including the Ising
model, regular random graphs, and Promedas, a large probabilistic medical
diagnostic system. We show that TLSBP often improves upon the accuracy of the
BP solution, at the expense of increased computation time. We also show that
the performance of TLSBP strongly depends on the degree of interaction between
the variables. For weak interactions, truncating the series leads to
significant improvements, whereas for strong interactions it can be
ineffective, even when a large number of terms is included.

Comment: 31 pages, 12 figures, submitted to Journal of Machine Learning
Research
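The truncation idea can be illustrated on a toy graph: enumerate the simple loops and keep only those up to a cutoff length; in the actual method, each retained generalized loop contributes a correction term to the BP estimate of the partition sum. The graph, cutoff, and deduplication-by-vertex-set below are assumptions adequate for this small example, not the paper's algorithm.

```python
# Hypothetical sketch of loop-series truncation: keep only short simple loops.
def simple_cycles(adj):
    """Yield each simple cycle of an undirected graph once, as a vertex tuple."""
    seen = set()
    def dfs(start, node, path):
        for nxt in adj[node]:
            if nxt == start and len(path) >= 3:
                key = frozenset(path)        # dedup by vertex set (fine here)
                if key not in seen:
                    seen.add(key)
                    yield tuple(path)
            elif nxt not in path and nxt > start:   # avoid re-finding cycles
                yield from dfs(start, nxt, path + [nxt])
    for v in adj:
        yield from dfs(v, v, [v])

# a 4-node graph with two triangles and one 4-cycle (illustrative)
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
cutoff = 3                                   # truncation: shortest loops only
kept = [c for c in simple_cycles(adj) if len(c) <= cutoff]
```

As the abstract notes, whether such a truncation helps depends on the interaction strength: the discarded long loops carry little weight when interactions are weak, but can dominate when they are strong.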
Modeling the structure and evolution of discussion cascades
We analyze the structure and evolution of discussion cascades in four popular
websites: Slashdot, Barrapunto, Meneame and Wikipedia. Despite the large
heterogeneity across these sites, a preferential attachment (PA) model with
bias to the root can capture the temporal evolution of the observed trees and
many of their statistical properties, namely, probability distributions of the
branching factors (degrees), subtree sizes and certain correlations. The
parameters of the model are learned efficiently using a novel maximum
likelihood estimation scheme for PA and provide a figurative interpretation
about the communication habits and the resulting discussion cascades on the
four different websites.Comment: 10 pages, 11 figure
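The generative side of such a model can be sketched as a growing reply tree: each new comment attaches to an existing one with probability proportional to its popularity, with extra weight on the root post. The weighting scheme, `root_bias` parameter, and sizes below are illustrative assumptions, not the estimator or the fitted parameters from the paper.

```python
import random

# Illustrative simulation of a preferential-attachment reply tree with a
# bias toward the root. Weights and parameters are assumptions.
def grow_cascade(n_nodes, root_bias, rng):
    parents = [None]                 # node 0 is the root post
    degree = [0]                     # number of direct replies per node
    for new in range(1, n_nodes):
        # weight = replies + 1 (so childless comments can still attract
        # replies), plus an extra bias for the root post
        weights = [degree[i] + 1 + (root_bias if i == 0 else 0)
                   for i in range(new)]
        parent = rng.choices(range(new), weights=weights)[0]
        parents.append(parent)
        degree[parent] += 1
        degree.append(0)
    return parents, degree

rng = random.Random(0)
parents, degree = grow_cascade(500, root_bias=5.0, rng=rng)
```

Fitting the model then amounts to choosing the attachment parameters that maximize the likelihood of the observed attachment sequence, which is what the paper's estimation scheme does efficiently.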