Iterative Maximum Likelihood on Networks
We consider n agents located on the vertices of a connected graph. Each agent
v receives a signal X_v(0)~N(s, 1) where s is an unknown quantity. A natural
iterative way of estimating s is to perform the following procedure. At
iteration t + 1 let X_v(t + 1) be the average of X_v(t) and of X_w(t) among all
the neighbors w of v.
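The baseline averaging procedure above can be sketched as follows; the graph, signal values, and function names are illustrative assumptions, not from the paper:

```python
# Minimal sketch of simple iterative averaging on a graph (illustrative
# only): X_v(t+1) is the average of X_v(t) and X_w(t) over neighbors w.
import random

def averaging_step(estimates, adjacency):
    """One round of averaging; adjacency maps each vertex to its neighbors."""
    return {
        v: (estimates[v] + sum(estimates[w] for w in nbrs)) / (1 + len(nbrs))
        for v, nbrs in adjacency.items()
    }

# Path graph on 3 vertices; each signal X_v(0) ~ N(s, 1), s unknown.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
random.seed(0)
s = 5.0
estimates = {v: random.gauss(s, 1.0) for v in adjacency}
for _ in range(50):
    estimates = averaging_step(estimates, adjacency)
```

Because the update matrix is row-stochastic with a positive diagonal and the graph is connected, all estimates contract to a common limit.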
In this paper we consider a variant of simple iterative averaging, which
models "greedy" behavior of the agents. At iteration t, each agent v declares
the value of its estimator X_v(t) to all of its neighbors. Then, it updates
X_v(t + 1) by taking the maximum likelihood (or minimum variance) estimator of
s, given X_v(t) and X_w(t) for all neighbors w of v, and the structure of the
graph.
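The minimum-variance step can be illustrated in isolation. For jointly Gaussian unbiased estimates x with known covariance C, the minimum-variance unbiased linear combination is w^T x with w = C^{-1} 1 / (1^T C^{-1} 1); this is a standard fact, and the concrete numbers below are made up for illustration, not taken from the paper:

```python
# Sketch of a minimum-variance (maximum-likelihood) combination of unbiased
# Gaussian estimates with known covariance C: w = C^{-1} 1 / (1^T C^{-1} 1).
import numpy as np

def min_variance_combine(x, cov):
    """Return the minimum-variance unbiased linear combination of x."""
    ones = np.ones(len(x))
    w = np.linalg.solve(cov, ones)   # C^{-1} 1
    w /= ones @ w                    # normalize so weights sum to 1
    return w @ x

# Two independent estimates of s with variances 1 and 2: the better
# estimate gets weight 2/3, the worse one weight 1/3.
x = np.array([4.0, 6.0])
cov = np.diag([1.0, 2.0])
est = min_variance_combine(x, cov)   # 4*(2/3) + 6*(1/3) = 14/3
```

In the paper's process each agent applies such a combination to its own and its neighbors' current estimates, using the graph structure to determine their joint covariance.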
We give an explicit efficient procedure for calculating X_v(t), study the
convergence of the process as t goes to infinity and show that if the limit
exists then it is the same for all v and w. For graphs that are symmetric under
actions of transitive groups, we show that the process is efficient. Finally,
we show that the greedy process is in some cases more efficient than simple
averaging, while in other cases the converse is true, so that, in this model,
"greed" of the individual agents may or may not have an adverse effect on the
outcome.
The model discussed here may be viewed as the Maximum-Likelihood version of
models studied in Bayesian Economics. The ML variant is more accessible and,
in particular, allows one to show the significance of symmetry in the
efficiency of estimators using networks of agents.
Comment: 13 pages, two figures
Distributed learning of Gaussian graphical models via marginal likelihoods
We consider distributed estimation of the inverse covariance matrix, also called the concentration matrix, in Gaussian graphical models. Traditional centralized estimation often requires iterative and expensive global inference and is therefore difficult in large distributed networks. In this paper, we propose a general framework for distributed estimation based on a maximum marginal likelihood (MML) approach. Each node independently computes a local estimate by maximizing a marginal likelihood defined with respect to data collected from its local neighborhood. Due to the non-convexity of the MML problem, we derive and consider solving a convex relaxation. The local estimates are then combined into a global estimate without the need for iterative message-passing between neighborhoods. We prove that this relaxed MML estimator is asymptotically consistent. Through numerical experiments on several synthetic and real-world data sets, we demonstrate that the two-hop version of the proposed estimator is significantly better than the one-hop version, and nearly closes the gap to the centralized maximum likelihood estimator in many situations.
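A crude neighborhood-based scheme in the spirit of this framework (NOT the paper's MML estimator or its convex relaxation; the function, graph, and data below are assumptions for illustration) can be sketched as: each node inverts the sample covariance of the variables in its local neighborhood and keeps its own row of the result.

```python
# Hypothetical sketch of local, neighborhood-based concentration estimation.
# Each node inverts the sample covariance restricted to its neighborhood and
# reads off its own row; this is a simplification, not the paper's method.
import numpy as np

def local_concentration_row(samples, node, neighborhood):
    """Estimate node's row of the concentration matrix from local data only.

    samples: (n_samples, n_vars) array; neighborhood: variable indices
    (including node) visible to this node.
    """
    idx = list(neighborhood)
    local_cov = np.cov(samples[:, idx], rowvar=False)
    local_conc = np.linalg.inv(local_cov)
    return dict(zip(idx, local_conc[idx.index(node)]))

rng = np.random.default_rng(0)
# Chain graph 0-1-2: the true concentration matrix is tridiagonal.
true_conc = np.array([[2.0, -1.0, 0.0],
                      [-1.0, 2.0, -1.0],
                      [0.0, -1.0, 2.0]])
cov = np.linalg.inv(true_conc)
samples = rng.multivariate_normal(np.zeros(3), cov, size=20000)
one_hop = local_concentration_row(samples, node=0, neighborhood=[0, 1])
two_hop = local_concentration_row(samples, node=0, neighborhood=[0, 1, 2])
```

With the larger (two-hop) neighborhood the local inverse recovers the node's full row of the true concentration matrix up to sampling error, echoing the abstract's observation that widening the neighborhood closes the gap to centralized estimation.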
On the relationship between Gaussian stochastic blockmodels and label propagation algorithms
The problem of community detection has received great attention in recent years.
Many methods have been proposed to discover communities in networks. In this
paper, we propose a Gaussian stochastic blockmodel that uses Gaussian
distributions to fit the edge weights in networks for non-overlapping community
detection. The maximum likelihood estimation of this model has the same
objective function as general label propagation with node preference. The node
preference of a specific vertex turns out to be a value proportional to the
intra-community eigenvector centrality (the corresponding entry in principal
eigenvector of the adjacency matrix of the subgraph inside that vertex's
community) under maximum likelihood estimation. Additionally, the maximum
likelihood estimation of a constrained version of our model is closely related
to another extension of the label propagation algorithm, namely, the label
propagation algorithm under constraint. Experiments show that the proposed
Gaussian stochastic blockmodel performs well on various benchmark networks.
Comment: 22 pages, 17 figures
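A simplified form of label propagation with node preference can be sketched as follows (the update rule, graph, and uniform preferences here are assumptions; the paper's objective ties preferences to intra-community eigenvector centrality):

```python
# Hedged sketch of label propagation with node preference: each node adopts
# the label maximizing the summed preference of its like-labeled neighbors.
# Uniform preferences are used for simplicity; the paper derives preferences
# proportional to intra-community eigenvector centrality.
def propagate(adjacency, labels, preference, rounds=5):
    labels = dict(labels)
    for _ in range(rounds):
        for v in adjacency:
            scores = {}
            for w in adjacency[v]:
                scores[labels[w]] = scores.get(labels[w], 0.0) + preference[w]
            labels[v] = max(scores, key=scores.get)
    return labels

# Two triangles joined by the edge (2, 3); the planted two-community
# labeling is a fixed point of the propagation.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
             3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
seed = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
preference = {v: 1.0 for v in adjacency}
result = propagate(adjacency, seed, preference)
```

Here propagation leaves the planted partition unchanged, since every node's neighbors are majority like-labeled; non-uniform preferences bias such ties and majorities toward central community members.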