Distance-generalized Core Decomposition
The k-core of a graph is defined as the maximal subgraph in which every
vertex is connected to at least k other vertices within that subgraph. In
this work we introduce a distance-based generalization of the notion of
k-core, which we refer to as the (k,h)-core, i.e., the maximal subgraph in
which every vertex has at least k other vertices at distance ≤ h within
that subgraph. We study the properties of the (k,h)-core, showing that it
preserves many of the nice features of the classic core decomposition (e.g.,
its connection with the notion of distance-generalized chromatic number) and
that it preserves its usefulness to speed up or approximate distance-generalized
notions of dense structures, such as the h-club.
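As a concrete illustration of the definition (a naive peeling sketch in Python, not the paper's bound-based algorithm), the snippet below repeatedly removes vertices whose h-bounded neighbourhood inside the surviving subgraph contains fewer than k other vertices; the dictionary-of-sets graph representation and the function names are our own assumptions.

```python
from collections import deque

def h_neighborhood_size(adj, alive, src, h):
    """Count vertices reachable from src within distance h, walking only through 'alive' vertices."""
    seen = {src}
    frontier = deque([(src, 0)])
    count = 0
    while frontier:
        v, d = frontier.popleft()
        if d == h:
            continue
        for w in adj[v]:
            if w in alive and w not in seen:
                seen.add(w)
                count += 1
                frontier.append((w, d + 1))
    return count

def kh_core(adj, k, h):
    """Return the vertex set of the (k,h)-core of an undirected graph.

    adj maps each vertex to the set of its neighbours. Vertices whose
    h-bounded neighbourhood inside the surviving subgraph has fewer than
    k vertices are peeled away until a fixed point is reached; removals
    only shrink neighbourhoods, so the fixed point is the maximal subgraph."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if h_neighborhood_size(adj, alive, v, h) < k:
                alive.discard(v)
                changed = True
    return alive

# With h = 1 this reduces to the classic k-core: in the toy graph below
# (a triangle 0-1-2 with a pendant vertex 3), the 2-core is {0, 1, 2},
# while the (3,2)-core is the whole graph.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(kh_core(graph, k=2, h=1))   # {0, 1, 2}
print(kh_core(graph, k=3, h=2))   # {0, 1, 2, 3}
```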
Computing the distance-generalized core decomposition over large networks is
intrinsically complex. However, by exploiting clever upper and lower bounds we
can partition the computation into a set of totally independent subcomputations,
opening the door to top-down exploration and to multithreading, and thus
achieving an efficient algorithm.
An efficient and principled method for detecting communities in networks
A fundamental problem in the analysis of network data is the detection of
network communities, groups of densely interconnected nodes, which may be
overlapping or disjoint. Here we describe a method for finding overlapping
communities based on a principled statistical approach using generative network
models. We show how the method can be implemented using a fast, closed-form
expectation-maximization algorithm that allows us to analyze networks of
millions of nodes in reasonable running times. We test the method both on
real-world networks and on synthetic benchmarks and find that it gives results
competitive with previous methods. We also show that the same approach can be
used to extract nonoverlapping community divisions via a relaxation method, and
demonstrate that the algorithm is competitively fast and accurate for the
nonoverlapping problem.
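For intuition about the closed-form EM step, here is a minimal sketch of one common Poisson overlapping-community formulation, in which the expected number of edges between nodes i and j is omega_ij = sum_z theta_iz * theta_jz; whether this is exactly the generative model used in the paper is an assumption, and the function name and dense-matrix implementation are ours.

```python
import numpy as np

def overlapping_em(A, K, n_iter=200, seed=0):
    """EM for a Poisson overlapping-community model in which the expected
    number of edges between i and j is omega_ij = sum_z theta_iz * theta_jz.

    A: symmetric (n, n) adjacency matrix; K: number of communities.
    Returns theta, an (n, K) matrix of non-negative community loadings."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    theta = rng.random((n, K)) + 1e-3           # random non-negative start
    for _ in range(n_iter):
        omega = theta @ theta.T + 1e-12         # expected edge counts
        # E-step folded into the M-step: for each edge, the responsibility of
        # community z is q_ij(z) = theta_iz * theta_jz / omega_ij, so
        # sum_j A_ij q_ij(z) is the elementwise product computed below.
        S = ((A / omega) @ theta) * theta       # S[i, z] = sum_j A_ij q_ij(z)
        denom = np.sqrt(S.sum(axis=0)) + 1e-12  # sqrt of total edge mass in community z
        theta = S / denom                       # closed-form M-step update
    return theta

# Rows of theta can then be thresholded or normalised to read off
# (possibly overlapping) community memberships.
```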
Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods
Models with intractable likelihood functions arise in areas including network
analysis and spatial statistics, especially those involving Gibbs random
fields. Posterior parameter estimation in these settings is termed a
doubly-intractable problem because both the likelihood function and the
posterior distribution are intractable. The comparison of Bayesian models is
often based on the statistical evidence, the integral of the un-normalised
posterior distribution over the model parameters, which is rarely available in
closed form. For doubly-intractable models, estimating the evidence adds
another layer of difficulty. Consequently, the selection of the model that best
describes an observed network among a collection of exponential random graph
models for network analysis is a daunting task. Pseudolikelihoods offer a
tractable approximation to the likelihood, but should be treated with caution
because they can lead to unreasonable inference. This paper specifies a
method to adjust pseudolikelihoods in order to obtain a reasonable, yet
tractable, approximation to the likelihood. This allows implementation of
widely used computational methods for evidence estimation and pursuit of
Bayesian model selection of exponential random graph models for the analysis of
social networks. Empirical comparisons to existing methods show that our
procedure yields similar evidence estimates, but at a lower computational cost.
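To make the role of the pseudolikelihood concrete, the sketch below evaluates an unadjusted ERGM log-pseudolikelihood for a toy model with edge and triangle statistics; the statistic set and the dense-matrix implementation are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def log_pseudolikelihood(A, theta):
    """Unadjusted ERGM log-pseudolikelihood with edge and triangle statistics.

    A: symmetric 0/1 adjacency matrix with zero diagonal.
    theta: parameter vector (edge term, triangle term).
    Each dyad (i, j) contributes a Bernoulli term whose log-odds is
    theta . delta_ij, where delta_ij are the change statistics obtained by
    toggling that single tie while holding the rest of the graph fixed."""
    n = A.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            common = float(A[i] @ A[j])        # triangles gained/lost by toggling (i, j)
            delta = np.array([1.0, common])    # change statistics: edges, triangles
            eta = float(theta @ delta)         # conditional log-odds of the tie
            ll += A[i, j] * eta - np.log1p(np.exp(eta))
    return ll
```

Maximising this surrogate (equivalently, a logistic regression of the tie indicators on the change statistics) yields the maximum pseudolikelihood estimate; the paper's contribution is a principled adjustment of such a surrogate so that it can stand in for the intractable likelihood when estimating the evidence.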