Sequence mixed graphs
A mixed graph can be seen as a type of digraph in which some pairs of opposite arcs are regarded as edges. Here we introduce the concept of sequence mixed graphs, which generalizes both sequence graphs and iterated line digraphs. These structures prove useful in the problem of constructing dense graphs or digraphs, which is related to the degree/diameter problem; thus, our generalized approach yields graphs that also have a good order/diameter ratio. Moreover, we propose a general method for obtaining a sequence mixed digraph by identifying some vertices of a certain iterated line digraph. As a consequence, some results about distance-related parameters (mainly the diameter and the average distance) of sequence mixed graphs are presented. Postprint (author's final draft)
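As a point of reference for the line-digraph part of the construction, here is a minimal sketch (not taken from the paper) of the line-digraph operation, whose iteration on the complete digraph with loops yields the De Bruijn digraphs, classic near-optimal families for the degree/diameter problem:

```python
def line_digraph(arcs):
    """Arc set of the line digraph L(G): one vertex per arc (u, v) of G,
    with an arc from (u, v) to (v, w) whenever (v, w) is an arc of G."""
    return [((u, v), (v, w)) for (u, v) in arcs for (v2, w) in arcs if v2 == v]

# Complete digraph on {0, 1} with loops; iterating L gives De Bruijn digraphs.
K2 = [(u, v) for u in (0, 1) for v in (0, 1)]
L1 = line_digraph(K2)
print(len(K2), len(L1))  # 4 arcs in K2, hence 4 vertices and 8 arcs in L(K2)
```

Each iteration multiplies the order while keeping the maximum out-degree fixed, which is the mechanism behind the dense constructions the abstract refers to.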
A survey of statistical network models
Networks are ubiquitous in science and have become a focal point for
discussion in everyday life. Formal statistical models for the analysis of
network data have emerged as a major topic of interest in diverse areas of
study, and most of these involve a form of graphical representation.
Probability models on graphs date back to 1959. Along with empirical studies in
social psychology and sociology from the 1960s, these early works generated an
active network community and a substantial literature in the 1970s. This effort
moved into the statistical literature in the late 1970s and 1980s, and the past
decade has seen a burgeoning network literature in statistical physics and
computer science. The growth of the World Wide Web and the emergence of online
networking communities such as Facebook, MySpace, and LinkedIn, and a host of
more specialized professional network communities has intensified interest in
the study of networks and network data. Our goal in this review is to provide
the reader with an entry point to this burgeoning literature. We begin with an
overview of the historical development of statistical network modeling and then
we introduce a number of examples that have been studied in the network
literature. Our subsequent discussion focuses on a number of prominent static
and dynamic network models and their interconnections. We emphasize formal
model descriptions, and pay special attention to the interpretation of
parameters and their estimation. We end with a description of some open
problems and challenges for machine learning and statistics. Comment: 96 pages, 14 figures, 333 references
Model selection and hypothesis testing for large-scale network models with overlapping groups
The effort to understand network systems in increasing detail has resulted in
a diversity of methods designed to extract their large-scale structure from
data. Unfortunately, many of these methods yield diverging descriptions of the
same network, making both the comparison and understanding of their results a
difficult challenge. A possible solution to this outstanding issue is to shift
the focus away from ad hoc methods and move towards more principled approaches
based on statistical inference of generative models. We then face the
better-defined task of selecting among competing generative processes,
which can be done under a unified probabilistic framework. Here, we
consider the comparison between a variety of generative models including
features such as degree correction, where nodes with arbitrary degrees can
belong to the same group, and community overlap, where nodes are allowed to
belong to more than one group. Because such model variants possess an
increasing number of parameters, they become prone to overfitting. In this
work, we present a method of model selection based on the minimum description
length criterion and posterior odds ratios that is capable of fully accounting
for the increased degrees of freedom of the larger models, and selects the best
one according to the statistical evidence available in the data. In applying
this method to many empirical unweighted networks from different fields, we
observe that community overlap is very often not supported by statistical
evidence and is selected as a better model only for a minority of them. On the
other hand, we find that degree correction tends to be almost universally
favored by the available data, implying that intrinsic node properties (as
opposed to group properties) are often an essential ingredient of network
formation. Comment: 20 pages, 7 figures, 1 table
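The model-selection principle can be illustrated with a toy comparison (this is not the authors' method, which uses exact SBM description lengths and posterior odds): fit a one-parameter Erdős–Rényi model and a two-block model to a graph with planted block structure, and charge each model a BIC-style penalty as a crude MDL proxy. All sizes and densities below are illustrative.

```python
import math, random

random.seed(0)

# Planted two-block graph: dense within blocks, sparse between them.
n = 40
block = [0] * (n // 2) + [1] * (n // 2)
edges = set()
for i in range(n):
    for j in range(i + 1, n):
        p = 0.5 if block[i] == block[j] else 0.05
        if random.random() < p:
            edges.add((i, j))

def bernoulli_ll(m, pairs):
    """Maximized Bernoulli log-likelihood of m edges among `pairs` dyads."""
    if m in (0, pairs):
        return 0.0
    q = m / pairs
    return m * math.log(q) + (pairs - m) * math.log(1 - q)

dyads = n * (n - 1) // 2

# Erdos-Renyi: a single density parameter.
ll_er = bernoulli_ll(len(edges), dyads)

# Two-block model (blocks known): one density per block pair, 3 parameters.
cnt, tot = {}, {}
for i in range(n):
    for j in range(i + 1, n):
        key = tuple(sorted((block[i], block[j])))
        tot[key] = tot.get(key, 0) + 1
        cnt[key] = cnt.get(key, 0) + ((i, j) in edges)
ll_sbm = sum(bernoulli_ll(cnt[k], tot[k]) for k in tot)

# BIC-style description length, -LL + (k/2) log(#dyads): the extra
# parameters of the richer model must pay for themselves in fit.
dl_er = -ll_er + 0.5 * 1 * math.log(dyads)
dl_sbm = -ll_sbm + 0.5 * 3 * math.log(dyads)
print(dl_er > dl_sbm)  # planted structure: block model wins despite penalty
```

On a graph without planted structure the penalty tips the balance the other way, which is the sense in which the larger model's extra degrees of freedom are "fully accounted for".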
Modeling heterogeneity in random graphs through latent space models: a selective review
We present a selective review on probabilistic modeling of heterogeneity in
random graphs. We focus on latent space models and more particularly on
stochastic block models and their extensions that have undergone major
developments in the last five years.
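The basic idea behind latent space models can be sketched as follows (a toy in the spirit of the Hoff, Raftery and Handcock distance model; the dimension and the intercept alpha are assumed, illustrative values, not taken from the review): each node receives a latent position, and the log-odds of a tie decrease with latent distance.

```python
import math, random

random.seed(1)

# Each node gets a latent position in R^2.
n = 30
pos = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
alpha = 1.0  # baseline log-odds of a tie (illustrative)

def p_edge(i, j):
    """logit P(i ~ j) = alpha - ||z_i - z_j||"""
    return 1 / (1 + math.exp(-(alpha - math.dist(pos[i], pos[j]))))

edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p_edge(i, j)]
print(len(edges), "edges among", n * (n - 1) // 2, "dyads")
```

Heterogeneity arises because nodes near the center of the latent space accumulate more ties than peripheral ones; stochastic block models are recovered, loosely speaking, when the latent variable is a discrete group label rather than a continuous position.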
Centrality metrics and localization in core-periphery networks
Two concepts of centrality have been defined in complex networks. The first
considers the centrality of a node, and many different metrics for it have been
defined (e.g. eigenvector centrality, PageRank, non-backtracking centrality,
etc.). The second is related to the large-scale organization of the network:
the core-periphery structure, composed of a dense core plus an outlying and
loosely connected periphery. In this paper we investigate the relation between
these two concepts. We consider networks generated via the Stochastic Block
Model, or its degree corrected version, with a strong core-periphery structure
and we investigate the centrality properties of the core nodes and the ability
of several centrality metrics to identify them. We find that the three measures
with the best performance are marginals obtained with belief propagation,
PageRank, and degree centrality, while non-backtracking and eigenvector
centrality (or MINRES, shown to be equivalent to the latter in the large
network limit) perform worse on the investigated networks. Comment: 15 pages, 8 figures
Decoherence in quantum walks - a review
The development of quantum walks in the context of quantum computation, as
generalisations of random walk techniques, led rapidly to several new quantum
algorithms. These all follow unitary quantum evolution, apart from the final
measurement. Since logical qubits in a quantum computer must be protected from
decoherence by error correction, there is no need to consider decoherence at
the level of algorithms. Nonetheless, enlarging the range of quantum dynamics
to include non-unitary evolution provides a wider range of possibilities for
tuning the properties of quantum walks. For example, small amounts of
decoherence in a quantum walk on the line can produce more uniform spreading (a
top-hat distribution), without losing the quantum speed up. This paper reviews
the work on decoherence, and more generally on non-unitary evolution, in
quantum walks and suggests what future questions might prove interesting to
pursue in this area. Comment: 52 pages, invited review; v2 & v3 updated to include significant work
since first posted and corrections from comments received; some non-trivial
typos fixed. Comments now limited to changes that can be applied at proof
stage
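The contrast between unitary spreading and decohered spreading can be sketched numerically. This is an illustrative simulation with an assumed noise model (measuring the coin with some probability per step, a simple stochastic unravelling of coin decoherence), not a construction from the review:

```python
import math, random

random.seed(3)

def walk(steps, p_dec, runs=200):
    """Discrete-time Hadamard walk on the integer line. With probability
    p_dec per step the coin is measured (collapsing the coin register).
    Returns the run-averaged position distribution."""
    h = 1 / math.sqrt(2)
    avg = {}
    for _ in range(runs):
        state = {0: (1 + 0j, 0j)}                   # pos -> (amp_up, amp_down)
        for _ in range(steps):
            new = {}
            for x, (u, d) in state.items():
                cu, cd = h * (u + d), h * (u - d)   # Hadamard coin
                nu, nd = new.get(x + 1, (0j, 0j))
                new[x + 1] = (nu + cu, nd)          # "up" shifts right
                nu, nd = new.get(x - 1, (0j, 0j))
                new[x - 1] = (nu, nd + cd)          # "down" shifts left
            state = new
            if random.random() < p_dec:             # decoherence event
                p_up = sum(abs(u) ** 2 for u, _ in state.values())
                up = random.random() < p_up
                norm = math.sqrt(p_up if up else 1 - p_up)
                state = {x: ((u / norm, 0j) if up else (0j, d / norm))
                         for x, (u, d) in state.items()}
        for x, (u, d) in state.items():
            avg[x] = avg.get(x, 0.0) + (abs(u) ** 2 + abs(d) ** 2) / runs
    return avg

def sd(dist):
    m = sum(x * p for x, p in dist.items())
    return math.sqrt(sum(p * (x - m) ** 2 for x, p in dist.items()))

s_quantum = sd(walk(30, 0.0, runs=1))   # unitary: ballistic spread, ~ t
s_classical = sd(walk(30, 1.0))         # coin measured every step: ~ sqrt(t)
print(round(s_quantum, 2), round(s_classical, 2))
```

With the coin measured at every step the walk reduces to a classical random walk with diffusive spread; intermediate values of `p_dec` interpolate between the two regimes, which is the tuning the review discusses.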