Entropy of labeled versus unlabeled networks
The structure of a network is an unlabeled graph, yet graphs in most models
of complex networks are labeled by meaningless random integers. Is the
associated labeling noise always negligible, or can it overpower the
network-structural signal? To address this question, we introduce and consider
the sparse unlabeled versions of popular network models, and compare their
entropy against the original labeled versions. We show that labeled and
unlabeled Erdős–Rényi graphs are entropically equivalent, even though their
degree distributions are very different. The labeled and unlabeled versions of
the configuration model may have different prefactors in their leading entropy
terms, although this remains conjectural. Our main results are upper and lower
bounds for the entropy of labeled and unlabeled one-dimensional random
geometric graphs. We show that their unlabeled entropy is negligible in
comparison with the labeled entropy. These results imply that in sparse
networks the entropy of meaningless labeling may dominate the entropy of the
network structure, suggesting a need for a thorough reexamination of the
statistical foundations of network modeling.
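As a toy illustration of the labeled-versus-unlabeled entropy gap discussed above (not a computation from the paper itself), the following sketch enumerates every labeled graph on n = 4 vertices under the Erdős–Rényi model with p = 1/2, groups them into isomorphism classes by brute-force canonicalization, and compares the Shannon entropy of the labeled distribution with that of the induced unlabeled distribution. All names here are illustrative choices, not the authors' code.

```python
import itertools
import math

n = 4
pairs = list(itertools.combinations(range(n), 2))  # the 6 possible edges
p = 0.5  # Erdős–Rényi edge probability

def canon(edges):
    """Canonical form of an edge set: the lexicographically smallest
    sorted edge list over all vertex relabelings (brute force, small n)."""
    best = None
    for perm in itertools.permutations(range(n)):
        key = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best is None or key < best:
            best = key
    return best

labeled = {}    # labeled graph (edge bitmask) -> probability
unlabeled = {}  # isomorphism class (canonical form) -> probability
for mask in range(1 << len(pairs)):
    edges = [pairs[i] for i in range(len(pairs)) if (mask >> i) & 1]
    prob = p ** len(edges) * (1 - p) ** (len(pairs) - len(edges))
    labeled[mask] = prob
    c = canon(edges)
    unlabeled[c] = unlabeled.get(c, 0.0) + prob

def entropy(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

H_lab = entropy(labeled)    # 6 bits: each of the 2^6 labeled graphs is equiprobable
H_unl = entropy(unlabeled)  # strictly smaller: labeling entropy has been removed
print(f"labeled entropy:   {H_lab:.4f} bits")
print(f"unlabeled entropy: {H_unl:.4f} bits ({len(unlabeled)} isomorphism classes)")
```

At this tiny, dense scale the gap is modest; the paper's point is about the sparse, large-n regime, where the share of the labeled entropy attributable to the meaningless labels can become dominant.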