
    From Graph Theory to Network Science: The Natural Emergence of Hyperbolicity (Tutorial)

    Network science is driven by the question of which properties large real-world networks have and how we can exploit them algorithmically. In the past few years, hyperbolic graphs have emerged as a very promising model for scale-free networks. The connection between hyperbolic geometry and complex networks yields insights in both directions: (1) Hyperbolic geometry forms the basis of a natural and explanatory model for real-world networks. Hyperbolic random graphs are obtained by choosing random points in the hyperbolic plane and connecting pairs of points that are geometrically close. The resulting networks share many structural properties with, for example, online social networks like Facebook or Twitter. They are thus well suited for algorithmic analyses in a more realistic setting. (2) Starting from a real-world network, hyperbolic geometry is well suited for metric embeddings: the vertices of a network can be mapped to points in this geometry such that geometric distances are similar to graph distances. Such embeddings have a variety of algorithmic applications, ranging from approximations based on efficient geometric algorithms to greedy routing that uses only hyperbolic coordinates for navigation decisions.
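    As a toy illustration of this construction (a sketch under stated assumptions, not the tutorial's code), the following Python snippet samples points quasi-uniformly in a hyperbolic disk of radius R and connects every pair at hyperbolic distance at most R. The function name and the naive O(n^2) pair loop are hypothetical choices made for clarity; the tutorial concerns far more efficient methods.

```python
import numpy as np

def hyperbolic_random_graph(n, R, alpha=1.0, seed=0):
    """Naive threshold hyperbolic random graph (hypothetical helper).

    Angles are uniform; radii are drawn with density ~ sinh(alpha * r)
    via inverse-CDF sampling. Two points are joined iff their
    hyperbolic distance is at most R (the threshold model).
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    u = rng.uniform(0.0, 1.0, n)
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha

    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # Angular difference wrapped to [0, pi].
            dt = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            # Hyperbolic law of cosines gives the pairwise distance.
            cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                      - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dt))
            if np.arccosh(max(cosh_d, 1.0)) <= R:
                edges.append((i, j))
    return edges

# R ~ 2 ln n yields the sparse, scale-free regime.
edges = hyperbolic_random_graph(n=500, R=2.0 * np.log(500))
print(len(edges), "edges")
```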

    Efficiently generating geometric inhomogeneous and hyperbolic random graphs

    Hyperbolic random graphs (HRGs) and geometric inhomogeneous random graphs (GIRGs) are two similar generative network models that were designed to resemble complex real-world networks. In particular, they have a power-law degree distribution with controllable exponent β and high clustering that can be controlled via the temperature T. We present the first implementation of an efficient GIRG generator running in expected linear time. Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs. Our resulting implementation is the fastest sequential HRG generator, despite the fact that we support non-zero temperatures. Though non-zero temperatures are crucial for many applications, most existing generators are restricted to T = 0. We also support parallelization, although this is not the focus of this paper. Moreover, we note that our generators draw from the correct probability distribution, that is, they involve no approximation. Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model. This makes it possible to specify the desired expected average degree as input. Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straightforward inclusion does not hold in practice. However, the difference is negligible for most use cases.
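    To make the roles of β and T concrete, here is a minimal quadratic-time GIRG sampler, assuming the standard formulation with Pareto weights and positions on the unit torus. It is emphatically not the expected-linear-time generator the paper presents, and the constant c is a hypothetical knob standing in for the paper's average-degree calibration; all names are illustrative.

```python
import numpy as np

def naive_girg(n, beta=2.5, T=0.5, d=2, c=1.0, seed=0):
    """Quadratic-time GIRG sampler (illustrative only).

    Weights are Pareto-distributed, giving a power-law degree
    distribution with exponent beta; edges appear independently with
    probability min(1, c * w_u * w_v / (W * dist^d)) ** (1/T), where
    dist is the torus distance. Smaller T sharpens the connection
    rule toward the T = 0 threshold model.
    """
    rng = np.random.default_rng(seed)
    w = (1.0 - rng.uniform(size=n)) ** (-1.0 / (beta - 1.0))
    W = w.sum()
    x = rng.uniform(size=(n, d))

    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            delta = np.abs(x[u] - x[v])
            # Shortest wrap-around distance on the unit torus.
            dist = np.linalg.norm(np.minimum(delta, 1.0 - delta))
            p = min(1.0, c * w[u] * w[v] / (W * dist ** d)) ** (1.0 / T)
            if rng.uniform() < p:
                edges.append((u, v))
    return edges

print(len(naive_girg(n=500)), "edges")
```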

    Practical recommendations for gradient-based training of deep architectures

    Learning algorithms related to artificial neural networks, and in particular those for deep learning, may seem to involve many bells and whistles, called hyper-parameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradients and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when many hyper-parameters are allowed to be adjusted. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures.
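    As a concrete illustration of the kind of hyper-parameters at stake (a minimal NumPy sketch, not code from the chapter), the loop below trains logistic regression by mini-batch SGD and exposes the initial learning rate, its decay schedule, the mini-batch size, the number of epochs and the weight-initialization scale; all names are hypothetical.

```python
import numpy as np

def train_logreg(X, y, lr0=0.1, decay=1e-3, batch_size=32,
                 epochs=50, init_scale=0.01, seed=0):
    """Mini-batch SGD for logistic regression (illustrative sketch).

    lr0 is the initial learning rate, decayed as lr0 / (1 + decay * t)
    with t counting gradient steps; init_scale sets the scale of the
    random weight initialization.
    """
    rng = np.random.default_rng(seed)
    w = init_scale * rng.standard_normal(X.shape[1])
    t = 0
    for _ in range(epochs):
        for start in range(0, len(X), batch_size):
            xb = X[start:start + batch_size]
            yb = y[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-xb @ w))     # sigmoid predictions
            grad = xb.T @ (p - yb) / len(xb)      # gradient of the log-loss
            w -= lr0 / (1.0 + decay * t) * grad   # decayed learning-rate step
            t += 1
    return w

# Toy data: two Gaussian blobs, shuffled before training.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.repeat([0.0, 1.0], 200)
perm = rng.permutation(len(X))
print("learned weights:", train_logreg(X[perm], y[perm]))
```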

    Algorithms and Software for the Analysis of Large Complex Networks

    The work presented intersects three main areas: graph algorithmics, network science and applied software engineering. Each computational method discussed relates to one of the main tasks of data analysis: extracting structural features from network data, as with methods for community detection; transforming network data, as with methods that sparsify a network and reduce its size while keeping essential properties; or realistically modeling networks through generative models.

    A Cosine Rule-Based Discrete Sectional Curvature for Graphs

    How does one generalize differential geometric constructs such as the curvature of a manifold to the discrete world of graphs and other combinatorial structures? This problem carries significant importance for analyzing models of discrete spacetime in quantum gravity, inferring network geometry in network science, and manifold learning in data science. The key contribution of this paper is to introduce and validate a new estimator of discrete sectional curvature for random graphs with low metric distortion. The latter are constructed via a specific graph sprinkling method on different manifolds with constant sectional curvature. We define a notion of metric distortion, which quantifies how well the graph metric approximates the metric of the underlying manifold. We show how graph sprinkling algorithms can be refined to produce hard annulus random geometric graphs with minimal metric distortion. We construct random geometric graphs on spheres and on hyperbolic and Euclidean planes, upon which we validate our curvature estimator. Numerical analysis reveals that the error of the estimated curvature diminishes as the mean metric distortion goes to zero, thus demonstrating convergence of the estimate. We also perform comparisons to other existing discrete curvature measures. Finally, we demonstrate two practical applications: (i) estimation of the Earth's radius using geographical data; and (ii) sectional curvature distributions of self-similar fractals.
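    The sketch below illustrates the sprinkling idea on the unit sphere, using a simplified distortion proxy rather than the paper's exact definition or its hard-annulus refinement: points are sprinkled uniformly, pairs within geodesic distance eps are joined, and distortion is measured as the average relative gap between eps-scaled hop distances and true geodesic distances. Function names and the proxy itself are our assumptions.

```python
import numpy as np
from collections import deque

def sprinkle_sphere_graph(n=300, eps=0.3, seed=0):
    """Random geometric graph sprinkled uniformly on the unit sphere S^2."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    # Pairwise geodesic (great-circle) distances.
    geo = np.arccos(np.clip(p @ p.T, -1.0, 1.0))
    adj = [np.nonzero((geo[i] <= eps) & (geo[i] > 0))[0] for i in range(n)]
    return geo, adj

def mean_metric_distortion(geo, adj, eps):
    """Average relative gap between eps-scaled hop and geodesic distances."""
    n = len(adj)
    gaps = []
    for s in range(n):
        hops = np.full(n, -1)
        hops[s] = 0
        q = deque([s])                      # BFS for hop distances from s
        while q:
            u = q.popleft()
            for v in adj[u]:
                if hops[v] < 0:
                    hops[v] = hops[u] + 1
                    q.append(v)
        ok = hops > 0                       # reachable vertices other than s
        if ok.any():
            gaps.append(np.mean(np.abs(eps * hops[ok] - geo[s][ok]) / geo[s][ok]))
    return float(np.mean(gaps))

geo, adj = sprinkle_sphere_graph()
print("mean distortion:", mean_metric_distortion(geo, adj, eps=0.3))
```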

    The Surprising Power of Graph Neural Networks with Random Node Initialization

    Graph neural networks (GNNs) are effective models for representation learning on relational data. However, standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism heuristic. In order to break this expressiveness barrier, GNNs have been enhanced with random node initialization (RNI), where the idea is to train and run the models with randomized initial node features. In this work, we analyze the expressive power of GNNs with RNI, and prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. This universality result holds even with partially randomized initial node features, and preserves the invariance properties of GNNs in expectation. We then empirically analyze the effect of RNI on GNNs, based on carefully constructed datasets. Our empirical findings support the superior performance of GNNs with RNI over standard GNNs.
    Comment: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). Code and data available at http://www.github.com/ralphabb/GNN-RN
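    A minimal NumPy sketch of the RNI step, assuming a plain mean-aggregation GNN (the paper evaluates standard GNN architectures; everything named below is our illustration): fresh random features are concatenated to the node features on every forward pass, which is what lets the network distinguish nodes that 1-WL cannot tell apart.

```python
import numpy as np

def gnn_forward_with_rni(A, X, weights, rni_dim=8, rng=None):
    """One forward pass of a mean-aggregation GNN with RNI (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    n = A.shape[0]
    # The RNI step: append fresh random features on every call.
    H = np.hstack([X, rng.standard_normal((n, rni_dim))])
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    for W in weights:
        # Mean aggregation over neighbors, linear map, ReLU.
        H = np.maximum((A @ H) / deg @ W, 0.0)
    return H

# Toy usage: a 4-cycle with constant input features and two layers.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.ones((4, 2))
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2 + 8, 16)), rng.standard_normal((16, 16))]
print(gnn_forward_with_rni(A, X, weights, rng=rng).shape)
```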