
    Hierarchical core-periphery structure in networks

    We study core-periphery structure in networks using inference methods based on a flexible network model that allows for traditional onion-like cores within cores, but also for hierarchical tree-like structures and more general non-nested types of structure. We propose an efficient Monte Carlo scheme for fitting the model to observed networks and report results for a selection of real-world data sets. Among other things, we observe an empirical distinction between networks showing traditional core-periphery structure with a dense core weakly connected to a sparse periphery, and an alternative structure in which the core is strongly connected both within itself and to the periphery. Networks vary in whether they are better represented by one type of structure or the other. We also observe structures that are a hybrid between core-periphery structure and community structure, in which networks have a set of non-overlapping cores that correspond roughly to communities, surrounded by a single undifferentiated periphery. Computer code implementing our methods is available at https://github.com/apolanco115/hc
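    The model-fitting idea can be illustrated with a much simpler, non-hierarchical version: a two-block model in which each node is labeled core or periphery and labels are updated with a Metropolis Monte Carlo scheme. The sketch below is illustrative only; the fixed block probabilities, the two-group structure, and the function names are assumptions, not the authors' hierarchical model.

```python
import numpy as np
import networkx as nx

def log_likelihood(A, labels, p):
    """Bernoulli log-likelihood of a two-block core-periphery model.

    p[gi, gj] is the edge probability between groups gi and gj
    (0 = periphery, 1 = core).
    """
    n = len(labels)
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            pij = p[labels[i], labels[j]]
            ll += np.log(pij) if A[i, j] else np.log(1.0 - pij)
    return ll

def fit_core_periphery(G, n_steps=5000, seed=0):
    """Metropolis sampler over core/periphery labels (illustrative)."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    labels = rng.integers(0, 2, size=n)
    # Fixed, assumed block probabilities; a full sampler would also
    # update these (e.g. with conjugate Beta priors).
    p = np.array([[0.02, 0.30],
                  [0.30, 0.80]])
    ll = log_likelihood(A, labels, p)
    for _ in range(n_steps):
        i = rng.integers(n)
        proposal = labels.copy()
        proposal[i] = 1 - proposal[i]
        # Recomputing the full likelihood is O(n^2) per step; a real
        # implementation would update only the terms involving node i.
        ll_new = log_likelihood(A, proposal, p)
        if np.log(rng.random()) < ll_new - ll:  # Metropolis acceptance
            labels, ll = proposal, ll_new
    return labels

G = nx.karate_club_graph()
labels = fit_core_periphery(G)
print("inferred core nodes:", np.flatnonzero(labels == 1))
```

    A real inference scheme along the paper's lines would also place priors on the block probabilities and allow the hierarchy of cores within cores; the fixed two-block version above only shows the sampling mechanics.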

    Core-Periphery Principle Guided Redesign of Self-Attention in Transformers

    Designing more efficient, reliable, and explainable neural network architectures is critical to studies based on artificial intelligence (AI) techniques. Previous studies, by post-hoc analysis, have found that the best-performing ANNs surprisingly resemble biological neural networks (BNNs), which indicates that ANNs and BNNs may share common principles for achieving optimal performance in machine learning or cognitive/behavioral tasks. Inspired by this phenomenon, we proactively instill organizational principles of BNNs to guide the redesign of ANNs. We leverage the Core-Periphery (CP) organization, which is widely found in human brain networks, to guide the information communication mechanism in the self-attention of the vision transformer (ViT), and name this novel framework CP-ViT. In CP-ViT, the attention operation between nodes is defined by a sparse graph with a core-periphery structure (CP graph), where the core nodes are redesigned and reorganized to play an integrative role and serve as a center for the periphery nodes to exchange information. We evaluated the proposed CP-ViT on multiple public datasets, including medical image datasets (INbreast) and natural image datasets. Interestingly, by incorporating the BNN-derived principle (CP structure) into the redesign of ViT, our CP-ViT outperforms other state-of-the-art ANNs. In general, our work advances the state of the art in three aspects: 1) it provides novel insights for brain-inspired AI, showing that principles found in BNNs can guide and improve ANN architecture design; 2) it shows that there exist sweet spots of CP graphs that lead to CP-ViTs with significantly improved performance; and 3) the core nodes in CP-ViT correspond to task-related, meaningful, and important image patches, which can significantly enhance the interpretability of the trained deep model.
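    The central mechanism, restricting self-attention to a sparse core-periphery graph, can be sketched as follows. This is a minimal illustration under assumed conventions (a fixed core size, hypothetical helper names, and a simple mask in which periphery tokens attend only to core tokens and themselves); CP-ViT's actual core selection and graph construction are more involved.

```python
import torch
import torch.nn.functional as F

def core_periphery_mask(n_tokens: int, n_core: int) -> torch.Tensor:
    """Boolean attention mask for an assumed core-periphery graph:
    core tokens attend everywhere; periphery tokens attend only to
    the core and to themselves."""
    mask = torch.zeros(n_tokens, n_tokens, dtype=torch.bool)
    mask[:n_core, :] = True                        # core rows see all tokens
    mask[:, :n_core] = True                        # every token sees the core
    mask |= torch.eye(n_tokens, dtype=torch.bool)  # keep self-attention
    return mask

def cp_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to the CP graph.
    q, k, v: (batch, heads, tokens, dim); mask: (tokens, tokens)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 16 tokens, the first 4 treated as core.
q = k = v = torch.randn(1, 2, 16, 8)
out = cp_attention(q, k, v, core_periphery_mask(16, 4))
```

    The self-attention entries on the eye of the mask guarantee every row has at least one unmasked score, so the softmax never sees an all-minus-infinity row.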

    Centrality metrics and localization in core-periphery networks

    Two concepts of centrality have been defined in complex networks. The first considers the centrality of a node, for which many different metrics have been defined (e.g. eigenvector centrality, PageRank, non-backtracking centrality). The second relates to the large-scale organization of the network: the core-periphery structure, composed of a dense core plus an outlying, loosely connected periphery. In this paper we investigate the relation between these two concepts. We consider networks generated via the stochastic block model, or its degree-corrected version, with a strong core-periphery structure, and we investigate the centrality properties of the core nodes and the ability of several centrality metrics to identify them. We find that the three measures with the best performance are the marginals obtained with belief propagation, PageRank, and degree centrality, while non-backtracking and eigenvector centrality (or MINRES, shown to be equivalent to the latter in the large-network limit) perform worse in the investigated networks.
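    The experimental setup is easy to reproduce in miniature: plant a dense core in a two-block stochastic block model and check how well standard centrality rankings recover it. The sketch below uses assumed block sizes and edge probabilities and covers only the metrics readily available in networkx; the belief-propagation marginals studied in the paper are not included.

```python
import networkx as nx

# Planted core-periphery network: a dense core (first 20 nodes)
# and a sparse periphery, generated with a two-block SBM.
sizes = [20, 180]
probs = [[0.80, 0.10],   # core-core, core-periphery
         [0.10, 0.02]]   # periphery-core, periphery-periphery
G = nx.stochastic_block_model(sizes, probs, seed=42)
core = set(range(sizes[0]))

def recovery(scores, k=len(core)):
    # Fraction of true core nodes among the top-k ranked nodes.
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return len(core & set(top)) / k

for name, scores in [
    ("degree", dict(G.degree())),
    ("PageRank", nx.pagerank(G)),
    ("eigenvector", nx.eigenvector_centrality_numpy(G)),
]:
    print(f"{name:12s} core recovery: {recovery(scores):.2f}")
```

    With a core this dense, all three rankings recover most core nodes; the differences the paper reports emerge at weaker core-periphery contrast and under degree correction.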