Super-Resolution Community Detection for Layer-Aggregated Multilayer Networks
Applied network science often involves preprocessing network data before applying a network-analysis method, and there is typically a theoretical disconnect between these steps. For example, it is common to aggregate time-varying network data into windows prior to analysis, and the trade-offs of this preprocessing are not well understood. Focusing on the problem of detecting small communities in multilayer networks, we study the effects of layer aggregation by developing random-matrix theory for modularity matrices associated with layer-aggregated networks with N nodes and L layers, which are drawn from an ensemble of Erdős–Rényi networks with communities planted in subsets of layers. We study phase transitions in which eigenvectors localize onto communities (allowing their detection), which occur for a given community provided its size surpasses a detectability limit K*. When layers are aggregated via summation, we obtain K* = O(√(NL)/T), where T is the number of layers across which the community persists. Interestingly, if T is allowed to vary with L, then summation-based layer aggregation enhances small-community detection even if the community persists across a vanishing fraction of layers, provided that T/L decays more slowly than L^{-1/2}. Moreover, we find that thresholding the summation can in some cases cause K* to decay exponentially, decreasing by orders of magnitude in a phenomenon we call super-resolution community detection. That is, layer aggregation with thresholding is a nonlinear data filter enabling detection of communities that are otherwise too small to detect. Importantly, different thresholds generally enhance the detectability of communities having different properties, illustrating that community detection can be obscured if one analyzes network data using a single threshold.
Comment: 11 pages, 8 figures
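The summation-and-threshold filter described above can be illustrated numerically. The sketch below is not the paper's code, and all parameter values (N, L, K, T, edge probabilities, the 3-sigma threshold) are illustrative assumptions: it plants a small community in T of L Erdős–Rényi layers, sums the layers, thresholds the sum above the background-noise level, and checks whether the leading modularity eigenvector localizes on the planted community.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): N nodes, L layers,
# one community of size K planted in the first T layers.
N, L, K, T = 400, 100, 12, 40
p_bg, p_in = 0.05, 0.9  # background and within-community edge probabilities

# Summation-based layer aggregation: A_sum[i, j] counts in how many
# layers edge (i, j) appears.
A_sum = np.zeros((N, N))
for layer in range(L):
    A = (rng.random((N, N)) < p_bg).astype(float)
    if layer < T:  # community persists across only T of the L layers
        A[:K, :K] = (rng.random((K, K)) < p_in).astype(float)
    A = np.triu(A, 1)
    A_sum += A + A.T

def leading_modularity_eigvec(W):
    """Leading eigenvector of the modularity matrix B = W - k k^T / (2m)."""
    k = W.sum(axis=1)
    B = W - np.outer(k, k) / k.sum()
    _, vecs = np.linalg.eigh(B)  # eigh sorts eigenvalues ascending
    return vecs[:, -1]

# Thresholding the summation acts as a nonlinear filter: keep an edge
# only if it appears in more layers than background noise would explain
# (here, roughly 3 standard deviations above the noise mean).
thresh = L * p_bg + 3 * np.sqrt(L * p_bg * (1 - p_bg))
A_thr = (A_sum > thresh).astype(float)

v = leading_modularity_eigvec(A_thr)
# If the eigenvector localizes onto the planted community, its largest
# entries concentrate on the first K nodes.
top = np.argsort(-np.abs(v))[:K]
print(np.sort(top))
```

With these (assumed) parameters, community pairs co-occur in roughly T·p_in ≈ 36 layers while background pairs co-occur in roughly L·p_bg ≈ 5, so the threshold separates them cleanly and the community survives aggregation as a near-clique.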
GRASP: Accelerating Shortest Path Attacks via Graph Attention
Recent advances in machine learning (ML) have shown promise in aiding and
accelerating classical combinatorial optimization algorithms. ML-based
speedups that aim to learn in an end-to-end manner (i.e., directly output the
solution) tend to trade off runtime against solution quality. Therefore,
solutions that accelerate existing solvers while maintaining their
performance guarantees are of great interest. We consider an APX-hard problem
in which an adversary aims to attack shortest paths in a graph by removing the
minimum number of edges. We propose the GRASP algorithm (Graph Attention
Accelerated Shortest Path Attack), an ML-aided optimization algorithm that
achieves runtimes up to 10x faster while maintaining the quality of the
solutions generated. GRASP uses a graph attention network to identify a
smaller subgraph containing the combinatorial solution, thus effectively
reducing the input problem size. Additionally, we demonstrate how careful
representation of the input graph, including node features that correlate
well with the optimization task, can highlight important structure in the
optimization solution.
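The accelerate-an-exact-solver pattern described above can be sketched as follows. This is a toy illustration, not GRASP itself: the brute-force enumeration stands in for an exact attack solver, and the `keep` set stands in for the nodes a trained graph attention network would score as likely to contain the optimal cut.

```python
from collections import deque
from itertools import combinations

def shortest_path_len(adj, s, t):
    """BFS shortest-path length in an undirected graph given as adjacency sets."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")  # t unreachable from s

def attack_by_enumeration(adj, s, t, budget):
    """Brute-force shortest-path attack: return the smallest edge set (up to
    `budget` edges) whose removal most lengthens the s-t distance."""
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    best_len, best_cut = shortest_path_len(adj, s, t), ()
    for k in range(1, budget + 1):
        for cut in combinations(edges, k):
            pruned = {u: set(vs) for u, vs in adj.items()}
            for u, v in cut:
                pruned[u].discard(v)
                pruned[v].discard(u)
            d = shortest_path_len(pruned, s, t)
            if d > best_len:
                best_len, best_cut = d, cut
        if best_cut:  # smallest k that lengthens dist(s, t) at all
            return best_cut, best_len
    return best_cut, best_len

def reduce_then_attack(adj, s, t, budget, keep):
    """GRASP-style acceleration (sketched): restrict the problem to the
    induced subgraph on `keep` (which must contain s and t), then run the
    exact solver on the smaller instance."""
    sub = {u: {v for v in adj[u] if v in keep} for u in adj if u in keep}
    return attack_by_enumeration(sub, s, t, budget)

# Toy instance: dist(0, 4) = 2 via 0-1-4; nodes 5 and 6 are irrelevant,
# as a stand-in model's node scores might indicate.
adj = {0: {1, 2}, 1: {0, 4}, 2: {0, 3, 5}, 3: {2, 4},
       4: {1, 3}, 5: {2, 6}, 6: {5}}
cut, d = reduce_then_attack(adj, 0, 4, budget=2, keep={0, 1, 2, 3, 4})
print(cut, d)
```

Here the reduced instance yields the same cut as solving on the full graph, which is the property that lets this pattern keep the solver's solution quality while shrinking its input.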
Selective network discovery via deep reinforcement learning on embedded spaces
Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low-quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called network actor critic (NAC), which learns a policy and a notion of future reward in an offline setting via a deep reinforcement learning algorithm. The NAC paradigm utilizes a task-specific network embedding to reduce the state-space complexity. A detailed comparative analysis of popular network embeddings is presented with respect to their role in supporting offline planning. Furthermore, a quantitative study is presented on various synthetic and real benchmarks using NAC and several baselines. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms. Finally, we outline learning regimes where planning is critical in addressing sparse and changing reward signals.
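The sequential decision-making formulation of selective harvesting can be sketched with a simple greedy harvester. This is a hypothetical stand-in, not the NAC algorithm: a hand-crafted neighbor-homophily score replaces NAC's learned policy, value estimate, and embedding-based state.

```python
import random

def selective_harvest(full_adj, attr, seeds, budget, rng):
    """Sequentially probe border nodes of a partially observed graph,
    collecting a unit reward whenever a probed node carries the target
    attribute. The score below is a hand-crafted stand-in for the learned
    value estimate an actor-critic framework such as NAC would provide."""
    probed = set(seeds)
    border = set().union(*(full_adj[s] for s in probed)) - probed

    def score(u):
        # Fraction of u's already-probed neighbors carrying the attribute
        # (attributes are observed only for probed nodes).
        nbrs = full_adj[u] & probed
        return sum(attr[v] for v in nbrs) / len(nbrs) if nbrs else 0.0

    reward = 0
    for _ in range(budget):
        if not border:
            break
        u = max(border, key=lambda w: (score(w), rng.random()))  # random tie-break
        border.discard(u)
        probed.add(u)
        reward += attr[u]
        border |= full_adj[u] - probed  # newly revealed neighbors
    return reward, probed

# Toy instance: an attributed clique {0..4} with one unattributed hanger-on.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3, 4}, 2: {0, 1, 3, 4},
       3: {0, 1, 2, 4}, 4: {0, 1, 2, 3, 5}, 5: {4}}
attr = {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 0}
reward, probed = selective_harvest(adj, attr, seeds={0}, budget=4,
                                   rng=random.Random(0))
print(reward)
```

The point of the formulation is visible even in this toy: each probe both earns reward and changes the observable state (the border), which is why the paper treats discovery as sequential decision-making rather than one-shot selection.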