Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the existing
results for the Hyperlink graph rely on distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
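As a rough back-of-envelope check (an illustration with an assumed layout, not the representation used in the paper), even a plain CSR encoding with 32-bit neighbor IDs fits this graph comfortably in a terabyte of RAM:

```python
# Back-of-envelope memory estimate for the Hyperlink Web graph in a plain
# CSR (compressed sparse row) layout. Assumptions, not from the paper:
# 32-bit neighbor IDs (valid since 3.5 billion < 2^32) and 64-bit offsets.
num_vertices = 3_500_000_000    # "over 3.5 billion vertices"
num_edges = 128_000_000_000     # "128 billion edges"

neighbor_bytes = num_edges * 4          # one 32-bit ID per directed edge
offset_bytes = (num_vertices + 1) * 8   # one 64-bit offset per vertex

total_gb = (neighbor_bytes + offset_bytes) / 1e9
print(f"~{total_gb:.0f} GB")            # ~540 GB, well within 1 TB of RAM
```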
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS).
Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
Equivalence Classes and Conditional Hardness in Massively Parallel Computations
The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a wide range of problems.
In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic amount of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
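For context, the conjecture underlying these conditional lower bounds is usually stated as follows (a standard folklore formulation; the exact parameters used in this paper may differ):

```latex
% One cycle vs. two cycles conjecture (standard formulation, assumed here,
% not quoted from the paper).
\textbf{Input:} a $2$-regular graph on $N$ vertices, promised to be either
one cycle of length $N$ or two vertex-disjoint cycles of length $N/2$.
\textbf{Conjecture:} any MPC algorithm with $O(N^{\delta})$ words of memory
per machine, for a constant $\delta < 1$, requires $\Omega(\log N)$ rounds
to decide which case holds.
```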
Optimized network structure and routing metric in wireless multihop ad hoc communication
Inspired by the Statistical Physics of complex networks, wireless multihop ad
hoc communication networks are considered in abstracted form. Since such
engineered networks are able to modify their structure via topology control, we
search for optimized network structures, which maximize the end-to-end
throughput performance. A modified version of betweenness centrality is
introduced and shown to be highly relevant for this modeling. The
calculated optimized network structures lead to a significant increase in
end-to-end throughput. The discussion of the resulting structural properties
reveals that it will be almost impossible to construct these optimized
topologies in a technologically efficient, distributed manner. However, the
modified betweenness centrality also allows us to propose a new routing metric
for the end-to-end communication traffic. This approach leads to an even larger
increase in throughput capacity and is easily implementable in a
technologically relevant manner.
Comment: 25 pages, v2: fixed one small typo in the 'authors' field
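As an illustrative sketch only (using the ordinary betweenness centrality available in networkx rather than the paper's modified measure), centrality can be turned into a link-cost routing metric roughly as follows:

```python
# Sketch: routing on an ad hoc topology with a centrality-based link cost.
# This uses standard betweenness centrality from networkx; the paper's
# modified betweenness measure and throughput model are not reproduced here.
import networkx as nx

# Toy random geometric graph as a stand-in for a wireless multihop topology;
# keep only the largest connected component so routes always exist.
G = nx.random_geometric_graph(100, radius=0.25, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# Node betweenness: fraction of shortest paths passing through each node.
bc = nx.betweenness_centrality(G)

# Penalize links adjacent to high-betweenness (congestion-prone) nodes.
for u, v in G.edges():
    G[u][v]["weight"] = 1.0 + bc[u] + bc[v]

# End-to-end route that tends to avoid heavily loaded relay nodes.
nodes = sorted(G.nodes())
path = nx.shortest_path(G, source=nodes[0], target=nodes[-1], weight="weight")
print(path)
```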
Distributed estimation and control of node centrality in undirected asymmetric networks
Measures of node centrality that describe the importance of a node within a
network are crucial for understanding the behavior of social networks and
graphs. In this paper, we address the problems of distributed estimation and
control of node centrality in undirected graphs with asymmetric weight values.
In particular, we focus our attention on {\alpha}-centrality, which can be seen
as a generalization of eigenvector centrality. In this setting, we first
consider a distributed protocol where agents compute their {\alpha}-centrality,
focusing on the convergence properties of the method; then, we combine the
estimation method with a consensus algorithm to achieve a consensus value
weighted by the influence of each node in the network. Finally, we formulate an
{\alpha}-centrality control problem which is naturally decoupled and, thus,
suitable for a distributed setting; we apply this formulation to protect the
most valuable nodes in a network against a targeted attack by making every
node in the network equally important in terms of {\alpha}-centrality.
Simulation results are provided to corroborate the theoretical findings.
Comment: published in IEEE Transactions on Automatic Control
https://ieeexplore.ieee.org/abstract/document/912618
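As a minimal sketch (assuming the standard definition of {\alpha}-centrality, x = alpha * A^T x + e, with an all-ones exogenous vector; this centralized fixed-point iteration stands in for, but is not, the paper's distributed protocol):

```python
# Minimal sketch of alpha-centrality via fixed-point iteration, assuming the
# standard definition x = alpha * A^T x + e. The exogenous vector e is taken
# to be all ones, and the computation is centralized; the paper's distributed
# estimation protocol and its convergence analysis are not reproduced here.
import numpy as np

# Toy weighted adjacency matrix (weights may be asymmetric in general).
A = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
alpha = 0.1                 # must satisfy alpha * spectral_radius(A) < 1
e = np.ones(A.shape[0])

x = e.copy()
for _ in range(1000):
    x_new = alpha * A.T @ x + e       # each node aggregates its in-neighbors
    if np.linalg.norm(x_new - x, np.inf) < 1e-12:
        break
    x = x_new

print(x)   # agrees with the closed form (I - alpha * A^T)^{-1} e
```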