Distributed Strong Diameter Network Decomposition
For a pair of positive parameters $(\alpha, \beta)$, a partition $\mathcal{P}$ of the
vertex set $V$ of an $n$-vertex graph $G = (V, E)$ into disjoint clusters of
diameter at most $\alpha$ each is called an $(\alpha, \beta)$ network decomposition, if the
supergraph $\mathcal{G}(\mathcal{P})$, obtained by contracting each of the clusters
of $\mathcal{P}$, can be properly $\beta$-colored. The decomposition $\mathcal{P}$ is
said to be strong (resp., weak) if each of the clusters has strong (resp.,
weak) diameter at most $\alpha$, i.e., if for every cluster $C \in \mathcal{P}$ and
every two vertices $u, v \in C$, the distance between them in the induced graph
$G(C)$ of $C$ (resp., in $G$) is at most $\alpha$.
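To make the definition concrete, here is a minimal checker sketch in Python (not from the paper; it assumes a dict-of-sets adjacency list, and it verifies a supplied cluster coloring rather than deciding $\beta$-colorability, which is hard in general):

```python
from collections import deque

def strong_diameter(adj, cluster):
    """BFS inside the induced subgraph of `cluster`; returns its strong diameter
    (max pairwise distance), or infinity if the induced subgraph is disconnected."""
    cl = set(cluster)
    best = 0
    for s in cluster:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in cl and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(cl):
            return float("inf")  # disconnected cluster: infinite strong diameter
        best = max(best, max(dist.values()))
    return best

def is_strong_decomposition(adj, clusters, coloring, alpha, beta):
    """Checks both conditions: every cluster has strong diameter <= alpha, and
    `coloring` (cluster index -> color in 0..beta-1) properly colors the supergraph."""
    if any(strong_diameter(adj, c) > alpha for c in clusters):
        return False
    where = {v: i for i, c in enumerate(clusters) for v in c}
    for u in adj:
        for v in adj[u]:
            cu, cv = where[u], where[v]
            if cu != cv and coloring[cu] == coloring[cv]:
                return False  # adjacent clusters share a color
    return all(col < beta for col in coloring.values())
```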
Network decomposition is a powerful construct, very useful in distributed
computing and beyond. It was shown by Awerbuch \etal \cite{AGLP89} and
Panconesi and Srinivasan \cite{PS92} that strong $(2^{O(\sqrt{\log n})}, 2^{O(\sqrt{\log n})})$ network decompositions can be computed in
$2^{O(\sqrt{\log n})}$ distributed time. Linial and Saks \cite{LS93} devised an
ingenious randomized algorithm that constructs {\em weak} $(O(\log n), O(\log n))$ network decompositions in $O(\log^2 n)$ time. It was however open till now
whether {\em strong} network decompositions with both parameters in $O(\mathrm{polylog}(n))$ can be constructed in polylogarithmic distributed time.
In this paper we answer this long-standing open question in the affirmative,
and show that strong $(O(\log n), O(\log n))$ network decompositions can be
computed in $O(\log^2 n)$ time. We also present a tradeoff between the parameters
of our network decomposition. Our work is inspired by and relies on the
"shifted shortest path approach", due to Blelloch \etal \cite{BGKMPT11}, and
Miller \etal \cite{MPX13}. These authors developed this approach for PRAM
algorithms for padded partitions. We adapt their approach to network
decompositions in the distributed model of computation.
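For intuition, here is a sequential Python sketch of the exponential-shift clustering idea of \cite{MPX13} (an illustration of the approach, not the distributed algorithm of this paper): every vertex $u$ draws a shift $\delta_u \sim \mathrm{Exp}(\beta)$, and each vertex joins the cluster of the center maximizing $\delta_u - \mathrm{dist}(u, v)$, which one multi-source Dijkstra from a virtual super-source computes.

```python
import heapq, random

def mpx_clusters(adj, beta, seed=0):
    """Shifted-shortest-path clustering: source u starts its BFS at time
    (max_delta - delta_u), so vertex v is claimed by the center maximizing
    delta_u - dist(u, v). Vertex ids are assumed comparable (e.g., ints)."""
    rng = random.Random(seed)
    delta = {u: rng.expovariate(beta) for u in adj}
    top = max(delta.values())
    pq = [(top - delta[u], u, u) for u in adj]  # (start time, vertex, center)
    heapq.heapify(pq)
    dist, owner = {}, {}
    while pq:
        d, u, c = heapq.heappop(pq)
        if u in dist:
            continue  # already claimed by an earlier-arriving cluster
        dist[u], owner[u] = d, c
        for v in adj[u]:
            if v not in dist:
                heapq.heappush(pq, (d + 1, v, c))
    return owner  # vertex -> cluster center; radii are O(log n / beta) w.h.p.
```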
On Derandomizing Local Distributed Algorithms
The gap between the known randomized and deterministic local distributed
algorithms underlies arguably the most fundamental and central open question in
distributed graph algorithms. In this paper, we develop a generic and clean
recipe for derandomizing LOCAL algorithms. We also exhibit how this simple
recipe leads to significant improvements on a number of problems. Two main
results are:
- An improved distributed hypergraph maximal matching algorithm, improving on
Fischer, Ghaffari, and Kuhn [FOCS'17], and giving improved algorithms for
edge-coloring, maximum matching approximation, and low out-degree edge
orientation. The first gives an improved algorithm for Open Problem 11.4 of the
book of Barenboim and Elkin, and the last gives the first positive resolution
of their Open Problem 11.10.
- An improved distributed algorithm for the Lov\'{a}sz Local Lemma, which
gets closer to a conjecture of Chang and Pettie [FOCS'17], and moreover leads
to improved distributed algorithms for problems such as defective coloring and
$k$-SAT.
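The paper's recipe itself is not reproduced here, but the flavor of derandomization it builds on can be seen in the classic method of conditional expectations, sketched below on a toy problem: deterministically finding a cut of size at least $m/2$, the value a uniformly random cut achieves in expectation.

```python
def derandomized_cut(adj):
    """Method of conditional expectations for MAX-CUT: a uniformly random side
    for every vertex cuts each edge with probability 1/2, so some assignment
    cuts >= m/2 edges. Fixing vertices one at a time and never letting the
    conditional expectation drop finds such an assignment deterministically."""
    side = {}
    for u in adj:
        # Edges to already-fixed neighbors are decided now; edges to unfixed
        # neighbors still contribute 1/2 each either way, so compare fixed ones.
        cut_if_zero = sum(1 for v in adj[u] if side.get(v) == 1)
        cut_if_one = sum(1 for v in adj[u] if side.get(v) == 0)
        side[u] = 0 if cut_if_zero >= cut_if_one else 1
    return side  # vertex -> side, cutting at least half of all edges
```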
On Strong Diameter Padded Decompositions
Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma Delta is contained in a single cluster with probability at least e^{-beta * gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee.
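As a toy illustration of the padding guarantee (not the construction of this paper), consider the simplest Delta-bounded partition: the real line cut into length-Delta intervals with a uniformly random shift. A ball of radius gamma*Delta is then contained in a single cluster with probability exactly 1 - 2*gamma (for gamma <= 1/2), which the following Monte Carlo sketch confirms:

```python
import random

def padding_probability(gamma, delta=1.0, trials=100_000, seed=0):
    """Estimates the probability that a ball of radius gamma*delta survives
    (lands inside one cluster) under the randomly shifted interval partition
    [s + i*delta, s + (i+1)*delta) of the real line."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = rng.uniform(0.0, delta)
        # By shift-invariance, center the ball at 0; the nearest interval
        # boundary cuts the ball iff it lies within gamma*delta of the center.
        boundary_dist = min(s, delta - s)
        hits += boundary_dist >= gamma * delta
    return hits / trials

print(padding_probability(0.1))  # ~0.80, matching the 1 - 2*gamma prediction
```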
Previously, it was proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only an O(r^2) padding parameter was known. Furthermore, for the case of a graph G for which the induced shortest-path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known.
We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), O~(ddim))-sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, for the construction of light and sparse spanners, and for path-reporting distance oracles.
Distributed Connectivity Decomposition
We present time-efficient distributed algorithms for decomposing graphs with
large edge or vertex connectivity into multiple spanning or dominating trees,
respectively. As their primary applications, these decompositions allow us to
achieve information flow with size close to the connectivity by parallelizing
it along the trees. More specifically, our distributed decomposition algorithms
are as follows:
(I) A decomposition of each undirected graph with vertex-connectivity $k$
into (fractionally) vertex-disjoint weighted dominating trees with total weight
$\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D + \sqrt{n})$ rounds.
(II) A decomposition of each undirected graph with edge-connectivity $\lambda$
into (fractionally) edge-disjoint weighted spanning trees with total
weight $\lceil \frac{\lambda - 1}{2} \rceil (1 - \varepsilon)$, in $\widetilde{O}(D + \sqrt{n\lambda})$
rounds.
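To make the notion in (II) concrete: a collection of weighted trees is fractionally edge-disjoint if, for every edge, the weights of the trees containing it sum to at most 1. A small Python checker sketch (illustrative, not the authors' algorithm):

```python
from collections import defaultdict

def check_fractional_tree_packing(n, weighted_trees):
    """weighted_trees: list of (weight, edge_list) pairs over vertices 0..n-1.
    Verifies each edge list is a spanning tree (n-1 edges, acyclic, hence
    connected), checks per-edge loads sum to <= 1, and returns the total weight."""
    load = defaultdict(float)
    for w, edges in weighted_trees:
        assert len(edges) == n - 1, "a spanning tree has exactly n-1 edges"
        parent = list(range(n))  # union-find for the acyclicity check
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            ru, rv = find(u), find(v)
            assert ru != rv, "cycle found; not a tree"
            parent[ru] = rv
            load[(min(u, v), max(u, v))] += w
    assert all(l <= 1 + 1e-9 for l in load.values()), "edge fractionally overloaded"
    return sum(w for w, _ in weighted_trees)
```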
We also show round complexity lower bounds of
$\widetilde{\Omega}(D + \sqrt{\frac{n}{k}})$ and
$\widetilde{\Omega}(D + \sqrt{\frac{n}{\lambda}})$ for the above two decompositions,
using techniques of [Das Sarma et al., STOC'11]. Moreover, our
vertex-connectivity decomposition extends to centralized algorithms and
improves the time complexity of [Censor-Hillel et al., SODA'14] from
$O(n^3)$ to near-optimal $\widetilde{O}(m)$.
As corollaries, we also get distributed oblivious routing broadcast with
$O(\log n)$-competitive edge-congestion and $O(\log^2 n)$-competitive
vertex-congestion. Furthermore, the vertex connectivity decomposition leads to
near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized
$O(m)$ and distributed $\widetilde{O}(D + \sqrt{n})$. The former moves
toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$
centralized exact algorithm, while the latter is the first distributed vertex
connectivity approximation.
4.45 Pflops Astrophysical N-Body Simulation on K computer -- The Gravitational Trillion-Body Problem
As an entry for the 2012 Gordon-Bell performance prize, we report performance
results of astrophysical N-body simulations of one trillion particles performed
on the full system of K computer. This is the first gravitational trillion-body
simulation in the world. We describe the scientific motivation, the numerical
algorithm, the parallelization strategy, and the performance analysis. Unlike
many previous Gordon-Bell prize winners that used the tree algorithm for
astrophysical N-body simulations, we used the hybrid TreePM method, which
achieves a similar level of accuracy: the short-range force is calculated by
the tree algorithm, and the long-range force is solved by the particle-mesh algorithm.
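For intuition, the force splitting behind generic TreePM codes can be written in a few lines; the sketch below uses the standard erfc-based Ewald splitting (the specific cutoff kernel used in this paper may differ):

```python
import math

def split_force(r, r_s):
    """TreePM-style splitting of the 1/r^2 two-body force into a short-range
    part (handled by the tree; decays rapidly beyond the scale r_s) and a
    long-range part (handled by the particle-mesh solver). The two parts sum
    exactly to the full Newtonian force; the G*m1*m2 prefactor is omitted."""
    total = 1.0 / r**2
    # Short-range force for the erfc(r / (2*r_s)) / r splitting of the potential.
    short = (math.erfc(r / (2 * r_s))
             + (r / (r_s * math.sqrt(math.pi))) * math.exp(-(r / (2 * r_s))**2)) / r**2
    return short, total - short

s, l = split_force(r=3.0, r_s=1.0)
assert abs(s + l - 1.0 / 9.0) < 1e-12  # the parts recombine to the full force
```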
We developed a highly-tuned gravity kernel for short-range forces, and a novel
communication algorithm for long-range forces. The average performance on 24576
and 82944 nodes of the K computer is 1.53 and 4.45 Pflops, respectively,
corresponding to 49% and 42% of the peak speed. (10 pages, 6 figures;
Proceedings of Supercomputing 2012, http://sc12.supercomputing.org/; Gordon
Bell Prize winner. Additional information: http://www.ccs.tsukuba.ac.jp/CCS/eng/gbp201)
Guest Editorial: Nonlinear Optimization of Communication Systems
Linear programming and other classical optimization techniques have found important applications in communication systems for many decades. Recently, there has been a surge in research activities that utilize the latest developments in nonlinear optimization to tackle a much wider scope of work in the analysis and design of communication systems. These activities involve every “layer” of the protocol stack and the principles of layered network architecture itself, and have made intellectual and practical impacts significantly beyond the established frameworks of optimization of communication systems in the early 1990s. These recent results are driven by new demands in the areas of communications and networking, as well as new tools emerging from optimization theory. Such tools include the powerful theories and highly efficient computational algorithms for nonlinear convex optimization, together with global solution methods and relaxation techniques for nonconvex optimization.
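A canonical example of such a nonlinear convex program is power allocation across parallel channels (water-filling). Below is a minimal sketch using the cvxpy modeling package, with illustrative channel gains and budget (an example of the class of problems the editorial refers to, not taken from it):

```python
import cvxpy as cp
import numpy as np

# Water-filling: allocate a total power budget across n parallel channels with
# gains g to maximize the sum rate sum(log(1 + g_i * p_i)), a textbook
# nonlinear convex program from communication systems.
g = np.array([0.5, 1.0, 2.0, 4.0])   # illustrative channel gains
P_total = 2.0                        # illustrative power budget

p = cp.Variable(len(g), nonneg=True)
objective = cp.Maximize(cp.sum(cp.log(1 + cp.multiply(g, p))))
problem = cp.Problem(objective, [cp.sum(p) <= P_total])
problem.solve()

# The optimum has the water-filling form p_i = max(0, mu - 1/g_i) for some
# water level mu determined by the budget constraint.
print(p.value)
```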
Space and Time Efficient Parallel Graph Decomposition, Clustering, and Diameter Approximation
We develop a novel parallel decomposition strategy for unweighted, undirected
graphs, based on growing disjoint connected clusters from batches of centers
progressively selected from yet uncovered nodes. With respect to similar
previous decompositions, our strategy exercises a tighter control on both the
number of clusters and their maximum radius.
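A minimal sequential Python sketch of this style of batched cluster growing follows (an illustration of the general idea; the batch schedule and radius control of the actual algorithm differ):

```python
import random
from collections import deque

def batched_cluster_growth(adj, batch_size, radius, seed=0):
    """Grows disjoint connected clusters in batches: repeatedly pick a batch of
    centers among still-uncovered nodes, then grow bounded-radius BFS balls
    around them simultaneously, assigning each newly reached uncovered node to
    the first cluster that arrives. Repeats until every node is covered."""
    rng = random.Random(seed)
    owner = {}
    uncovered = set(adj)
    while uncovered:
        centers = rng.sample(sorted(uncovered), min(batch_size, len(uncovered)))
        frontier = deque((c, c, 0) for c in centers)
        for c in centers:
            owner[c] = c
            uncovered.discard(c)
        while frontier:  # level-synchronous multi-source BFS
            u, c, d = frontier.popleft()
            if d == radius:
                continue
            for v in adj[u]:
                if v in uncovered:  # unassigned: join the first cluster to reach it
                    owner[v] = c
                    uncovered.discard(v)
                    frontier.append((v, c, d + 1))
    return owner  # node -> its cluster center; every cluster has radius <= `radius`
```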
We present two important applications of our parallel graph decomposition:
(1) k-center clustering approximation; and (2) diameter approximation. In
both cases, we obtain algorithms which feature a polylogarithmic approximation
factor and are amenable to a distributed implementation that is geared for
massive (long-diameter) graphs. The total space needed for the computation is
linear in the problem size, and the parallel depth is substantially sublinear
in the diameter for graphs with low doubling dimension. To the best of our
knowledge, ours are the first parallel approximations for these problems which
achieve sub-diameter parallel time, for a relevant class of graphs, using only
linear space. Besides the theoretical guarantees, our algorithms allow for a
very simple implementation on clustered architectures: we report on extensive
experiments which demonstrate their effectiveness and efficiency on large
graphs as compared to alternative known approaches.