Near-Optimal Distributed Approximation of Minimum-Weight Connected Dominating Set
This paper presents a near-optimal distributed approximation algorithm for
the minimum-weight connected dominating set (MCDS) problem. The presented
algorithm finds an O(log n)-approximation in Õ(D + √n) rounds,
where D is the network diameter and n is the number of nodes.
MCDS is a classical NP-hard problem and the achieved O(log n) approximation factor
is known to be optimal up to a constant factor, unless P = NP.
Furthermore, the Õ(D + √n) round complexity is known to be
optimal modulo logarithmic factors (for any approximation), following [Das
Sarma et al., STOC'11].
Comment: An extended abstract version of this result appears in the
proceedings of the 41st International Colloquium on Automata, Languages, and
Programming (ICALP 2014).
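As context for the O(log n) approximation factor, here is a minimal centralized Python sketch of the classical greedy dominating-set heuristic, whose logarithmic guarantee comes from the set-cover connection. This is only an illustration: the paper's algorithm is distributed, weighted, and enforces connectivity, none of which this toy version attempts; the adjacency-dict input format is an assumption of the example.

```python
# Centralized greedy sketch of the dominating-set step underlying the
# O(log n) approximation factor for (M)CDS. `graph` maps each node to the
# set of its neighbors (hypothetical input format for this example).

def greedy_dominating_set(graph):
    uncovered = set(graph)        # nodes not yet dominated
    dom = set()
    while uncovered:
        # pick the node dominating the most uncovered nodes (itself + neighbors)
        best = max(graph, key=lambda v: len(({v} | graph[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | graph[best]
    return dom
```

On a star graph the sketch picks the center in one step; in general the greedy choice mirrors the greedy set-cover analysis that yields the logarithmic factor.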
Distributed Edge Connectivity in Sublinear Time
We present the first sublinear-time algorithm for a distributed
message-passing network to compute its edge connectivity λ exactly in
the CONGEST model, as long as there are no parallel edges. Our algorithm takes
Õ(n^{1−1/353} D^{1/353} + n^{1−1/706}) time to compute λ and a
cut of cardinality λ with high probability, where n and D are the
number of nodes and the diameter of the network, respectively, and Õ
hides polylogarithmic factors. This running time is sublinear in n (i.e.
Õ(n^{1−ε})) whenever D is. Previous sublinear-time
distributed algorithms can solve this problem either (i) exactly only when
λ = O(n^{1/8−ε}) [Thurimella PODC'95; Pritchard, Thurimella, ACM
Trans. Algorithms'11; Nanongkai, Su, DISC'14] or (ii) approximately [Ghaffari,
Kuhn, DISC'13; Nanongkai, Su, DISC'14].
To achieve this we develop and combine several new techniques. First, we
design the first distributed algorithm that can compute a k-edge connectivity
certificate for any k = O(n^{1−ε}) in time Õ(√n + D).
Second, we show that by combining the recent distributed expander decomposition
technique of [Chang, Pettie, Zhang, SODA'19] with techniques from the
sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup,
STOC'15], we can decompose the network into a sublinear number of clusters with
small average diameter and without any mincut separating a cluster (except the
`trivial' ones). Finally, by extending the tree packing technique from [Karger
STOC'96], we can find the minimum cut in time proportional to the number of
components. As a byproduct of this technique, we obtain an Õ(n)-time
algorithm for computing exact minimum cut for weighted graphs.
Comment: Accepted at the 51st ACM Symposium on Theory of Computing (STOC 2019).
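To make the first technique concrete, here is a hedged sequential Python sketch of a k-edge-connectivity certificate built as the union of k successively extracted spanning forests (the classical Thurimella/Nagamochi-Ibaraki construction): every cut of value at most k keeps all its edges inside the certificate, so small min cuts survive in the sparse subgraph. The paper's contribution is computing such a certificate fast in CONGEST; this version only illustrates the object itself.

```python
# Sequential sketch of a k-edge-connectivity certificate: the union of k
# edge-disjoint spanning forests preserves every cut of value <= k.

def spanning_forest(nodes, edges):
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return forest

def certificate(nodes, edges, k):
    remaining = list(edges)
    cert = []
    for _ in range(k):                      # k forest-extraction rounds
        f = spanning_forest(nodes, remaining)
        cert += f
        remaining = [e for e in remaining if e not in set(f)]
    return cert                             # at most k * (n - 1) edges
```

For a 4-cycle (edge connectivity 2), one forest drops an edge, but the union of two forests recovers the whole graph, so the min cut is preserved.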
Distributed Approximation Algorithms for Weighted Shortest Paths
A distributed network is modeled by a graph having n nodes (processors) and
diameter D. We study the time complexity of approximating {\em weighted}
(undirected) shortest paths on distributed networks with a {\em
bandwidth restriction} on edges (the standard synchronous CONGEST model). The
question of whether approximation algorithms help speed up the shortest paths
computation (more precisely, distance computation) has been raised since at least
2004 by Elkin (SIGACT News 2004). The unweighted case of this problem is
well-understood, while its weighted counterpart is a fundamental problem in the
area of distributed approximation algorithms and remains widely open. We present
new algorithms for computing both single-source shortest paths (SSSP) and
all-pairs shortest paths (APSP) in the weighted case.
Our main result is an algorithm for SSSP. Previous results are the classic
O(n)-time Bellman-Ford algorithm and an Õ(n^{1/2 + 1/2k} + D)-time
O(k)-approximation algorithm, for any integer
k ≥ 1, which follows from the result of Lenzen and Patt-Shamir (STOC 2013).
(Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use
Õ to hide the poly log n term.) We present an Õ(n^{1/2} D^{1/4} + D)-time
(1 + o(1))-approximation algorithm for SSSP. This
algorithm is {\em sublinear-time} as long as D is sublinear, thus yielding a
sublinear-time algorithm with an almost optimal solution. When D is small, our
running time matches the lower bound of Ω̃(√n + D) by Das Sarma
et al. (SICOMP 2012), which holds even when D = Θ(log n), up to a
poly log n factor.
Comment: Full version of STOC 2014 paper.
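For reference, the O(n)-time baseline mentioned above can be sketched as a round-by-round simulation: in each synchronous round every node announces its current distance estimate to its neighbors, and after at most n − 1 rounds the estimates are exact. This is a minimal simulation of the classic distributed Bellman-Ford scheme, not the paper's sublinear-time algorithm; the adjacency-list format is an assumption of the example.

```python
# Synchronous-round simulation of the classic distributed Bellman-Ford
# baseline: n - 1 rounds suffice for exact single-source distances.

import math

def distributed_bellman_ford(adj, source):
    # adj[v] = list of (neighbor, edge_weight) pairs
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    for _ in range(len(adj) - 1):           # one iteration = one round
        msgs = {v: dist[v] for v in adj}    # snapshot: everyone broadcasts
        for v in adj:
            for u, w in adj[v]:
                dist[v] = min(dist[v], msgs[u] + w)
    return dist
```

The snapshot dictionary `msgs` is what makes the simulation synchronous: all updates in a round use only estimates from the previous round, matching the CONGEST-style round structure.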
Rational Fair Consensus in the GOSSIP Model
The \emph{rational fair consensus problem} can be informally defined as
follows. Consider a network of n (selfish) \emph{rational agents}, each of
them initially supporting a \emph{color} chosen from a finite set Σ.
The goal is to design a protocol that leads the network to a stable
monochromatic configuration (i.e. a consensus) such that the probability that
the winning color is c is equal to the fraction of the agents that initially
support c, for any c ∈ Σ. Furthermore, this fairness property must
be guaranteed (with high probability) even in the presence of any fixed
\emph{coalition} of rational agents that may deviate from the protocol in order
to increase the winning probability of their supported colors. A protocol
having this property, in the presence of coalitions of size at most t, is said to
be a \emph{whp-t-strong equilibrium}. We investigate, for the first time,
the rational fair consensus problem in the GOSSIP communication model where, at
every round, every agent can actively contact at most one neighbor via a
\emph{push-pull} operation. We provide a randomized GOSSIP protocol that,
starting from any initial color configuration of the complete graph, achieves
rational fair consensus within O(log n) rounds using messages of
O(log n) size, w.h.p. In more detail, we prove that our protocol is a
whp-t-strong equilibrium for any t = o(n / log n) and, moreover, it
tolerates worst-case permanent faults provided that the number of non-faulty
agents is Ω(n). As far as we know, our protocol is the first solution
which avoids any all-to-all communication, thus resulting in o(n²) message
complexity.
Comment: Accepted at IPDPS'17.
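To illustrate the fairness property in isolation, here is a toy Python simulation of classical voter dynamics on the complete graph: one agent at a time copies the color of a uniformly random agent, and a standard martingale argument gives that each color wins with probability equal to its initial fraction. This sketch is not the paper's protocol: it offers no resistance to rational coalitions or faults, and it activates one agent per step rather than all agents per gossip round.

```python
# Toy voter-dynamics simulation on the complete graph, illustrating only
# the fairness property (winning probability = initial fraction of a color).

import random

def voter_consensus(colors, rng=random):
    colors = list(colors)
    n = len(colors)
    steps = 0
    while len(set(colors)) > 1:
        i = rng.randrange(n)        # agent activated in this step
        j = rng.randrange(n)        # uniformly random contacted agent
        colors[i] = colors[j]       # pull the contacted agent's color
        steps += 1
    return colors[0], steps
```

Running many trials from an initial split of, say, one-third 'a' and two-thirds 'b' empirically shows 'a' winning in roughly one-third of the runs, which is the fairness guarantee the problem statement formalizes.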
Beyond Geometry: Towards Fully Realistic Wireless Models
Signal-strength models of wireless communications capture the gradual fading
of signals and the additivity of interference. As such, they are closer to
reality than other models. However, nearly all theoretic work in the SINR model
depends on the assumption of smooth geometric decay, one that is true in free
space but is far off in actual environments. The challenge is to model
realistic environments, including walls, obstacles, reflections and anisotropic
antennas, without making the models algorithmically impractical or analytically
intractable.
We present a simple solution that allows the modeling of arbitrary static
situations by moving from geometry to arbitrary decay spaces. The complexity of
a setting is captured by a metricity parameter Z that indicates how far the
decay space is from satisfying the triangular inequality. All results that hold
in the SINR model in general metrics carry over to decay spaces, with the
resulting time complexity and approximation depending on Z in the same way that
the original results depend on the path-loss term alpha. For distributed
algorithms, which to date have appeared to depend necessarily on planarity,
we indicate how they can be adapted to arbitrary decay spaces.
Finally, we explore the dependence on Z in the approximability of core
problems. In particular, we observe that the capacity maximization problem has
exponential upper and lower bounds in terms of Z in general decay spaces. In
Euclidean metrics and related growth-bounded decay spaces, the performance
depends on the exact metricity definition, with a polynomial upper bound in
terms of Z, but an exponential lower bound in terms of a variant parameter phi.
On the plane, the upper bound result actually yields the first approximation of
a capacity-type SINR problem that is subexponential in alpha.
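The metricity parameter admits a simple finite-space illustration: given a symmetric matrix of pairwise decay "distances", one natural formalization is the smallest Z with d(a, c) ≤ Z · (d(a, b) + d(b, c)) for all triples, so that a true metric has Z = 1. The sketch below uses this relaxed-triangle-inequality definition; the paper's exact definition of Z (and of the variant parameter phi) may differ.

```python
# Illustrative relaxed-triangle-inequality parameter for a finite decay
# space given as a symmetric distance matrix: smallest Z such that
# d(a, c) <= Z * (d(a, b) + d(b, c)) for all ordered triples.

from itertools import permutations

def metricity(d):
    n = len(d)
    z = 1.0                                  # a genuine metric has Z = 1
    for a, b, c in permutations(range(n), 3):
        if d[a][b] + d[b][c] > 0:            # guard against zero distances
            z = max(z, d[a][c] / (d[a][b] + d[b][c]))
    return z
```

Three collinear points give Z = 1, while inflating one pairwise distance (as a wall between two nodes would) pushes Z above 1, which is exactly the kind of deviation from geometry the decay-space framework quantifies.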
An Improved Distributed Algorithm for Maximal Independent Set
The Maximal Independent Set (MIS) problem is one of the basics in the study
of locality in distributed graph algorithms. This paper presents an extremely
simple randomized algorithm providing a near-optimal local complexity for this
problem, which incidentally, when combined with some recent techniques, also
leads to a near-optimal global complexity.
Classical algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86]
provide the global complexity guarantee that, with high probability, all nodes
terminate after O(log n) rounds. In contrast, our initial focus is on the
local complexity, and our main contribution is to provide a very simple
algorithm guaranteeing that each particular node v terminates after
O(log deg(v) + log 1/ε) rounds, with probability at least
1 − ε. The guarantee holds even if the randomness outside the 2-hops
neighborhood of v is determined adversarially. This degree-dependency is
optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04].
Interestingly, this local complexity smoothly transitions to a global
complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider
[FOCS'12, arXiv:1202.1983v3], we get a randomized MIS algorithm with a high
probability global complexity of O(log Δ) + 2^O(√(log log n)),
where Δ denotes the maximum degree. This improves over the
O(log² Δ) + 2^O(√(log log n)) result of Barenboim et al., and gets close
to the Ω(min{log Δ / log log Δ, √(log n / log log n)}) lower bound of Kuhn et al.
Corollaries include improved algorithms for MIS in graphs of upper-bounded
arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local
Computation Algorithms (LCA) model, and a faster distributed algorithm for the
Lovász Local Lemma.
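As background for the O(log n)-round baseline, here is a synchronous-round simulation of the random-priority variant of Luby's algorithm: in each round every undecided node draws a fresh random priority and joins the MIS if its priority beats all undecided neighbors, after which it and its neighbors retire. This illustrates only the classical baseline, not the refined per-node guarantee of the paper; sets as adjacency values are an assumption of the example.

```python
# Synchronous-round simulation of Luby's randomized MIS algorithm
# (random-priority variant). adj[v] is the set of neighbors of v.

import random

def luby_mis(adj, rng=random):
    alive = set(adj)                 # nodes still undecided
    mis = set()
    while alive:
        r = {v: rng.random() for v in alive}       # fresh priorities
        winners = {v for v in alive
                   if all(r[v] < r[u] for u in adj[v] & alive)}
        mis |= winners                             # local minima join the MIS
        removed = winners | {u for w in winners for u in adj[w]}
        alive -= removed                           # winners and neighbors retire
    return mis
```

Each round removes at least the globally minimal-priority node and its neighborhood, so the loop terminates; the output is independent by construction and maximal because a node only retires when it or a neighbor joins the MIS.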