Optimal Dynamic Distributed MIS
Finding a maximal independent set (MIS) in a graph is a cornerstone task in
distributed computing. The local nature of an MIS allows for fast solutions in
a static distributed setting, which are logarithmic in the number of nodes or
in their degrees. These results carry over trivially to the dynamic distributed
model, in which edges or nodes may be inserted or deleted. In this paper, we
take a different approach which exploits locality to the extreme, and show how
to update an MIS in a dynamic distributed setting, either \emph{synchronous} or
\emph{asynchronous}, with only \emph{a single adjustment} and in a single
round, in expectation. These strong guarantees hold for the \emph{complete
fully dynamic} setting: insertions and deletions of edges as well as nodes,
whether graceful or abrupt. This strongly separates the static and dynamic
distributed models, as super-constant lower bounds exist for computing an MIS
in the former.
Our results are obtained by a novel analysis of the surprisingly simple
solution of carefully simulating the greedy \emph{sequential} MIS algorithm
with a random ordering of the nodes. As such, our algorithm has a direct
application as a 3-approximation algorithm for correlation clustering. This
adds to the important toolbox of distributed graph decompositions, which are
widely used as crucial building blocks in distributed computing.
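The greedy sequential baseline that the algorithm simulates can be sketched in a few lines; the adjacency-set representation and the function name below are illustrative assumptions, not the paper's notation:

```python
import random

def greedy_random_order_mis(adj):
    """Greedy MIS over a uniformly random ordering of the nodes.

    `adj` maps each node to the set of its neighbors (an assumed
    representation); the paper's dynamic algorithm maintains the output
    of exactly this sequential process under topology changes.
    """
    order = list(adj)
    random.shuffle(order)          # the random node ordering
    in_mis = set()
    for v in order:
        # v joins the MIS iff no neighbor earlier in the order joined
        if not any(u in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis
```

Because the ordering is drawn independently of how the graph was built, the resulting set depends only on the current topology, which is the history-independence property discussed below.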
Finally, our algorithm enjoys a useful \emph{history-independence} property,
meaning the output is independent of the history of topology changes that
constructed that graph. This means the output cannot be chosen, or even biased,
by the adversary in case its goal is to prevent us from optimizing some
objective function.
Comment: 19 pages including appendix and references
On the Complexity of Case-Based Planning
We analyze the computational complexity of problems related to case-based
planning: planning when a plan for a similar instance is known, and planning
from a library of plans. We prove that planning from a single case has the same
complexity as generative planning (i.e., planning "from scratch"); under an
extended definition of cases, the complexity is reduced if the domain stored in
the case is similar to the one in which plans are sought. Planning from a
library of cases is shown to have the same complexity. In both cases, the
complexity of planning remains, in the worst case, PSPACE-complete.
Time-Space Tradeoffs for the Memory Game
A single-player game of Memory is played with $n$ distinct pairs of cards,
with the cards in each pair bearing identical pictures. The cards are laid
face-down. A move consists of revealing two cards, chosen adaptively. If these
cards match, i.e., they bear the same picture, they are removed from play;
otherwise, they are turned back to face down. The object of the game is to
clear all cards while minimizing the number of moves. Past works have
thoroughly studied the expected number of moves required, assuming optimal play
by a player that has perfect memory. In this work, we study the Memory game
in a space-bounded setting.
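As a point of reference for the rules above, a simple perfect-memory strategy can be simulated directly. This is an illustrative sketch, not one of the space-bounded strategies the paper analyzes, and all names below are assumptions:

```python
import random

def play_memory(n, seed=0):
    """Simulate Memory with n pairs of cards under a simple perfect-memory
    strategy, and return the number of moves used to clear all cards."""
    rng = random.Random(seed)
    cards = list(range(n)) * 2          # picture on each of the 2n positions
    rng.shuffle(cards)
    unseen = list(range(2 * n))         # positions never revealed, left to right
    known = {}                          # picture -> position of a seen, unmatched card
    pending = None                      # a fully known pair, to be cleared next move
    cleared = moves = 0
    while cleared < n:
        moves += 1
        if pending is not None:         # both positions known: match them now
            pending = None
            cleared += 1
            continue
        a = unseen.pop(0)               # first reveal: next unseen card
        if cards[a] in known:           # its partner was seen earlier: match
            del known[cards[a]]
            cleared += 1
            continue
        b = unseen.pop(0)               # second reveal: another unseen card
        if cards[b] == cards[a]:        # lucky immediate match
            cleared += 1
        elif cards[b] in known:         # b completes a remembered pair
            pending = (known.pop(cards[b]), b)
            known[cards[a]] = a
        else:                           # no match: remember both cards
            known[cards[a]] = a
            known[cards[b]] = b
    return moves
```

Each loop iteration is one move: it either clears a pair or reveals two new cards, so this strategy always finishes within $2n$ moves.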
We prove two time-space tradeoff lower bounds on algorithms (strategies for
the player) that clear all cards in $T$ moves while using at most $S$ bits of
memory. First, in a simple model where the pictures on the cards may only be
compared for equality, we prove that . This is tight:
it is easy to achieve essentially everywhere on this
tradeoff curve. Second, in a more general model that allows arbitrary
computations, we prove that . We prove this latter tradeoff
by modeling strategies as branching programs and extending a classic counting
argument of Borodin and Cook with a novel probabilistic argument. We conjecture
that the stronger tradeoff in fact holds even in
this general model.