Balanced Allocations: A Simple Proof for the Heavily Loaded Case
We provide a relatively simple proof that the expected gap between the
maximum load and the average load in the two-choice process is bounded by
log log n + O(1), irrespective of the number of balls thrown. The theorem
was first proven by Berenbrink et al. Their proof uses heavy machinery from
Markov chain theory, and some of the calculations are done using computers. In
this manuscript we provide a significantly simpler proof that is not aided by
computers and is self-contained. The simplification comes at the cost of weaker
bounds on the low-order terms and a weaker tail bound on the probability of
deviating from the expectation.
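The two-choice process is easy to simulate. The sketch below is an illustration only (the function `two_choice_gap` and its parameters are ours, not the paper's): it throws balls into bins, placing each ball in the lighter of two uniformly sampled bins, and reports the gap between the maximum and the average load.

```python
import random

def two_choice_gap(n_bins: int, n_balls: int, seed: int = 0) -> int:
    """Throw n_balls into n_bins; each ball joins the lighter of two
    uniformly random bins. Return max load minus average load."""
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_balls):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        # place the ball in the less loaded of the two sampled bins
        load[min(i, j, key=lambda b: load[b])] += 1
    return max(load) - n_balls // n_bins
```

Even for many more balls than bins, the observed gap stays small, in line with the log log n + O(1) bound.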
Parallel Load Balancing on Constrained Client-Server Topologies
We study parallel \emph{Load Balancing} protocols for a client-server
distributed model defined as follows.
There is a set C of clients and a set S of servers, where each client has
(at most) a constant number of requests that must be assigned to
some server. The client set and the server set are connected to each other via
a fixed bipartite graph: the requests of a client can only be sent to the
servers in its neighborhood. The goal is to assign every client request
so as to minimize the maximum load of the servers.
In this setting, efficient parallel protocols are available only for dense
topologies. In particular, a simple symmetric, non-adaptive protocol achieving
constant maximum load has been recently introduced by Becchetti et al.
\cite{BCNPT18} for regular dense bipartite graphs. The parallel completion time
is O(log n) and the overall work is O(n), w.h.p.
Motivated by proximity constraints arising in some client-server systems, we
devise a simple variant of Becchetti et al.'s protocol \cite{BCNPT18} and we
analyse it over almost-regular bipartite graphs where nodes may have
neighborhoods of small size. In detail, we prove that, w.h.p., this new version
has a cost equivalent to that of Becchetti et al.'s protocol (in terms of
maximum load, completion time, and work complexity, respectively) on every
almost-regular bipartite graph with degree .
Our analysis significantly departs from that in \cite{BCNPT18} for the
original protocol and requires coping with non-trivial stochastic-dependence
issues in the random choices of the algorithmic process, which arise from the
worst-case, sparse topology of the underlying graph.
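The abstract does not spell out the protocol, so the following is a hypothetical round-based sketch of the general threshold idea (`parallel_assign` and all of its details are our invention, not the protocol of \cite{BCNPT18}): each unassigned client contacts one uniformly random neighbouring server, and a server admits at most a fixed number of new requests per round.

```python
import random

def parallel_assign(neighbors, threshold, rounds, seed=0):
    """Toy round-based assignment sketch: neighbors[c] lists the servers
    adjacent to client c in the bipartite graph. Each round, every
    unassigned client bids at one random neighbouring server; a server
    admits at most `threshold` new requests per round (arbitrary
    tie-breaking) and the rest retry. Returns (server loads, leftovers)."""
    rng = random.Random(seed)
    unassigned = set(range(len(neighbors)))
    load = {}
    for _ in range(rounds):
        if not unassigned:
            break
        bids = {}
        for c in sorted(unassigned):          # sorted for reproducibility
            s = rng.choice(neighbors[c])
            bids.setdefault(s, []).append(c)
        for s, clients in bids.items():
            for c in clients[:threshold]:     # admit up to the threshold
                load[s] = load.get(s, 0) + 1
                unassigned.discard(c)
    return load, unassigned
```

On dense neighbourhoods most clients succeed within a few rounds; the paper's contribution is precisely that a variant of this idea still works when neighbourhoods are small.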
Random walks which prefer unvisited edges: exploring high-girth even-degree expanders in linear time
Let G = (V,E) be a connected graph with |V| = n vertices. A simple random walk on the vertex set of G is a process which at each step moves from its current vertex to a neighbouring vertex chosen uniformly at random. We consider a modified walk which, whenever possible, chooses an unvisited edge for the next transition, and makes a simple random walk step otherwise. We call such a walk an edge-process (or E-process). The rule used to choose among unvisited edges at any step has no effect on our analysis. One possible method is to choose an unvisited edge uniformly at random, but we impose no such restriction. For the class of connected even-degree graphs of constant maximum degree, we bound the vertex cover time of the E-process in terms of the edge expansion rate of the graph G, as measured by the eigenvalue gap 1 − λmax of the transition matrix of a simple random walk on G. A vertex v is ℓ-good if any even-degree subgraph containing all edges incident with v contains at least ℓ vertices. A graph G is ℓ-good if every vertex has the ℓ-good property. Let G be an even-degree ℓ-good expander of bounded maximum degree. Any E-process on G has vertex cover time bounded in terms of n, ℓ, and the eigenvalue gap.
This is to be compared with the Ω(n log n) lower bound on the cover time of any connected graph by a weighted random walk. Our result is independent of the rule used to select the order of the unvisited edges, which could, for example, be chosen on-line by an adversary. © 2013 Wiley Periodicals, Inc. Random Struct. Alg., 00, 000–000, 2013
As no walk-based process can cover an n-vertex graph in fewer than n − 1 steps, the cover time of the E-process is of optimal order when ℓ = Θ(log n). With high probability, random r-regular graphs, r ≥ 4 even, have ℓ = Ω(log n). Thus the vertex cover time of the E-process on such graphs is Θ(n).
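A minimal simulation of the E-process, using uniform sampling as the (freely choosable) rule among unvisited edges; the function name and interface are illustrative, not from the paper.

```python
import random

def e_process_cover_time(adj, start=0, seed=0, max_steps=10**6):
    """E-process on an undirected graph given as {vertex: [neighbours]}:
    from the current vertex, cross a uniformly random unvisited incident
    edge if one exists, otherwise take a simple-random-walk step.
    Returns the number of steps until every vertex has been visited."""
    rng = random.Random(seed)
    seen_edges = set()
    seen_vertices = {start}
    v, steps = start, 0
    while len(seen_vertices) < len(adj) and steps < max_steps:
        fresh = [u for u in adj[v] if frozenset((v, u)) not in seen_edges]
        u = rng.choice(fresh) if fresh else rng.choice(adj[v])
        seen_edges.add(frozenset((v, u)))
        seen_vertices.add(u)
        v = u
        steps += 1
    return steps
```

On a cycle (an even-degree graph), the preference for unvisited edges forces the walk straight around, so it covers all n vertices in exactly n − 1 steps, matching the walk-based lower bound.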
Utilitarian resource assignment
This paper studies a resource allocation problem introduced by Koutsoupias
and Papadimitriou. The scenario is modelled as a multiple-player game in which
each player selects one of a finite number of known resources. The cost to the
player is the total weight of all players who choose that resource, multiplied
by the ``delay'' of that resource. Recent papers have studied the Nash
equilibria and social optima of this game in terms of the cost
metric, in which the social cost is taken to be the maximum cost to any player.
We study the variant of this game, in which the social cost is taken to
be the sum of the costs to the individual players, rather than the maximum of
these costs. We give bounds on the size of the coordination ratio, which is the
ratio between the social cost incurred by selfish behavior and the optimal
social cost; we also study the algorithmic problem of finding optimal
(lowest-cost) assignments and Nash equilibria. Additionally, we obtain bounds
on the ratio between alternative Nash equilibria for some special cases of the
problem.

Comment: 19 pages
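The cost structure above is concrete enough to compute directly. The small sketch below (function names are ours) evaluates the utilitarian social cost of an assignment, where each player pays the total weight on its chosen resource times that resource's delay, and brute-forces the optimum on tiny instances.

```python
from itertools import product

def social_cost(weights, delays, choice):
    """Utilitarian (sum) social cost: player i pays
    (total weight on its chosen resource) * (that resource's delay);
    choice[i] is the resource index selected by player i."""
    load = [0.0] * len(delays)
    for w, r in zip(weights, choice):
        load[r] += w
    return sum(load[r] * delays[r] for r in choice)

def optimal_cost(weights, delays):
    """Brute-force optimum over all assignments; exponential in the
    number of players, so suitable for tiny instances only."""
    return min(social_cost(weights, delays, c)
               for c in product(range(len(delays)), repeat=len(weights)))
```

For example, with weights (1, 2, 1), delays (1.0, 2.0), and the first two players on resource 0, players 0 and 1 each pay 3·1 and player 2 pays 1·2, so the social cost is 8.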
Tight Load Balancing via Randomized Local Search
We consider the following balls-into-bins process with bins and balls: the
balls are equipped with mutually independent exponential clocks of rate 1.
Whenever a ball's clock rings, the ball samples a bin uniformly at random and
moves there if the number of balls in the sampled bin is smaller than in its
current bin. This simple process models a typical load balancing problem where
users (balls) seek a selfish improvement of their assignment to resources (bins).
From a game theoretic perspective, this is a randomized approach to the
well-known Koutsoupias-Papadimitriou model, while it is known as randomized
local search (RLS) in load balancing literature. Up to now, the best bound on
the expected time to reach perfect balance was due to Ganesh, Lilienthal, Manjunath, Proutiere, and Simatos
(Load balancing via random local search in closed and open systems, Queueing
Systems, 2012). We improve this to an asymptotically tight bound. Our analysis
is based on the crucial observation
that performing "destructive moves" (reversals of RLS moves) cannot decrease
the balancing time. This allows us to simplify problem instances and to ignore
"inconvenient moves" in the analysis.

Comment: 24 pages, 3 figures; a preliminary version appeared in the proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS'17).
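A discrete-event sketch of RLS follows (our own simplification, with illustrative names): since the exponential clocks are i.i.d., at each ring a uniformly random ball activates, samples a uniformly random bin, and moves iff that bin holds strictly fewer balls.

```python
import random

def rls_balance(n_bins: int, n_balls: int, seed: int = 0,
                max_events: int = 10**6):
    """Run randomized local search from a uniformly random placement
    until the load vector is perfectly balanced (max - min <= 1).
    Returns (number of activations, final load vector)."""
    rng = random.Random(seed)
    bin_of = [rng.randrange(n_bins) for _ in range(n_balls)]
    load = [0] * n_bins
    for b in bin_of:
        load[b] += 1
    events = 0
    while max(load) - min(load) > 1 and events < max_events:
        ball = rng.randrange(n_balls)           # the ball whose clock rings
        dest = rng.randrange(n_bins)            # the sampled bin
        if load[dest] < load[bin_of[ball]]:     # move only on strict improvement
            load[bin_of[ball]] -= 1
            load[dest] += 1
            bin_of[ball] = dest
        events += 1
    return events, load
```

Because every executed move strictly improves the moving ball's load, the process cannot cycle and converges to a balanced state.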
Palindrome Recognition In The Streaming Model
In the Palindrome Problem one tries to find all palindromes (palindromic
substrings) in a given string. A palindrome is defined as a string which reads
forwards the same as backwards, e.g., the string "racecar". A related problem
is the Longest Palindromic Substring Problem in which finding an arbitrary one
of the longest palindromes in the given string suffices. We consider the
streaming version of both problems. In the streaming model the input arrives
over time and at every point in time we are only allowed to use sublinear
space. The main algorithms in this paper are the following. The first is a
one-pass randomized algorithm that solves the Palindrome Problem; it has an
additive error and uses sublinear space. The second is a two-pass algorithm
which determines the exact locations of all longest palindromes; it uses the
first algorithm as its first pass. The third is again a one-pass randomized
algorithm, which solves the Longest Palindromic Substring Problem; it has a
multiplicative error and likewise uses sublinear space. We also give two
variants of the first algorithm which solve other related practical problems.
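For contrast with the sublinear-space streaming setting, a simple exact offline baseline (not one of the paper's algorithms; names are ours) finds all maximal palindromic substrings by centre expansion in O(n²) time and linear space.

```python
def all_palindromes(s: str, min_len: int = 2):
    """Return (start, length) of the maximal palindrome around every
    centre (both odd and even centres), keeping those of length >= min_len."""
    found = []
    for centre in range(2 * len(s) - 1):
        # even centre index -> odd-length palindrome, odd -> even-length
        lo, hi = centre // 2, centre // 2 + centre % 2
        while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
            lo, hi = lo - 1, hi + 1
        length = hi - lo - 1
        if length >= min_len:
            found.append((lo + 1, length))
    return found
```

On "racecar" the only maximal palindrome of length at least 2 is the whole string; a streaming algorithm must approximate this kind of output without storing the string.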
Bounds on the Voter Model in Dynamic Networks
In the voter model, each node of a graph has an opinion, and in every round
each node chooses independently a random neighbour and adopts its opinion. We
are interested in the consensus time, which is the first point in time where
all nodes have the same opinion. We consider dynamic graphs in which the edges
are rewired in every round (by an adversary), giving rise to the graph sequence
, where we assume that has conductance at least
. We assume that the degrees of nodes do not change over time, as one can
show that the consensus time can become super-exponential otherwise. In the
case of a sequence of -regular graphs, we obtain asymptotically tight
results. Even for some static graphs, such as the cycle, our results improve
the state of the art. Here we show that the expected number of rounds until all
nodes have the same opinion is bounded by , for any
graph with edges, conductance , and degrees at least . In
addition, we consider a biased dynamic voter model, where each opinion is
associated with a probability , and when a node chooses a neighbour with
that opinion, it adopts that opinion with probability (otherwise the node
keeps its current opinion). We show, for any regular dynamic graph, that if
there is an difference between the highest and second-highest
opinion probabilities, and at least nodes initially hold the
opinion with the highest probability, then w.h.p. all nodes adopt that opinion.
We obtain a bound on the convergence time, which becomes for
static graphs.
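The unbiased round-based dynamics are easy to simulate on a static graph; the helper below (illustrative names, not from the paper) runs the synchronous voter model, in which every node simultaneously copies the opinion of a uniformly random neighbour, and returns the first round at which all opinions agree.

```python
import random

def voter_consensus_time(adj, opinions, seed=0, max_rounds=10**5):
    """Synchronous voter model on a static graph: adj[v] lists v's
    neighbours; opinions[v] is v's initial opinion. Each round, every
    node adopts the opinion of a uniformly random neighbour (all updates
    read the previous round's opinions). Returns the consensus round."""
    rng = random.Random(seed)
    ops = list(opinions)
    for t in range(1, max_rounds + 1):
        # build the new opinion vector from the old one
        ops = [ops[rng.choice(adj[v])] for v in range(len(adj))]
        if len(set(ops)) == 1:
            return t
    return max_rounds
```

On well-connected graphs such as the complete graph, consensus from fully distinct opinions is reached quickly; the paper's point is to bound this time in terms of conductance even when an adversary rewires the edges every round.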
Parallel Balanced Allocations: The Heavily Loaded Case
We study parallel algorithms for the classical balls-into-bins problem, in
which balls acting in parallel as separate agents are placed into bins.
Algorithms operate in synchronous rounds, in each of which balls and bins
exchange messages once. The goal is to minimize the maximal load over all bins
using a small number of rounds and few messages.
While the case of balls has been extensively studied, little is known
about the heavily loaded case. In this work, we consider parallel algorithms
for this somewhat neglected regime of . The naive solution of
allocating each ball to a bin chosen uniformly and independently at random
results in maximal load (for ) w.h.p. In contrast, for the sequential setting,
Berenbrink et al. (SIAM J. Comput. 2006) showed that letting each ball join the
least loaded of two randomly selected bins reduces the maximal load to w.h.p.
To date, no parallel variant of such a result is known.
We present a simple parallel threshold algorithm that obtains a maximal load
of w.h.p. within rounds. The algorithm
is symmetric (balls and bins all "look the same"), and balls send
messages in expectation per round. The additive term of in the
complexity is known to be tight for such algorithms (Lenzen and Wattenhofer,
Distributed Computing 2016). We also prove that our analysis is tight, i.e.,
algorithms of the type we provide must run for rounds w.h.p.
Finally, we give a simple asymmetric algorithm (i.e., balls are aware of a
common labeling of the bins) that achieves a maximal load of in a
constant number of rounds w.h.p. Again, balls send only a single message per
round, and bins receive messages w.h.p.
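The abstract leaves the threshold algorithm's details open, so the following is a hypothetical sketch of a symmetric scheme in the same spirit (all names and parameters are ours, not the paper's algorithm): in each round, every unplaced ball contacts one uniformly random bin, and a bin accepts arrivals only while its load is below a fixed threshold; rejected balls retry in the next round.

```python
import random

def threshold_rounds(n_bins, n_balls, threshold, max_rounds, seed=0):
    """Round-based threshold sketch: each round, every unplaced ball
    contacts one uniformly random bin; a bin admits arrivals only up to
    `threshold` total load and rejects the rest, which retry next round.
    Returns (rounds used, final load vector)."""
    rng = random.Random(seed)
    load = [0] * n_bins
    unplaced = n_balls
    for r in range(1, max_rounds + 1):
        arrivals = [0] * n_bins
        for _ in range(unplaced):
            arrivals[rng.randrange(n_bins)] += 1
        unplaced = 0
        for b in range(n_bins):
            accepted = min(arrivals[b], max(0, threshold - load[b]))
            load[b] += accepted
            unplaced += arrivals[b] - accepted
        if unplaced == 0:
            return r, load
    return max_rounds, load
```

With slack between the total capacity and the number of balls, almost all balls land within the first few rounds, which is the intuition behind the round bounds discussed above.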