Locality in Online, Dynamic, Sequential, and Distributed Graph Algorithms
In this work, we give a unifying view of locality in four settings: distributed algorithms, sequential greedy algorithms, dynamic algorithms, and online algorithms. We introduce a new model of computing, called the online-LOCAL model: the adversary presents the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each node we get to see its radius-T neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space.
We compare the online-LOCAL model with three other models: the LOCAL model of distributed computing, where each node produces its output based on its radius-T neighborhood; the SLOCAL model, which is the sequential counterpart of LOCAL; and the dynamic-LOCAL model, where changes in the dynamic input graph only influence the radius-T neighborhood of the point of change.
The SLOCAL and dynamic-LOCAL models are sandwiched between the LOCAL and online-LOCAL models. In general, all four models are distinct, but we study in particular locally checkable labeling problems (LCLs), a family of graph problems extensively studied in the context of distributed graph algorithms. We prove that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class (O(log* n), Θ(log n), or n^{Θ(1)}) in all four models. In particular, this result enables one to generalize prior lower-bound results from the LOCAL model to all four models, and it also allows one to simulate, e.g., dynamic-LOCAL algorithms efficiently in the LOCAL model.
We also show that this equivalence does not hold in two-dimensional grids or general bipartite graphs. We provide an online-LOCAL algorithm with locality O(log n) for the 3-coloring problem in bipartite graphs; this is a problem with locality Ω(n^{1/2}) in the LOCAL model and Ω(n^{1/10}) in the SLOCAL model.
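To make the model concrete, the following is a minimal Python sketch of the online-LOCAL interaction; the adjacency-dict input and the algorithm callback interface are illustrative assumptions, not details from the paper.

    from collections import deque

    def radius_T_view(adj, v, T):
        """BFS out to distance T from v: the part of the graph the
        algorithm gets to see before committing to an output for v."""
        seen, frontier = {v}, deque([(v, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == T:
                continue
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
        return seen

    def run_online_local(adj, order, T, algorithm):
        """The adversary presents nodes one by one (in `order`); for
        each node, the algorithm sees its radius-T neighborhood and all
        outputs committed so far, then fixes an irrevocable output."""
        outputs = {}
        for v in order:
            outputs[v] = algorithm(v, radius_T_view(adj, v, T), outputs)
        return outputs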
On Conceptually Simple Algorithms for Variants of Online Bipartite Matching
We present a series of results regarding conceptually simple algorithms for
bipartite matching in various online and related models. We first consider a
deterministic adversarial model. The best approximation ratio possible for a one-pass deterministic online algorithm is 1/2, which is achieved by any greedy algorithm. Dürr et al. recently presented a 2-pass algorithm called Category-Advice that achieves approximation ratio 3/5. We extend their
algorithm to multiple passes. We prove the exact approximation ratio for the k-pass Category-Advice algorithm for all k ≥ 1, and show that the approximation ratio converges to the inverse of the golden ratio, 2/(1 + √5) ≈ 0.618, as k goes to infinity. The convergence is extremely fast: the 5-pass Category-Advice algorithm is already within 0.01% of the inverse of the golden ratio.
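As a rough illustration, here is a Python sketch of the multi-pass scheme under one reading of the abstract: the greedy matching from pass i serves as the category advice for pass i+1, with arriving vertices preferring offline neighbors that the previous pass matched. The data layout and the exact preference rule are assumptions, not details from the paper.

    def greedy_pass(online_order, neighbors, preferred):
        """One greedy pass: each arriving online vertex takes a free
        offline neighbor, preferring vertices in `preferred`."""
        matched_offline, matching = set(), {}
        for u in online_order:
            free = [v for v in neighbors[u] if v not in matched_offline]
            if free:
                choice = next((v for v in free if v in preferred), free[0])
                matching[u] = choice
                matched_offline.add(choice)
        return matching

    def category_advice(online_order, neighbors, passes=2):
        """Multi-pass Category-Advice sketch: each pass prefers the
        offline vertices matched by the previous pass (assumption)."""
        preferred, matching = set(), {}
        for _ in range(passes):
            matching = greedy_pass(online_order, neighbors, preferred)
            preferred = set(matching.values())  # advice for the next pass
        return matching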
We then consider a natural greedy algorithm in the online stochastic IID model, MinDegree. This algorithm is an online version of a well-known and extensively studied offline algorithm, MinGreedy. We show that MinDegree cannot achieve an approximation ratio better than 1 - 1/e, which is guaranteed by any consistent greedy algorithm in the known IID model.
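A minimal sketch of one natural reading of MinDegree in the known-IID setting, where the type graph (and hence each offline vertex's degree) is known up front; the degree notion and tie-breaking here are illustrative assumptions.

    def min_degree_matching(arrival_types, type_neighbors, degree):
        """Each arriving online vertex (drawn IID from known types) is
        matched to a free offline neighbor of minimum degree."""
        free_offline = set(degree)
        matching = []
        for i, t in enumerate(arrival_types):
            candidates = [v for v in type_neighbors[t] if v in free_offline]
            if candidates:
                v = min(candidates, key=degree.get)  # min-degree choice
                matching.append((i, v))
                free_offline.remove(v)
        return matching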
Finally, following the work of Besser and Poloczek, we depart from an adversarial or stochastic ordering and investigate a natural randomized algorithm (MinRanking) in the priority model. Although the priority model allows the algorithm to choose the input ordering in a general but well-defined way, this natural algorithm cannot obtain the approximation ratio of the Ranking algorithm in the ROM model.
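For reference, the classic Ranking algorithm of Karp, Vazirani, and Vazirani admits a short standard sketch; the variable names here are illustrative.

    import random

    def ranking(online_order, neighbors, offline):
        """Ranking: fix a uniformly random ranking of the offline
        vertices; each arriving online vertex is matched to its free
        neighbor of best (lowest) rank."""
        rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
        free, matching = set(offline), {}
        for u in online_order:
            candidates = [v for v in neighbors[u] if v in free]
            if candidates:
                v = min(candidates, key=rank.get)
                matching[u] = v
                free.remove(v)
        return matching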
Recommendation Subgraphs for Web Discovery
Recommendations are central to the utility of many websites, including YouTube and Quora as well as popular e-commerce stores. Such sites typically contain a set of recommendations on every product page that enables visitors to
easily navigate the website. Choosing an appropriate set of recommendations at
each page is one of the key features of backend engines that have been deployed
at several e-commerce sites.
Specifically at BloomReach, an engine consisting of several independent
components analyzes and optimizes its clients' websites. This paper focuses on
the structure optimizer component, which improves the website navigation experience and enables the discovery of novel content.
We begin by formalizing the concept of recommendations used for discovery. We
formulate this as a natural graph optimization problem which, in its simplest case, reduces to a bipartite matching problem. In practice, solving these
matching problems requires superlinear time and is not scalable. Also,
implementing simple algorithms is critical in practice because they are
significantly easier to maintain in production. This motivated us to analyze
three methods for solving the problem in increasing order of sophistication: a
sampling algorithm, a greedy algorithm, and a more involved partitioning-based algorithm.
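As one hypothetical illustration of the greedy method's flavor, the sketch below has each source page pick up to c of its neighbors, favoring target pages still below a coverage threshold a; the (c, a) parameters and the specific greedy rule are assumptions for illustration, not the paper's algorithm.

    def greedy_recommendations(sources, neighbors, c, a):
        """Each source page greedily picks up to c neighboring target
        pages, preferring targets still short of a inbound links."""
        indeg, chosen = {}, {}
        for s in sources:
            # Below-threshold targets first; among them, those closest
            # to the threshold (largest current in-degree).
            ranked = sorted(neighbors[s],
                            key=lambda t: (indeg.get(t, 0) >= a,
                                           -indeg.get(t, 0)))
            chosen[s] = ranked[:c]
            for t in chosen[s]:
                indeg[t] = indeg.get(t, 0) + 1
        covered = sum(1 for d in indeg.values() if d >= a)
        return chosen, covered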
We first theoretically analyze the performance of these three methods on random graph models, characterizing when each method yields a solution of sufficient quality and the parameter ranges in which more sophistication is needed.
We complement this by providing an empirical analysis of these algorithms on
simulated and real-world production data. Our results confirm that it is not always necessary to implement complicated algorithms in the real world, and that very good practical results can be obtained by using heuristics backed by concrete theoretical guarantees.
Greedy MAXCUT Algorithms and their Information Content
MAXCUT is a classical NP-hard problem for graph partitioning, and it serves as a typical case of the symmetric non-monotone Unconstrained Submodular
Maximization (USM) problem. Applications of MAXCUT are abundant in machine
learning, computer vision and statistical physics. Greedy algorithms to
approximately solve MAXCUT rely on greedy vertex labelling or on an edge
contraction strategy. These algorithms have been studied by measuring their approximation ratios in the worst-case setting, but very little is known about their robustness to noise contamination of the input data in the
average case. Adapting the framework of Approximation Set Coding, we present a
method to exactly measure the cardinality of the algorithmic approximation sets
of five greedy MAXCUT algorithms. Their information contents are explored for
graph instances generated by two different noise models: the edge reversal
model and Gaussian edge weights model. The results provide insights into the
robustness of different greedy heuristics and techniques for MAXCUT, which can
be used for algorithm design of general USM problems.

Comment: This is a longer version of the paper published in the 2015 IEEE Information Theory Workshop (ITW).
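As an example of the greedy vertex-labelling strategy the abstract refers to, here is a standard Sahni-Gonzalez-style sketch; the edge-weight representation is an illustrative choice.

    def greedy_maxcut(vertices, weight):
        """Greedy vertex labelling: place each vertex on the side that
        currently cuts more edge weight against the vertices already
        placed. `weight` maps frozenset edges {u, v} to weights."""
        def w(u, v):
            return weight.get(frozenset((u, v)), 0)

        side = {}
        for v in vertices:
            to_side0 = sum(w(u, v) for u, s in side.items() if s == 0)
            to_side1 = sum(w(u, v) for u, s in side.items() if s == 1)
            # Joining side 0 cuts the edges to side 1, and vice versa.
            side[v] = 0 if to_side1 >= to_side0 else 1
        return side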