Equivalence Classes and Conditional Hardness in Massively Parallel Computations
The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a wide range of problems.
In this paper we present connections between problems and classes of problems that allow the latter type of argument. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
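To make the conjectured-hard problem concrete, here is a small Python sketch (the function names and interface are illustrative, not from the paper) that generates a promise instance: an n-vertex graph that is either a single cycle or two disjoint cycles, with shuffled labels and edge order so the answer is not locally visible. Distinguishing the two cases in o(log N) rounds of low-space MPC is exactly what the one cycle vs. two cycles conjecture rules out; the sequential union-find checker is included only to verify the promise.

```python
import random

def one_vs_two_cycles(n, two_cycles, seed=0):
    """Return the edge list of an n-vertex promise instance: either a
    single Hamiltonian cycle or two disjoint cycles of length ~n/2.
    Vertex labels and edge order are shuffled so the structure is not
    apparent from any local view."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    cycles = [(0, n // 2), (n // 2, n - n // 2)] if two_cycles else [(0, n)]
    edges = []
    for start, length in cycles:
        for i in range(length):
            edges.append((perm[start + i], perm[start + (i + 1) % length]))
    rng.shuffle(edges)
    return edges

def num_components(n, edges):
    """Sequential union-find with path halving; used only to check the
    promise, not part of the MPC setting."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps
```

Note that both cases have exactly n edges and every vertex has degree 2, so degree information alone cannot separate them.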
Improved Deterministic Connectivity in Massively Parallel Computation
A long line of research on connectivity in the Massively Parallel Computation model has culminated in the seminal works of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. They provide a randomized algorithm for low-space MPC with round complexity O(log D + log log_{m/n} n), conjectured to be optimal, and O(m) space, for graphs on n vertices with m edges and diameter D. Surprisingly, a recent result of Coy and Czumaj [STOC'22] shows how to achieve the same deterministically. Unfortunately, however, their algorithm suffers from large local computation time.
We present a deterministic connectivity algorithm that matches all the parameters of the randomized algorithm and, in addition, significantly reduces the local computation time to nearly linear.
Our derandomization method is based on reducing the amount of randomness needed, so as to allow for a simple and efficient search. While similar randomness reduction approaches have been used before, our result is not only strikingly simpler, but it is also the first to have efficient local computation. This is why we believe it can serve as a starting point for the systematic development of computation-efficient derandomization approaches in low-memory MPC.
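The "reduce randomness, then search" template can be illustrated with a standard textbook example (this is not the paper's algorithm): the naive randomized half-cut algorithm for MAX-CUT uses n independent coin flips, but pairwise independence suffices for the expectation argument, so the sample space can be shrunk to O(n) seeds and searched exhaustively, yielding a deterministic cut with at least half the edges crossing.

```python
def derandomized_half_cut(n, edges):
    """Deterministic cut with at least m/2 crossing edges.

    Vertices are 1..n.  For a k-bit seed s, side(i) = <s, i> over GF(2)
    (parity of the bitwise AND) gives pairwise independent assignments:
    for distinct nonzero i, j, side(i) XOR side(j) = <s, i XOR j> is
    uniform.  Hence the expected number of crossing edges over a uniform
    seed is m/2, and trying all O(n) seeds finds one attaining it."""
    k = n.bit_length()
    best_side, best_cut = None, -1
    for seed in range(1 << k):
        side = [bin(seed & i).count("1") & 1 for i in range(n + 1)]
        cut = sum(side[u] != side[v] for u, v in edges)
        if cut > best_cut:
            best_cut, best_side = cut, side
    return best_side, best_cut
```

The search space has size 2^k = O(n) instead of 2^n, which is the whole point: reduced randomness turns brute-force seed search into an efficient deterministic algorithm.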
Adaptive Massively Parallel Constant-Round Tree Contraction
Miller and Reif's FOCS'85 classic and fundamental tree contraction algorithm is a broadly applicable technique for the parallel solution of a large number of tree problems, and it also serves as an algorithmic design technique for many parallel graph algorithms. In all previously explored models of computation, however, tree contraction has only been achieved in O(log n) rounds of parallel run time. In this work, we not only introduce a generalized tree contraction method but also show it can be computed highly efficiently in O(1) rounds in the Adaptive Massively Parallel Computing (AMPC) setting, where each machine has O(n^ε) local memory for some constant 0 < ε < 1. AMPC is a practical extension of Massively Parallel Computing (MPC) which utilizes distributed hash tables. In general, MPC is an abstract model for MapReduce, Hadoop, Spark, and Flume, which are currently widely used across industry, and it has been studied extensively in the theory community in recent years. Last but not least, we show that our results extend to multiple problems on trees, including but not limited to maximum and maximal matching, maximum and maximal independent set, tree isomorphism testing, and more.
Comment: 35 pages, 3 figures, to be published in Innovations in Theoretical Computer Science (ITCS)
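Generalized or not, the classical contraction primitive is easy to state. The following Python sketch (a sequential simulation with illustrative names; a randomized compress rule stands in for Miller and Reif's deterministic variants) rakes all leaves each round and splices out a coin-flip-chosen independent set of unary chain vertices, reaching a single vertex in O(log n) rounds with high probability.

```python
import random

def rake_compress(parent, root, seed=0):
    """Simulate rake-and-compress tree contraction.

    parent maps each vertex to its parent (root maps to None).  Per round:
    rake removes every leaf; compress splices out chain vertices (exactly
    one child) that flip heads while their parent flips tails, so no two
    adjacent vertices are spliced inconsistently.  Returns the number of
    rounds until only the root remains."""
    rng = random.Random(seed)
    parent = dict(parent)
    alive = set(parent)              # includes the root
    rounds = 0
    while len(alive) > 1:
        rounds += 1
        nchild = {v: 0 for v in alive}
        only_child = {}
        for v in alive:
            p = parent[v]
            if p is not None:
                nchild[p] += 1
                only_child[p] = v    # meaningful only when nchild[p] == 1
        # rake: every non-root vertex with no children
        leaves = {v for v in alive if nchild[v] == 0 and v != root}
        # compress: splice v (redirect its child past it) on heads/tails
        heads = {v: rng.random() < 0.5 for v in alive}
        for v in list(alive):
            p = parent[v]
            if (v != root and v not in leaves and nchild[v] == 1
                    and p is not None and heads[v] and not heads[p]):
                child = only_child[v]
                if child not in leaves:
                    parent[child] = p
                    alive.discard(v)
        alive -= leaves
    return rounds
```

On a star, one rake round suffices; on a path, rake alone would need a linear number of rounds, and it is compress that brings the total down to logarithmic in expectation.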
Parallel Graph Algorithms in Constant Adaptive Rounds: Theory meets Practice
We study fundamental graph problems such as graph connectivity, minimum
spanning forest (MSF), and approximate maximum (weight) matching in a
distributed setting. In particular, we focus on the Adaptive Massively Parallel
Computation (AMPC) model, which is a theoretical model that captures
MapReduce-like computation augmented with a distributed hash table.
We show the first AMPC algorithms for all of the studied problems that run in
a constant number of rounds and use only O(n^ε) space per machine for any
constant ε ∈ (0, 1). Our results improve both upon the previous results in
the AMPC model, as well as the best-known results in the MPC model, which is
the theoretical model underpinning many popular distributed computation
frameworks, such as MapReduce, Hadoop, Beam, Pregel and Giraph.
Finally, we provide an empirical comparison of the algorithms in the MPC and
AMPC models in a fault-tolerant distributed computation environment. We
empirically evaluate our algorithms on a set of large real-world graphs and
show that our AMPC algorithms can achieve improvements in both running time and
round complexity over optimized MPC baselines.
Deterministic massively parallel connectivity
We consider the problem of designing fundamental graph algorithms in the Massively Parallel Computation (MPC) model. The input to the problem is an undirected graph.
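For intuition on where the round-complexity targets in this line of connectivity work come from, the toy sketch below (illustrative code, not any of the cited algorithms) runs round-synchronous minimum-label propagation: every vertex repeatedly adopts the smallest label in its closed neighborhood, converging after roughly D synchronous rounds for components of diameter D. The MPC algorithms above use graph exponentiation and related tricks to compress this to O(log D) rounds.

```python
def min_label_components(n, edges):
    """Round-synchronous minimum-label propagation.

    Returns final component labels (the minimum vertex id of each
    component) and the number of synchronous rounds used, which grows
    with the largest component diameter."""
    label = list(range(n))
    rounds = 0
    while True:
        new = label[:]
        for u, v in edges:           # each vertex reads neighbors' old labels
            if label[v] < new[u]:
                new[u] = label[v]
            if label[u] < new[v]:
                new[v] = label[u]
        rounds += 1
        if new == label:             # fixed point: labels are component ids
            return label, rounds
        label = new
```

Because a label travels only one hop per round, the path component in the test below needs a number of rounds proportional to its length, which is exactly the dependence the O(log D)-round algorithms avoid.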
Proceedings of the Workshop on Linear Logic and Logic Programming
Declarative programming languages often fail to effectively address many aspects of control and resource management. Linear logic provides a framework for increasing the strength of declarative programming languages to embrace these aspects. Linear logic has been used to provide new analyses of Prolog's operational semantics, including left-to-right/depth-first search and negation-as-failure. It has also been used to design new logic programming languages for handling concurrency and for viewing program clauses as (possibly) limited resources. Such logic programming languages have proved useful in areas such as databases, object-oriented programming, theorem proving, and natural language parsing.
This workshop is intended to bring together researchers involved in all aspects of relating linear logic and logic programming. The proceedings include two high-level overviews of linear logic and six contributed papers.
Workshop organizers: Jean-Yves Girard (CNRS and University of Paris VII), Dale Miller (chair, University of Pennsylvania, Philadelphia), and Remo Pareschi (ECRC, Munich)
Fully Scalable Massively Parallel Algorithms for Embedded Planar Graphs
We consider the massively parallel computation (MPC) model, which is a
theoretical abstraction of large-scale parallel processing models such as
MapReduce. In this model, assuming the widely believed 1-vs-2-cycles
conjecture, solving many basic graph problems in o(log n) rounds with a
strongly sublinear memory size per machine is impossible. We improve on the
recent work of Holm and Tětek [SODA 2023] that bypasses this barrier for
problems when a planar embedding of the graph is given. Whereas the previous
work requires a fixed polynomial memory size per machine, we extend their
approach to the fully scalable regime, where the memory size per machine can
be O(n^δ) for any constant δ > 0. We give the first constant-round
fully scalable algorithms for embedded planar graphs for the problems of (i)
connectivity and (ii) minimum spanning tree (MST). Moreover, we show that the
ε-emulator of Chang, Krauthgamer, and Tan [STOC 2022] can be
incorporated into our recursive framework to obtain constant-round
(1+ε)-approximation algorithms for the problems of computing (iii)
single source shortest path (SSSP), (iv) global min-cut, and (v) st-max flow.
All previous results on cuts and flows required linear memory in the MPC model.
Furthermore, our results give new algorithms for problems that implicitly
involve embedded planar graphs. We give as corollaries constant-round fully
scalable algorithms for (vi) 2D Euclidean MST and (vii) (1+ε)-approximate
weighted edit distance.
Our main technique is a recursive framework combined with novel graph drawing
algorithms to compute smaller embedded planar graphs in constant rounds in the
fully scalable setting.
Comment: To appear in SODA24. 55 pages, 9 figures, 1 table. Added section on weighted edit distance and shortened abstract.