
    Equivalence Classes and Conditional Hardness in Massively Parallel Computations

    The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole range of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
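
    For concreteness, the one cycle vs. two cycles promise problem mentioned above can be decided by a single sequential walk along the promised cycle structure. The sketch below shows that trivial sequential check; the adjacency representation and names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sequential check for the one cycle vs. two cycles promise
# problem: the input is promised to be a disjoint union of one or two cycles,
# given here as a dict mapping each node to its two neighbours. Names and
# representation are assumptions for this sketch, not from the paper.
def is_one_cycle(adj):
    n = len(adj)
    start = next(iter(adj))
    prev, cur, visited = None, start, 1
    while True:
        a, b = adj[cur]
        nxt = b if a == prev else a   # step to the neighbour we did not come from
        if nxt == start:
            break
        prev, cur = cur, nxt
        visited += 1
    return visited == n               # the walk covers every node iff there is one cycle

# a single 4-cycle vs. two disjoint triangles
one_cycle = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (2, 0)}
two_cycles = {0: (1, 2), 1: (0, 2), 2: (0, 1), 3: (4, 5), 4: (3, 5), 5: (3, 4)}
print(is_one_cycle(one_cycle), is_one_cycle(two_cycles))  # True False
```

    The walk itself is inherently sequential; the conjectural difficulty discussed in the abstract is doing the same job in o(log N) MPC rounds rather than by a linear traversal.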

    The Fine-Grained Complexity of Multi-Dimensional Ordering Properties

    We define a class of problems whose input is an n-sized set of d-dimensional vectors, and where the problem is first-order definable using comparisons between coordinates. This class captures a wide variety of tasks, such as complex types of orthogonal range search, model-checking first-order properties on geometric intersection graphs, and elementary questions on multidimensional data like verifying Pareto optimality of a choice of data points. Focusing on constant dimension d, we show that any k-quantifier, d-dimensional such problem is solvable in O(n^{k-1} log^{d-1} n) time. Furthermore, this algorithm is conditionally tight up to subpolynomial factors: we show that assuming the 3-uniform hyperclique hypothesis, there is a k-quantifier, (3k-3)-dimensional problem in this class that requires time Ω(n^{k-1-o(1)}). Towards identifying a single representative problem for this class, we study the existence of complete problems for the 3-quantifier setting (since 2-quantifier problems can already be solved in near-linear time O(n log^{d-1} n), and k-quantifier problems with k > 3 reduce to the 3-quantifier case). We define a problem Vector Concatenated Non-Domination VCND_d (given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, is there an x ∈ X and a y ∈ Y so that their concatenation x∘y is not dominated by any z ∈ Z, where vector u is dominated by vector v if u_i ≤ v_i for each coordinate 1 ≤ i ≤ d), and determine it as the "unique" candidate to be complete for this class (under fine-grained assumptions).
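
    As a concrete reading of the definition above, the following naive check decides VCND_d directly from the abstract's formulation; it is only an illustrative sketch with our own names and the brute-force |X|·|Y|·|Z|·d running time, not one of the paper's conditionally optimal algorithms.

```python
# Illustrative brute-force decision procedure for VCND_d, written directly
# from the definition in the abstract; function names are ours.
from itertools import product

def dominated(u, v):
    # u is dominated by v iff u_i <= v_i for every coordinate i
    return all(ui <= vi for ui, vi in zip(u, v))

def vcnd(X, Y, Z):
    # is there x in X and y in Y whose concatenation no z in Z dominates?
    for x, y in product(X, Y):
        cat = x + y                      # the 2d-dimensional concatenation x∘y
        if not any(dominated(cat, z) for z in Z):
            return True
    return False

# tiny d = 1 example, so Z holds 2-dimensional vectors
X, Y, Z = [(1,), (4,)], [(2,), (5,)], [(3, 5)]
print(vcnd(X, Y, Z))  # True: (4,)∘(2,) = (4, 2) is not dominated by (3, 5)
```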

    Matching Triangles and Triangle Collection: Hardness based on a Weak Quantum Conjecture

    Classically, for many computational problems one can conclude time lower bounds conditioned on the hardness of one or more of three key problems: k-SAT, 3SUM and APSP. More recently, similar results have been derived in the quantum setting conditioned on the hardness of k-SAT and 3SUM. This is done using fine-grained reductions, where the approach is to (1) select a key problem X that, for some function T, is conjectured to not be solvable by any O(T(n)^{1-ε}) time algorithm for any constant ε > 0 (in a fixed model of computation), and (2) reduce X in a fine-grained way to these computational problems, thus giving (mostly) tight conditional time lower bounds for them. Interestingly, for Delta-Matching Triangles and Triangle Collection, classical hardness results have been derived conditioned on hardness of all three mentioned key problems. More precisely, it is proven that an n^{3-ε} time classical algorithm for either of these two graph problems would imply faster classical algorithms for k-SAT, 3SUM and APSP, which makes Delta-Matching Triangles and Triangle Collection worthwhile to study. In this paper, we show that an n^{1.5-ε} time quantum algorithm for either of these two graph problems would imply faster quantum algorithms for k-SAT, 3SUM, and APSP. We first formulate a quantum hardness conjecture for APSP and then present quantum reductions from k-SAT, 3SUM, and APSP to Delta-Matching Triangles and Triangle Collection. Additionally, based on the quantum APSP conjecture, we are also able to prove quantum lower bounds for a matrix problem and many graph problems. The matching upper bounds follow trivially for most of them, except for Delta-Matching Triangles and Triangle Collection, for which we present quantum algorithms that require careful use of data structures and Ambainis' variable time search.

    Conditional Hardness of Earth Mover Distance

    The Earth Mover Distance (EMD) between two sets of points A, B ⊆ ℝ^d with |A| = |B| is the minimum total Euclidean distance of any perfect matching between A and B. One of its generalizations is asymmetric EMD, which is the minimum total Euclidean distance of any matching of size |A| between sets of points A, B ⊆ ℝ^d with |A| ≤ |B|. The problems of computing EMD and asymmetric EMD are well-studied and have many applications in computer science, some of which also ask for the EMD-optimal matching itself. Unfortunately, all known algorithms require at least quadratic time to compute EMD exactly. Approximation algorithms with nearly linear time complexity in n are known (even for finding approximately optimal matchings), but suffer from exponential dependence on the dimension. In this paper we show that significant improvements in exact and approximate algorithms for EMD would contradict conjectures in fine-grained complexity. In particular, we prove the following results:
    - Under the Orthogonal Vectors Conjecture, there is some c > 0 such that EMD in Ω(c^{log^* n}) dimensions cannot be computed in truly subquadratic time.
    - Under the Hitting Set Conjecture, for every δ > 0, no truly subquadratic time algorithm can find a (1 + 1/n^δ)-approximate EMD matching in ω(log n) dimensions.
    - Under the Hitting Set Conjecture, for every η = 1/ω(log n), no truly subquadratic time algorithm can find a (1 + η)-approximate asymmetric EMD matching in ω(log n) dimensions.
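
    For illustration, the exact super-quadratic baseline the abstract alludes to can be written as a minimum-cost perfect matching; the sketch below uses SciPy's Hungarian-method solver and is only a standard baseline with our own names, not an algorithm from the paper.

```python
# Illustrative exact EMD between equal-size point sets in R^d, computed as a
# minimum-cost perfect matching with SciPy's linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_emd(A, B):
    # pairwise Euclidean distances, shape (n, n)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)       # optimal perfect matching
    return cost[rows, cols].sum()

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
print(exact_emd(A, B))  # 2.0, matching (0,0)->(0,1) and (1,0)->(1,1)
```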

    On Diameter Approximation in Directed Graphs

    Computing the diameter of a graph, i.e. the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In directed graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since d(u,v) may not be the same as d(v,u), there are multiple ways to define the problem, the two most natural being the (one-way) diameter (max_{u,v} d(u,v)) and the roundtrip diameter (max_{u,v} d(u,v)+d(v,u)). In this paper we make progress on the outstanding open question for each of them.
    - We design the first algorithm for diameter in sparse directed graphs to achieve n^{1.5-ε} time with an approximation factor better than 2. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication.
    - We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a 1.5-approximation in subquadratic time would refute the All-Nodes k-Cycle hypothesis, and any (2-ε)-approximation would imply a breakthrough algorithm for approximate ℓ_∞-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
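
    To make the two diameter notions concrete, the sketch below computes both exactly on an unweighted directed graph by running one BFS per source, the naive O(nm) baseline; the representation and names are assumptions for this sketch, and the paper's results concern far better time vs. approximation trade-offs.

```python
# Illustrative exact computation of the one-way and roundtrip diameters of an
# unweighted directed graph via one BFS per source (naive O(nm) baseline).
from collections import deque

def bfs_dists(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameters(adj):
    nodes = list(adj)
    inf = float("inf")
    d = {u: bfs_dists(adj, u) for u in nodes}
    one_way = max(d[u].get(v, inf) for u in nodes for v in nodes)
    roundtrip = max(d[u].get(v, inf) + d[v].get(u, inf)
                    for u in nodes for v in nodes)
    return one_way, roundtrip

# directed 3-cycle: one-way diameter 2, roundtrip diameter 3
print(diameters({0: [1], 1: [2], 2: [0]}))  # (2, 3)
```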

    Brief Announcement: A Note on Hardness of Diameter Approximation

    We revisit the hardness of approximating the diameter of a network. In the CONGEST model, Ω̃(n) rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. [DISC 2016] extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer 1 ≤ ℓ ≤ polylog(n), distinguishing between networks of diameter 4ℓ + 2 and 6ℓ + 1 requires Ω̃(n) rounds. We slightly tighten this result by showing that even distinguishing between diameter 2ℓ + 1 and 3ℓ + 1 requires Ω̃(n) rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model.
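
    The orthogonal vectors problem invoked above is not restated in the abstract; in its standard formulation it asks whether two sets of Boolean vectors contain an orthogonal pair. The trivial quadratic check below is only meant to pin that formulation down and is standard background, not part of the paper.

```python
# Illustrative statement of the orthogonal vectors problem in its standard
# form (two sets of Boolean vectors; is some pair orthogonal?), checked by
# the trivial quadratic-time algorithm.
def has_orthogonal_pair(A, B):
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1, 0, 1) is orthogonal to (0, 1, 0)
```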

    Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

    Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, using polynomial time reductions to base the hardness of these problems on the famous conjecture P ≠ NP. For problems that already have polynomial time algorithms, however, it does not seem that one can show any sort of hardness based on P ≠ NP. Nevertheless, we would like to provide evidence that a problem A with a running time O(n^k) that has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, and (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.