
    Fine-Grained Reductions from Approximate Counting to Decision

    In this paper, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (That is, we use an oracle that decides whether any witness exists in order to multiplicatively approximate the number of witnesses, with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some $1 < c < 2$ and all $k$ there is an $O(c^n)$-time algorithm for $k$-SAT. Then we prove that for all $k$, there is an $O((c+o(1))^n)$-time algorithm for approximate #$k$-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly-weaker statement that there is no algorithm to approximate #3-SAT to within a factor of $1+\epsilon$ in time $2^{o(n)}/\epsilon^2$ (taking $\epsilon > 0$ as part of the input).
    Comment: An extended abstract was presented at STOC 2018.
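    A minimal sketch of the classic hashing idea behind such counting-to-decision reductions (in the spirit of Sipser and Stockmeyer, not the paper's fine-grained construction): keep adding random XOR constraints and ask the decision oracle whether any witness survives; the number of constraints the witness set withstands estimates its size to within a constant factor. The names decide and approx_count are illustrative, and the brute-force oracle over an explicit witness set stands in for a real SAT oracle.

        import random

        def decide(witnesses, constraints):
            # Toy decision oracle: is there a witness satisfying every XOR
            # constraint? A constraint (mask, bit) holds for witness w when
            # the parity of the bits of w selected by mask equals bit.
            return any(
                all(bin(w & mask).count("1") % 2 == bit for mask, bit in constraints)
                for w in witnesses
            )

        def approx_count(witnesses, n, trials=25):
            # Estimate |witnesses| (a subset of {0,1}^n) using only the
            # decision oracle plus random XOR constraints.
            if not decide(witnesses, []):
                return 0
            for m in range(1, n + 1):
                # m random XOR constraints shrink the witness set by ~2^m in
                # expectation, so the oracle flips from "yes" to "no" around
                # m ~ log2(|witnesses|).
                alive = sum(
                    decide(witnesses,
                           [(random.randrange(1, 2 ** n), random.randrange(2))
                            for _ in range(m)])
                    for _ in range(trials)
                )
                if alive < trials // 2:
                    return 2 ** (m - 1)
            return 2 ** n

        # For example, approx_count({3, 5, 6, 9, 12, 14}, 4) typically
        # returns 4 or 8, a constant-factor estimate of the true count 6.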

    Towards a complexity theory for the congested clique

    The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity-theoretic tools to build a clearer picture of the complexity landscape of the congested clique:
    -- Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy).
    -- Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings.
    -- Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.

    On the Fine-Grained Complexity of Parity Problems

    We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e., whether the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic-equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly-harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
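    To make the distinction concrete, here is a brute-force toy (cubic loops only, not the paper's reductions) contrasting the NWT decision problem with the parity counting variant that the abstract shows is conditionally strictly harder; has_negative_triangle and negative_triangle_parity are illustrative names.

        from itertools import combinations

        def has_negative_triangle(w):
            # Decision version of NWT: does some triangle have total weight < 0?
            # w is an n x n symmetric integer weight matrix.
            return any(w[i][j] + w[j][k] + w[i][k] < 0
                       for i, j, k in combinations(range(len(w)), 3))

        def negative_triangle_parity(w):
            # Parity counting variant: is the number of negative triangles odd?
            count = sum(w[i][j] + w[j][k] + w[i][k] < 0
                        for i, j, k in combinations(range(len(w)), 3))
            return count % 2 == 1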

    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
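    A minimal sketch of the direct product feasibility condition for Max-SAT, under the easy assumption that variables may be renamed: placing the k instances on disjoint variable blocks makes the satisfied-clause count decompose as a sum, so an optimal solution to the aggregate restricts to an optimal solution of every small instance. aggregate and split_solution are illustrative names, not necessarily the paper's construction.

        def aggregate(instances):
            # Direct-product aggregation for Max-SAT: shift each instance onto
            # its own variable block and take the union of the clauses. An
            # instance is a list of clauses; a clause is a list of nonzero
            # ints (DIMACS-style literals v / -v).
            big_clauses, blocks, offset = [], [], 0
            for clauses in instances:
                nvars = max(abs(lit) for clause in clauses for lit in clause)
                blocks.append((offset, nvars))
                for clause in clauses:
                    big_clauses.append([lit + offset if lit > 0 else lit - offset
                                        for lit in clause])
                offset += nvars
            return big_clauses, blocks

        def split_solution(assignment, blocks):
            # Restrict an assignment of the aggregate (dict: var -> bool) back
            # to one assignment per original instance. Because the blocks are
            # variable-disjoint, optimality carries over to each block.
            return [{v: assignment[v + off] for v in range(1, n + 1)}
                    for off, n in blocks]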

    Fine-Grained Completeness for Optimization in P

    We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search, and optimization variants of the $k$-XOR problem. Specifically, we define MaxSP as the class of problems definable as $\max_{x_1,\dots,x_k} \#\{(y_1,\dots,y_\ell) : \phi(x_1,\dots,x_k, y_1,\dots,y_\ell)\}$, where $\phi$ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On $m$-sized structures, we can solve each such problem in time $O(m^{k+\ell-1})$. Our results are:
    - We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under *deterministic* fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by a polynomial factor.
    - This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic $c$-approximation would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in time $O(m^{k+\ell-1-\delta})$, where $\varepsilon > 0$ can be chosen arbitrarily small.
    Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is *equivalent* to giving a $O(1)$-approximation for all MinSP problems in faster-than-$O(m^{k+\ell-1})$ time.
    Comment: Full version of APPROX'21 paper, abstract shortened to fit ArXiv requirements.
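    As a toy illustration of the MaxSP template, Maximum Inner Product over Boolean vectors can be read as the case $k = 2$, $\ell = 1$: maximize over pairs $(x_1, x_2)$ the number of coordinates $y$ with $x_1[y] = x_2[y] = 1$. The double loop below is the naive baseline that the abstract's $O(m^{k+\ell-1})$ bound generalizes (quadratic in the number of vectors, up to the dimension factor); max_inner_product is an illustrative name.

        def max_inner_product(vectors):
            # Naive Maximum Inner Product over 0/1 vectors: for every pair,
            # count the coordinates where both vectors are 1, and keep the max.
            best = 0
            for i in range(len(vectors)):
                for j in range(i + 1, len(vectors)):
                    ip = sum(a & b for a, b in zip(vectors[i], vectors[j]))
                    best = max(best, ip)
            return best

        # max_inner_product([[1, 0, 1], [1, 1, 1], [0, 1, 1]]) == 2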

    Equivalence Classes and Conditional Hardness in Massively Parallel Computations

    The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasingly more attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a wide range of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
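    For reference, the one cycle vs. two cycles problem mentioned above is easy to pin down in code: given a graph promised to be either a single cycle on N nodes or two disjoint cycles on N/2 nodes each, decide which. The sequential walk below only defines the problem; the conjecture is precisely that no MPC algorithm solves it in o(log N) rounds. one_or_two_cycles is an illustrative name.

        def one_or_two_cycles(adj):
            # adj maps every node to its two neighbours, e.g. {0: (1, 2), ...}.
            # Walk around the cycle containing an arbitrary start node; if the
            # walk visits all nodes, the graph is one cycle, otherwise two.
            start = next(iter(adj))
            prev, cur, steps = start, adj[start][0], 1
            while cur != start:
                a, b = adj[cur]
                prev, cur = cur, (b if a == prev else a)
                steps += 1
            return "one cycle" if steps == len(adj) else "two cycles"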