
    On the Fine-Grained Complexity of Parity Problems

    We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e., whether the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
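
    For concreteness, here is a minimal brute-force sketch in Python of the NWT decision problem next to the parity-counting variant that the abstract distinguishes from it. The symmetric weight-matrix representation and function names are our own illustrative assumptions, not the paper's constructions.

```python
from itertools import combinations

def has_negative_triangle(w):
    """Decision version (NWT): does any triple i < j < k satisfy
    w[i][j] + w[j][k] + w[i][k] < 0?  Brute force, O(n^3);
    w is assumed to be a symmetric edge-weight matrix."""
    n = len(w)
    return any(w[i][j] + w[j][k] + w[i][k] < 0
               for i, j, k in combinations(range(n), 3))

def parity_of_negative_triangles(w):
    """Parity-counting variant: is the NUMBER of negative triangles
    even or odd?  Per the abstract, this variant is (conditionally)
    strictly harder than the decision version above."""
    n = len(w)
    count = sum(1 for i, j, k in combinations(range(n), 3)
                if w[i][j] + w[j][k] + w[i][k] < 0)
    return count % 2  # 0 = even, 1 = odd
```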

    QSETH strikes again: finer quantum lower bounds for lattice problem, strong simulation, hitting set problem, and more

    While seemingly undesirable, it is not surprising that there are certain problems for which quantum computers offer no computational advantage over their classical counterparts. Moreover, there are problems for which no `useful' computational advantage is possible on current quantum hardware. This situation, however, can be beneficial if we don't want quantum computers to solve certain problems fast, say, problems relevant to post-quantum cryptography. In such a situation, we would like evidence that those problems are difficult to solve on quantum computers; but what is their exact complexity? Answering this requires lower bounds, and proving unconditional time lower bounds has never been easy. As a result, resorting to conditional lower bounds has been quite popular in the classical community and is gaining momentum in the quantum community. In this paper, using the QSETH framework [Buhrman-Patro-Speelman 2021], we are able to understand the quantum complexity of a few natural variants of CNFSAT, such as parity-CNFSAT or counting-CNFSAT, and to comment on the non-trivial complexity of approximate-#CNFSAT; both of these have interesting implications for the complexity of (variations of) lattice problems, strong simulation, the hitting set problem, and more. In the process, we explore the QSETH framework in greater detail than was (required and) discussed in the original paper, thus also serving as a useful guide on how to use the QSETH framework effectively.
    Comment: 34 pages, 2 tables, 2 figures
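
    For reference, the CNFSAT variants named above can be stated via a tiny brute-force sketch in Python (illustrative only; the DIMACS-style clause encoding is our assumption, and this is unrelated to the QSETH framework itself): counting-CNFSAT asks for the number of satisfying assignments, and parity-CNFSAT asks only for that count modulo 2.

```python
from itertools import product

def count_sat(clauses, n):
    """counting-CNFSAT: number of satisfying assignments of a CNF
    over n variables.  Clauses use DIMACS-style signed literals:
    +v means variable v is true, -v means variable v is false."""
    total = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            total += 1
    return total

def parity_sat(clauses, n):
    """parity-CNFSAT: is the number of satisfying assignments odd?"""
    return count_sat(clauses, n) % 2 == 1

# Example: (x1 or x2) and (not x1 or x2) has 2 satisfying assignments,
# so the count is 2 and the parity is even (False).
print(count_sat([[1, 2], [-1, 2]], 2), parity_sat([[1, 2], [-1, 2]], 2))
```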

    A Framework of Quantum Strong Exponential-Time Hypotheses

    Dynamic Boolean Formula Evaluation

    We present a linear-space data structure for Dynamic Evaluation of k-CNF Boolean Formulas which achieves O(m^{1-1/k}) query and variable-update time, where m is the number of clauses in the formula and clauses have size at most a constant k. Our algorithm can additionally count the total number of satisfied clauses. We then show how this data structure can be parallelized in the PRAM model to achieve O(log m) span (i.e., parallel time) while keeping O(m^{1-1/k}) work; this parallel algorithm works in the stronger Binary Fork model. We then give a series of lower bounds on the problem, including an average-case result showing that the lower bounds hold even when the updates to the variables are chosen at random. Specifically, a reduction from k-Clique shows that dynamically counting the number of satisfied clauses takes time at least n^{(2ω-3)/6 · √(2k) - 1 - o(√k)}, where 2 ≤ ω < 2.38 is the matrix multiplication constant. We show the Combinatorial k-Clique Hypothesis implies a lower bound of m^{(1-k^{-1/2})(1-o(1))}, which suggests our algorithm is close to optimal without involving matrix multiplication or new techniques. We next give an average-case reduction to k-Clique showing the prior lower bounds hold even when the updates are chosen at random. We use our conditional lower bound to show any Binary Fork algorithm solving these problems requires at least Ω(log m) span, which is tight against our algorithm in this model. Finally, we give an unconditional linear-space lower bound for Dynamic k-CNF Boolean Formula Evaluation.
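
    As a point of reference for the problem being solved, below is a simple baseline in Python (our own illustrative sketch, not the paper's data structure): it maintains a per-clause count of satisfied literals, so querying the number of satisfied clauses is O(1), but a variable flip costs time proportional to the number of clauses containing that variable rather than the paper's O(m^{1-1/k}) bound.

```python
class DynamicCNF:
    """Baseline dynamic k-CNF evaluator using per-clause counters of
    satisfied literals.  Flip cost is O(occurrences of the variable),
    NOT the paper's O(m^{1-1/k}); the satisfied-clause query is O(1)."""

    def __init__(self, clauses, n):
        # DIMACS-style signed literals; all n variables start False.
        self.assign = [False] * (n + 1)
        self.clauses = clauses
        self.occ = [[] for _ in range(n + 1)]  # clause ids per variable
        self.sat_lits = []                     # satisfied literals per clause
        for ci, c in enumerate(clauses):
            self.sat_lits.append(sum(1 for l in c if self._true(l)))
            for l in c:
                self.occ[abs(l)].append(ci)
        self.num_sat = sum(1 for s in self.sat_lits if s > 0)

    def _true(self, lit):
        return self.assign[abs(lit)] == (lit > 0)

    def flip(self, v):
        """Flip variable v and update every clause containing it."""
        self.assign[v] = not self.assign[v]
        for ci in self.occ[v]:
            for l in self.clauses[ci]:
                if abs(l) == v:
                    delta = 1 if self._true(l) else -1
                    before = self.sat_lits[ci]
                    self.sat_lits[ci] += delta
                    if before == 0 and self.sat_lits[ci] > 0:
                        self.num_sat += 1
                    elif before > 0 and self.sat_lits[ci] == 0:
                        self.num_sat -= 1

    def satisfied_clauses(self):
        return self.num_sat

f = DynamicCNF([[1, 2], [-1, 3]], n=3)  # all variables start False
f.flip(1)                               # set x1 = True
print(f.satisfied_clauses())            # (x1 or x2) satisfied, (-x1 or x3) not -> 1
```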

    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment x ∈ {0,1}^n to a CNF formula φ is shared between two parties, where Alice knows x_1, …, x_{n/2}, Bob knows x_{n/2+1}, …, x_n, and both parties know φ. The goal is to have Alice and Bob jointly write a PCP that x satisfies φ, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of x. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over {0,1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly tight. In particular, for the first two problems we obtain nearly-polynomial factors of 2^{(log n)^{1-o(1)}}; only (1+o(1))-factor lower bounds (under SETH) were known before.
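
    To make one of the target problems concrete, here is a naive exact O(|A|·|B|·d) sketch in Python of Bichromatic Maximum Inner Product over {0,1}-vectors (an illustrative problem statement of our own, not an algorithm from the paper). The abstract's point is that, under SETH, even approximating this value within a 2^{(log n)^{1-o(1)}} factor requires essentially quadratic time.

```python
def bichromatic_max_inner_product(A, B):
    """Exact brute force over {0,1}-vectors: maximize <x, y> over all
    bichromatic pairs x in A, y in B.  SETH (via the distributed PCP
    framework above) rules out truly-subquadratic algorithms even with
    large approximation factors; this exact version only states the problem."""
    return max(sum(a * b for a, b in zip(x, y)) for x in A for y in B)

# Example: the maximum inner product over all bichromatic pairs is 2.
print(bichromatic_max_inner_product([[1, 0, 1], [1, 1, 0]],
                                    [[0, 1, 0], [1, 1, 1]]))
```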

    How to Handle Assumptions in Synthesis

    The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.
    Comment: In Proceedings SYNT 2014, arXiv:1407.493