
    More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture

    The Strong Exponential Time Hypothesis (SETH) and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no $\epsilon > 0$ for which an $O(N^{2-\epsilon})\,\mathrm{poly}(D)$-time algorithm can decide whether a given set of $N$ binary vectors of dimension $D$ contains a pair of orthogonal vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed $\epsilon > 0$ such that: (1) For all $d$ and all large enough $k$, there is a randomized algorithm that takes $O(n^{(1-\epsilon)k})$ time to solve the Zero-Weight-$k$-Clique and Min-Weight-$k$-Clique problems on $d$-hypergraphs with $n$ vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all $c$, the satisfiability of sparse $\mathrm{TC}^1$ circuits on $n$ inputs (that is, circuits with $cn$ wires, depth $c \log n$, and negation, AND, OR, and threshold gates) can be computed in time $O((2-\epsilon)^n)$.
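
    For concreteness, a minimal sketch of the exhaustive-search baseline that the OV-conjecture asserts is essentially optimal; the function name and the 0/1-tuple representation are illustrative assumptions, not taken from the paper:

        from itertools import combinations

        def has_orthogonal_pair(vectors):
            # Exhaustive search: test every pair u, v among the N input
            # vectors for orthogonality, i.e. u[i] * v[i] == 0 in every
            # coordinate i. With N vectors of dimension D this runs in
            # O(N^2 * D) time; the OV-conjecture asserts that no
            # O(N^(2-eps)) * poly(D) algorithm exists for any eps > 0.
            for u, v in combinations(vectors, 2):
                if all(a * b == 0 for a, b in zip(u, v)):
                    return True
            return False

        # Example: (1, 0, 1) and (0, 1, 0) form an orthogonal pair.
        print(has_orthogonal_pair([(1, 0, 1), (1, 1, 0), (0, 1, 0)]))  # True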

    Consequences of APSP, triangle detection, and 3SUM hardness for separation between determinism and non-determinism

    We present implications of known conjectures such as APSP, 3SUM, and ETH, in the form of a non-containment: linear time equipped with a non-deterministic logarithmic-bit oracle is not contained in a corresponding deterministic bounded-time class. The implications differ between the conjectures and, in particular, exhibit a dependency on the input range parameters. Comment: the section on range reduction in the previous version contained a flaw in a proof and has therefore been removed.

    Coarse-grained complexity for dynamic algorithms

    To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems, such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some benefits and lessons from the traditional “coarse-grained” approach that relates classes of problems such as P and NP to one another. In this paper we initiate the study of coarse-grained complexity theory for dynamic algorithms, and outline some of the questions that this theory can answer.

    QSETH strikes again: finer quantum lower bounds for lattice problem, strong simulation, hitting set problem, and more

    While seemingly undesirable, it is not a surprising fact that there are certain problems for which quantum computers offer no computational advantage over their classical counterparts. Moreover, there are problems for which no `useful' computational advantage is possible with current quantum hardware. This situation, however, can be beneficial if we don't want quantum computers to solve certain problems fast, say, problems relevant to post-quantum cryptography. In such a situation, we would like to have evidence that those problems are difficult to solve on quantum computers; but what is their exact complexity? Answering this requires proving lower bounds, and proving unconditional time lower bounds has never been easy. As a result, resorting to conditional lower bounds has been quite popular in the classical community and is gaining momentum in the quantum community. In this paper, using the QSETH framework [Buhrman-Patro-Speelman 2021], we are able to understand the quantum complexity of a few natural variants of CNFSAT, such as parity-CNFSAT and counting-CNFSAT, and are also able to comment on the non-trivial complexity of approximate-#CNFSAT; both results have interesting implications for the complexity of (variations of) lattice problems, strong simulation, the hitting set problem, and more. In the process, we explore the QSETH framework in greater detail than was (required and) discussed in the original paper, so this paper also serves as a useful guide on how to effectively use the QSETH framework. Comment: 34 pages, 2 tables, 2 figures.

    Matrix Multiplication Verification Using Coding Theory

    We study the Matrix Multiplication Verification Problem (MMV), where the goal is, given three $n \times n$ matrices $A$, $B$, and $C$ as input, to decide whether $AB = C$. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in $\widetilde{O}(n^2)$ time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in $o(n^{\omega})$ time). To that end, we give two algorithms for MMV in the case where $AB - C$ is sparse. Specifically, when $AB - C$ has at most $O(n^{\delta})$ non-zero entries for a constant $0 \leq \delta < 2$, we give (1) a deterministic $O(n^{\omega - \varepsilon})$-time algorithm for constant $\varepsilon = \varepsilon(\delta) > 0$, and (2) a randomized $\widetilde{O}(n^2)$-time algorithm using $\delta/2 \cdot \log_2 n + O(1)$ random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when $\delta \geq 1.056$, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses $\log_2 n + O(1)$ random bits (in turn fewer than Freivalds's algorithm). We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require $\Omega(n^{\omega})$ time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic $\widetilde{O}(n^2)$-time reductions).
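
    As a point of reference, a minimal sketch of Freivalds's randomized $\widetilde{O}(n^2)$-time check mentioned above; the function name and parameters are illustrative, and note that this textbook version spends $n$ fresh random bits per round, unlike the low-randomness variants the paper studies:

        import random

        def freivalds_verify(A, B, C, rounds=20):
            # One round: pick a uniform 0/1 vector r and compare A(Br)
            # with Cr, using three matrix-vector products in O(n^2) time.
            # If AB == C the test always passes; if AB != C, each round
            # detects the difference with probability >= 1/2, so `rounds`
            # independent rounds all miss with probability <= 2^(-rounds).
            n = len(A)

            def mat_vec(M, v):
                return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

            for _ in range(rounds):
                r = [random.randint(0, 1) for _ in range(n)]
                if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
                    return False  # certain: AB != C
            return True  # AB == C with probability >= 1 - 2^(-rounds)

        A = [[1, 2], [3, 4]]
        B = [[5, 6], [7, 8]]
        C = [[19, 22], [43, 50]]  # the true product AB
        print(freivalds_verify(A, B, C))  # True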

    Fine-Grained Complexity: Exploring Reductions and their Properties

    Algorithmic design has been one of the main subjects of interest in Computer Science. While very effective in some areas, this approach has met with some practical dead ends that have been very problematic for the progress of the field, and classical Computational Complexity practices have not been able to bypass these blocks: understanding the hardness of each problem is not trivial. Fine-Grained Complexity provides new perspectives on classic problems, resulting in solid links between famous conjectures in Complexity and algorithmic design. It serves as a tool to prove conditional lower bounds for problems with polynomial time complexity, a field that had seen very little progress until now. Popular conjectures such as SETH, k-OV, 3SUM, and APSP imply many bounds that have yet to be proven using classic techniques, and provide a new understanding of the structure and entropy of problems in general. The aim of this thesis is to contribute towards solidifying the framework for reductions from each conjecture, and to explore the structural differences between the problems in each case.