
    Fine-Grained Complexity: Exploring Reductions and their Properties

    Algorithmic design has been one of the main subjects of interest in Computer Science. While very effective in some areas, this approach has met practical dead ends that have been very problematic for the progress of the field, and classical Computational Complexity practices have not been able to bypass these blocks either: understanding the hardness of each problem is not trivial. Fine-Grained Complexity provides new perspectives on classic problems, resulting in solid links between famous conjectures in Complexity and algorithmic design. It serves as a tool to prove conditional lower bounds for problems with polynomial time complexity, a field that had seen very little progress until now. Popular conjectures such as SETH, k-OV, 3SUM, and APSP imply many bounds that have yet to be proven using classic techniques, and provide a new understanding of the structure and entropy of problems in general. The aim of this thesis is to contribute towards solidifying the framework for reductions from each conjecture, and to explore the structural differences between the problems in each case.
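    To make the object of these conjectures concrete, below is a minimal illustrative sketch (not taken from the thesis; the function name and inputs are hypothetical) of the Orthogonal Vectors problem underlying the k-OV conjecture, for k = 2: brute force solves it in $O(n^2 d)$ time, and the conjecture asserts that no $n^{2-\epsilon} \cdot \mathrm{poly}(d)$-time algorithm exists.

```python
from itertools import product

def orthogonal_pair(A, B):
    """Brute-force Orthogonal Vectors (illustrative helper): given two
    lists A, B of d-dimensional 0/1 vectors, return a pair with inner
    product 0, or None. Runs in O(n^2 * d) time; the conjecture says
    no n^(2-eps) * poly(d) algorithm exists."""
    for a, b in product(A, B):
        if all(x * y == 0 for x, y in zip(a, b)):
            return (a, b)
    return None

# (1,0,1) and (0,1,0) are orthogonal, so a pair is found.
A = [(1, 0, 1), (1, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(orthogonal_pair(A, B))  # ((1, 0, 1), (0, 1, 0))
```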

    Deterministic Time-Space Tradeoffs for k-SUM

    Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences. To illustrate:
    * 3-SUM is in deterministic time $O(n^2 \lg\lg(n)/\lg(n))$ and space $O\left(\sqrt{\frac{n \lg(n)}{\lg\lg(n)}}\right)$. In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
    * 3-SUM is in deterministic time $O(n^2)$ and space $O(\sqrt{n})$, derandomizing an algorithm of Wang.
    * A popular conjecture states that 3-SUM requires $n^{2-o(1)}$ time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every $O(n^{0.51})$-space algorithm for 3-SUM requires at least $n^{2-o(1)}$ time on the word-RAM.
    * For $k \ge 4$, k-SUM is in deterministic $O(n^{k-2+2/k})$ time and $O(\sqrt{n})$ space.
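    For reference against these bounds, here is a sketch of the textbook quadratic algorithm for 3-SUM (sort plus two pointers), not the paper's self-reduction: $O(n^2)$ time and $O(1)$ extra space beyond the sorted copy.

```python
def three_sum(nums):
    """Textbook 3-SUM: return (a, b, c) from nums with a + b + c == 0,
    or None. Sort once, then for each element scan the rest with two
    pointers: O(n^2) time, O(1) extra space beyond the sorted copy."""
    xs = sorted(nums)
    n = len(xs)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = xs[i] + xs[lo] + xs[hi]
            if s == 0:
                return (xs[i], xs[lo], xs[hi])
            if s < 0:
                lo += 1  # sum too small: move left pointer right
            else:
                hi -= 1  # sum too large: move right pointer left
    return None

print(three_sum([7, -3, 1, 2, -8, 6]))  # (-8, 1, 7)
```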

    Graphs Cannot Be Indexed in Polynomial Time for Sub-quadratic Time String Matching, Unless SETH Fails

    The string matching problem on a node-labeled graph G = (V, E) asks whether a given pattern string P has an occurrence in G, in the form of a path whose concatenation of node labels equals P. This is a basic primitive in various problems in bioinformatics, graph databases, and networks, but it was only recently proven to have an $O(|E||P|)$-time conditional lower bound, under the Orthogonal Vectors Hypothesis (OVH). We consider here its indexed version, in which we can index the graph in order to support time-efficient string queries. We show that, under OVH, no polynomial-time indexing scheme of the graph can support querying P in time $O(|P| + |E|^{\delta}|P|^{\beta})$, with either $\delta < 1$ or $\beta < 1$. As a side contribution, we introduce the notion of linear independent-components (lic) reduction, allowing for a simple proof of our result. As another illustration that hardness of indexing follows as a corollary of a lic reduction, we also translate the quadratic conditional lower bound of Backurs and Indyk (STOC 2015) for the problem of matching a query string inside a text, under edit distance. We obtain an analogous tight quadratic lower bound for its indexed version, improving the recent result of Cohen-Addad, Feuilloley and Starikovskaya (SODA 2019), but with a slightly different boundary condition.
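    For context, the $O(|E||P|)$ upper bound that this lower bound matches can be achieved by a simple reach-set dynamic program. The sketch below (with hypothetical labels/edges inputs, not code from the paper) propagates, for each pattern prefix, the set of nodes at which an occurrence of that prefix can end.

```python
from collections import defaultdict

def pattern_in_graph(labels, edges, P):
    """Reach-set dynamic program for string matching on a node-labeled
    graph. labels[v] is the character of node v; edges lists arcs (u, v).
    reach holds the nodes where an occurrence of the current pattern
    prefix can end; each of the |P| rounds scans at most all arcs, so
    the total time is O(|E| * |P|). Assumes P is nonempty."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    reach = {v for v in labels if labels[v] == P[0]}
    for ch in P[1:]:
        reach = {v for u in reach for v in adj[u] if labels[v] == ch}
        if not reach:
            return False
    return bool(reach)

# Example: cycle a -> b -> c -> a; "bca" occurs as a path around it.
labels = {0: "a", 1: "b", 2: "c"}
edges = [(0, 1), (1, 2), (2, 0)]
print(pattern_in_graph(labels, edges, "bca"))  # True
```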

    Circuit complexity, proof complexity, and polynomial identity testing

    We introduce a new algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP is not equal to VP). As a corollary to the proof, we also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs (as opposed to the usual measure of number of monomials) imply the Permanent versus Determinant Conjecture. Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied any computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity. More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. We use these PIT axioms to shed light on AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. We show that either: a) Proving super-polynomial lower bounds on AC^0[p]-Frege implies VNP does not have polynomial-size circuits of depth d - a notoriously open question for d at least 4 - thus explaining the difficulty of lower bounds on AC^0[p]-Frege, or b) AC^0[p]-Frege cannot efficiently prove the depth d PIT axioms, and hence we have a lower bound on AC^0[p]-Frege. Using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity
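    For background, PIT asks whether a given algebraic circuit computes the identically zero polynomial. Here is a minimal sketch (not from the paper) of the classic randomized Schwartz-Zippel test, treating the circuit as a black-box function evaluated at integer points.

```python
import random

def probably_zero(poly, num_vars, degree_bound, trials=20):
    """Schwartz-Zippel randomized polynomial identity test.
    poly is a black box evaluating the polynomial at integer points.
    If the polynomial is nonzero with total degree <= degree_bound,
    a uniform point from S with |S| = 2 * degree_bound evaluates to
    nonzero with probability >= 1/2, so 'trials' independent rounds
    err with probability <= 2**-trials."""
    S = range(2 * degree_bound)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False  # nonzero witness: definitely not identically zero
    return True  # identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is the zero polynomial (degree bound 2).
print(probably_zero(lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y), 2, 2))
```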

    Which Regular Languages can be Efficiently Indexed?

    In the present work, we tackle the regular language indexing problem by first studying the hierarchy of p-sortable languages: regular languages accepted by automata of width p. We show that the hierarchy is strict and does not collapse, and provide upper and lower bounds (exponential in p) relating the minimum widths of equivalent NFAs and DFAs. Our bounds indicate the importance of being able to index NFAs, as they enable indexing regular languages with much faster and smaller indexes. Our second contribution solves precisely this problem, optimally: we devise a polynomial-time algorithm that indexes any NFA with the optimal value p for its width, without explicitly computing p (which is NP-hard to find). In particular, this implies that we can index in polynomial time the well-studied case p = 1 (Wheeler NFAs). More generally, in polynomial time we can build an index breaking the worst-case conditional lower bound of $\Omega(|P| m)$, whenever the input NFA's width is $p \in o(\sqrt{m})$.

    Subpath Queries on Compressed Graphs: A Survey

    Text indexing is a classical algorithmic problem that has been studied for over four decades: given a text T, pre-process it off-line so that, later, we can quickly count and locate the occurrences of any string (the query pattern) in T in time proportional to the query's length. The earliest optimal-time solution to the problem, the suffix tree, dates back to 1973 and requires up to two orders of magnitude more space than the plain text just to be stored. In the year 2000, two breakthrough works showed that efficient queries can be achieved without this space overhead: a fast index can be stored in space proportional to the text's entropy. These contributions had an enormous impact in bioinformatics: today, virtually every DNA aligner employs compressed indexes. Recent trends have considered more powerful compression schemes (dictionary compressors) and generalizations of the problem to labeled graphs: after all, texts can be viewed as labeled directed paths. In turn, since finite state automata can be considered a particular case of labeled graphs, these findings created a bridge between the fields of compressed indexing and regular language theory, ultimately allowing regular languages to be indexed and promising to shed new light on problems such as regular expression matching. This survey is a gentle introduction to the main landmarks of the fascinating journey that took us from suffix trees to today's compressed indexes for labeled graphs and regular languages.
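    As a toy illustration of the "index once, query fast" paradigm described above (a plain suffix array rather than one of the compressed indexes the survey covers; function names are hypothetical), the sketch below counts pattern occurrences with two binary searches over the sorted suffixes.

```python
def build_index(T):
    """Naive suffix-array construction: sort the suffix start positions
    of T. Quadratic here for brevity; linear-time constructions exist."""
    return sorted(range(len(T)), key=lambda i: T[i:])

def count_occurrences(T, sa, P):
    """Count occurrences of P in T with two binary searches over the
    suffix array: O(|P| log |T|) character comparisons per query,
    independent of how many occurrences there are."""
    def boundary(strict):
        # First suffix-array position whose |P|-length prefix is > P
        # (strict=True) or >= P (strict=False).
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            prefix = T[sa[mid]:sa[mid] + len(P)]
            if prefix < P or (strict and prefix == P):
                lo = mid + 1
            else:
                hi = mid
        return lo
    return boundary(strict=True) - boundary(strict=False)

T = "mississippi"
sa = build_index(T)
print(count_occurrences(T, sa, "ssi"))  # 2
```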

    Progress Report: 1991-1994
