
    Separating Cook Completeness from Karp-Levin Completeness Under a Worst-Case Hardness Hypothesis

    We show that there is a language that is Turing complete for NP but not many-one complete for NP, under a worst-case hardness hypothesis. Our hypothesis asserts the existence of a nondeterministic double-exponential-time machine that accepts Σ^* and runs in time O(2^{2^{n^c}}) (for some c > 1), whose accepting computations cannot be computed by bounded-error probabilistic machines running in time O(2^{β·2^{n^c}}) (for some β > 0). This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.
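
    For readability, here is the hypothesis restated as a single LaTeX display. The placement of the β factor inside the top exponent is my reading of the garbled source text and should be checked against the paper.

```latex
% Hedged restatement of the hypothesis above; the position of \beta in
% the top exponent is an assumption recovered from garbled text.
\begin{quote}
  \textbf{Hypothesis.} There exist $c > 1$, $\beta > 0$, and a
  nondeterministic machine $N$ with $L(N) = \Sigma^*$ running in time
  $O\bigl(2^{2^{n^c}}\bigr)$ such that no bounded-error probabilistic
  machine running in time $O\bigl(2^{\beta \cdot 2^{n^c}}\bigr)$ computes
  accepting computations of $N$.
\end{quote}
```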

    Randomness in completeness and space-bounded computations

    The study of computational complexity investigates the role of various computational resources such as processing time, memory requirements, nondeterminism, randomness, nonuniformity, etc. in solving different types of computational problems. In this dissertation, we study the role of randomness in two fundamental areas of computational complexity: NP-completeness and space-bounded computations.

    The concept of completeness plays an important role in defining the notion of 'hard' problems in computer science. Intuitively, an NP-complete problem captures the difficulty of solving any problem in NP. Polynomial-time reductions are at the heart of defining completeness. However, there is no single notion of reduction; researchers have identified various polynomial-time reductions such as many-one reduction, truth-table reduction, Turing reduction, etc. Each such notion of reduction induces a notion of completeness. Finding the relationships among the various NP-completeness notions is a significant open problem. Our first result separates two such polynomial-time completeness notions for NP, namely Turing completeness and many-one completeness. This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.

    Our next result involves a conjecture by Even, Selman, and Yacobi [ESY84, SY82] which states that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. If true, this conjecture implies that a certain kind of probabilistic public-key cryptosystem is not secure. The conjecture has been open for 30 years. We provide evidence in support of a variant of this conjecture: we show that if certain secure one-way functions exist, then the ESY conjecture for bounded-truth-table reductions holds.

    Next we turn our attention to space-bounded computations. We investigate probabilistic space-bounded machines that are allowed to access their random bits multiple times. Our main conceptual contribution here is to establish an interesting connection between the derandomization of such probabilistic space-bounded machines and the derandomization of probabilistic time-bounded machines. In particular, we show that if we can derandomize a multipass machine, even with a small number of passes over the random tape and only O(log^2 n) random bits, to deterministic polynomial time, then BPTIME(n) ⊆ DTIME(2^{o(n)}). Note that if we restrict the number of random bits to O(log n), then we can trivially derandomize the machine to polynomial time; furthermore, if we restrict the number of passes to O(1), we can still derandomize the machine to polynomial time. Thus our result implies that any extension beyond these trivialities will lead to an unknown derandomization of BPTIME(n).

    Our final contribution concerns the derandomization of probabilistic time-bounded machines under branching program lower bounds. The standard method of derandomizing time-bounded probabilistic machines depends on various circuit lower bounds, which are notoriously hard to prove. We show that the derandomization of low-degree polynomial identity testing, a well-known problem in co-RP, can be obtained under certain branching program lower bounds. Note that branching programs are considered a weaker model of computation than Boolean circuits.
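
    The "trivial" O(log n) derandomization mentioned above is worth making concrete: with only logarithmically many random bits there are just polynomially many random tapes, so one can enumerate them all. A minimal sketch under that assumption; `machine` and the function name are illustrative stand-ins, not objects from the dissertation:

```python
# A minimal sketch of the "trivial" derandomization: a bounded-error
# machine that uses only r(n) = c*log2(n) random bits has 2^r = n^c
# possible random tapes, so we can enumerate them deterministically
# in polynomial time and take a majority vote.
from math import ceil, log2

def derandomize_by_enumeration(machine, x: str, c: int = 2) -> bool:
    """Decide x by simulating `machine(x, tape)` on every random tape."""
    n = max(len(x), 2)
    r = ceil(c * log2(n))            # random bits used on inputs of length n
    accepts = 0
    for tape in range(2 ** r):       # only 2^r = n^c tapes: poly(n) work
        bits = format(tape, f"0{r}b")
        if machine(x, bits):
            accepts += 1
    # Bounded error (say, 2/3 correctness) makes the majority vote exact.
    return 2 * accepts > 2 ** r
```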

    Cryptography from Information Loss

    Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
    1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x1, . . ., xm), with each xi ∈ {0, 1}^n, and outputs a string z such that with high probability, L′(z) = f(L(x1), L(x2), . . ., L(xm)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (a) f is the OR function, t ≤ m/100, and L′ is the same as L; (b) f is the Majority function, and t ≤ m/100; (c) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions.
    2. Our second result is about the stronger notion of t-compressing f-reductions – reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, and which may be of independent interest.
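
    The two definitions at the heart of this abstract can be put side by side in LaTeX. This is a restatement of the abstract's own notation; modeling "with high probability" as 1 − negl is my assumption:

```latex
% t-lossy reductions and f-reductions, restated from the abstract.
\[
  \text{$C$ is $t$-lossy} \iff I\bigl(X;\, C(X)\bigr) \le t
  \quad \text{for every input distribution } X,
\]
\[
  \text{$C$ is an $f$-reduction from $L$ to $L'$} \iff
  \Pr\Bigl[\, L'\bigl(C(x_1, \dots, x_m)\bigr)
    = f\bigl(L(x_1), \dots, L(x_m)\bigr) \,\Bigr] \ge 1 - \mathrm{negl},
\]
% where the probability is over the randomness of C.
```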

    Nonuniform Reductions and NP-Completeness

    Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs. uniform completeness in NP. Under various hypotheses, we obtain the following separations:
    1. There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice.
    2. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity is stronger than a polynomial number of queries.
    3. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it more powerful than a nonuniform reduction with fixed polynomial advice.
    4. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing.
    We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that, unconditionally, every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, the same statement also holds for truth-table reductions and truth-table completeness.
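
    A standard formalization of the central object here, consistent with the abstract's usage (textbook notation, not quoted from the paper): a nonuniform many-one reduction is a uniform reduction given a length-dependent advice string.

```latex
% Many-one reduction with advice bound p(n) (standard definition).
\[
  A \le_{\mathrm{m}}^{\mathrm{p}/p(n)} B
  \iff
  \exists f \in \mathrm{FP},\ \exists h : \mathbb{N} \to \Sigma^*
  \text{ with } |h(n)| \le p(n), \text{ such that }
  x \in A \Leftrightarrow f\bigl(x, h(|x|)\bigr) \in B .
\]
```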

    A Casual Tour Around a Circuit Complexity Bound

    I will discuss the recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size. The proof will be described from the perspective of someone trying to discover it. (21 pages, 2 figures. An earlier version appeared in SIGACT News, September 2011.)

    Complexity of certificates, heuristics, and counting types, with applications to cryptography and circuit theory

    This habilitation thesis investigates the structure and properties of complexity classes such as P and NP, in particular with regard to: certificate complexity, one-way functions, heuristics against NP-completeness, and counting complexity. On the last point, we specifically study: (a) the complexity of counting properties of circuits, (b) separations of counting classes with immunity, and (c) the complexity of counting the solutions of "tally" NP problems.

    IST Austria Thesis

    A search problem lies in the complexity class FNP if a solution to the given instance of the problem can be verified efficiently. The complexity class TFNP consists of all search problems in FNP that are total, in the sense that a solution is guaranteed to exist. TFNP contains a host of interesting problems from fields such as algorithmic game theory, computational topology, number theory and combinatorics. Since TFNP is a semantic class, it is unlikely to have a complete problem. Instead, one studies its syntactic subclasses, which are defined based on the combinatorial principle used to argue totality. Of particular interest is the subclass PPAD, whose complete problems include computing a Nash equilibrium for bimatrix games and computational counterparts of several fixed-point theorems. In this thesis, we undertake the study of average-case hardness of TFNP, and in particular of its subclass PPAD. Almost nothing was known about the average-case hardness of PPAD before a series of recent results showed how to achieve it using a cryptographic primitive called program obfuscation. However, it is currently not known how to construct program obfuscation from standard cryptographic assumptions. Therefore, it is desirable to relax the assumption under which average-case hardness of PPAD can be shown. In this thesis we take a step in this direction. First, we show that, assuming the (average-case) hardness of a number-theoretic problem related to the factoring of integers, which we call Iterated-Squaring, PPAD is hard-on-average in the random-oracle model. Then we strengthen this result to show that the average-case hardness of PPAD reduces to the (adaptive) soundness of the Fiat-Shamir transform, a well-known technique used to compile a public-coin interactive protocol into a non-interactive one. As a corollary, we obtain average-case hardness for PPAD in the random-oracle model assuming the worst-case hardness of #SAT. Moreover, the above results can all be strengthened to obtain average-case hardness for the class CLS ⊆ PPAD. Our main technical contribution is constructing incrementally-verifiable procedures for computing Iterated-Squaring and #SAT. By incrementally-verifiable, we mean that every intermediate state of the computation includes a proof of its correctness, and the proof can be updated and verified in polynomial time. Previous constructions of such procedures relied on strong, non-standard assumptions; instead, we introduce a technique called recursive proof-merging to obtain the same from weaker assumptions.
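
    To make the Iterated-Squaring problem concrete, here is a minimal sketch of the function being computed. The RSA-modulus setting and the trapdoor shortcut are the standard presentation of this problem, not quoted from the thesis:

```python
# Iterated squaring: given a modulus N = p*q, a base g, and a step
# count T, compute g^(2^T) mod N. Without the factorization of N, no
# shortcut beating T sequential squarings is known, which is the
# conjectured source of hardness.

def iterated_squaring(g: int, T: int, N: int) -> int:
    """Compute g^(2^T) mod N by T sequential squarings."""
    x = g % N
    for _ in range(T):
        x = (x * x) % N
    return x

def iterated_squaring_with_trapdoor(g: int, T: int, p: int, q: int) -> int:
    """With the factorization p*q = N (the trapdoor), shortcut via
    Euler's theorem: reduce the exponent 2^T modulo phi(N) first.
    Assumes gcd(g, N) = 1."""
    phi = (p - 1) * (q - 1)
    return pow(g, pow(2, T, phi), p * q)
```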

    Approximation, Proof Systems, and Correlations in a Quantum World

    This thesis studies three topics in quantum computation and information: The approximability of quantum problems, quantum proof systems, and non-classical correlations in quantum systems. In the first area, we demonstrate a polynomial-time (classical) approximation algorithm for dense instances of the canonical QMA-complete quantum constraint satisfaction problem, the local Hamiltonian problem. In the opposite direction, we next introduce a quantum generalization of the polynomial-time hierarchy, and define problems which we prove are not only complete for the second level of this hierarchy, but are in fact hard to approximate. In the second area, we study variants of the interesting and stubbornly open question of whether a quantum proof system with multiple unentangled quantum provers is equal in expressive power to a proof system with a single quantum prover. Our results concern classes such as BellQMA(poly), and include a novel proof of perfect parallel repetition for SepQMA(m) based on cone programming duality. In the third area, we study non-classical quantum correlations beyond entanglement, often dubbed "non-classicality". Among our results are two novel schemes for quantifying non-classicality: The first proposes the new paradigm of exploiting local unitary operations to study non-classical correlations, and the second introduces a protocol through which non-classical correlations in a starting system can be "activated" into distillable entanglement with an ancilla system. An introduction to all required linear algebra and quantum mechanics is included. (PhD thesis, 240 pages.)
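
    For context, the local Hamiltonian problem mentioned above has a standard promise-problem formulation (textbook statement, not quoted from the thesis):

```latex
% k-local Hamiltonian problem (standard form): given H = \sum_i H_i on
% n qubits, each H_i acting on at most k qubits, and thresholds a < b
% with b - a \ge 1/\mathrm{poly}(n), decide which case holds:
\[
  \lambda_{\min}(H) \le a \ \ (\text{YES instance})
  \qquad \text{or} \qquad
  \lambda_{\min}(H) \ge b \ \ (\text{NO instance}).
\]
```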

    Fine-Grained Complexity: Exploring Reductions and their Properties

    Algorithmic design has been one of the main subjects of interest in computer science. While very effective in some areas, this approach has met with practical dead ends that have been very problematic for the progress of the field. Classical computational complexity practices have also not been able to bypass these blocks. Understanding the hardness of each problem is not trivial. Fine-grained complexity provides new perspectives on classic problems, resulting in solid links between famous conjectures in complexity and algorithmic design. It serves as a tool to prove conditional lower bounds for problems with polynomial time complexity, a field that had seen very little progress until now. Popular conjectures such as SETH, k-OV, 3SUM, and APSP imply many bounds that have yet to be proven using classic techniques, and provide a new understanding of the structure and entropy of problems in general. The aim of this thesis is to contribute towards solidifying the framework for reductions from each conjecture, and to explore the structural differences between the problems in each case.
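
    As a representative example of the conjectures listed above, here is the Orthogonal Vectors hypothesis (the k = 2 case of k-OV) in its standard form; this is standard phrasing, not quoted from the thesis:

```latex
% Orthogonal Vectors hypothesis (OVH), standard statement.
\[
  \forall \varepsilon > 0: \text{ no } O\bigl(n^{2-\varepsilon}\bigr)\text{-time
  algorithm decides, given } A, B \subseteq \{0,1\}^d,\ |A| = |B| = n,\
  d = \omega(\log n), \text{ whether } \exists\, a \in A,\ b \in B:
  \ \langle a, b \rangle = 0 .
\]
```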

    Martingale families and dimension in P

    We introduce a new measure notion on small complexity classes (called F-measure), based on martingale families, that gets rid of some drawbacks of previous measure notions: it can be used to define dimension, because martingale families can make money on all strings, and it yields random sequences with an equal frequency of 0's and 1's. On larger complexity classes (E and above), F-measure is equivalent to Lutz resource-bounded measure. As applications of F-measure, we answer a question raised in [E. Allender, M. Strauss, Measure on small complexity classes, with application for BPP, in: Proc. of the 35th Ann. IEEE Symp. on Found. of Comp. Sci., 1994, pp. 807–818] by improving their result to: for almost every language A decidable in subexponential time, P^A = BPP^A. We show that almost all languages in PSPACE do not have small non-uniform complexity. We compare F-measure to previous notions and prove that martingale families are strictly stronger than Γ-measure [Allender and Strauss, FOCS 1994]; we also discuss the limitations of martingale families concerning finite unions. We observe that all classes closed under polynomial many-one reductions have measure zero in EXP iff they have measure zero in SUBEXP. We use martingale families to introduce a natural generalization of Lutz resource-bounded dimension [J.H. Lutz, Dimension in complexity classes, in: Proceedings of the 15th Annual IEEE Conference on Computational Complexity, 2000, pp. 158–169] on P, which meets the intuition behind Lutz's notion. We show that P-dimension lies between finite-state dimension and dimension on E. We prove an analogue of a theorem of Eggleston in P: the class of languages whose characteristic sequence contains 1's with frequency α has P-dimension equal to the Shannon entropy of α.
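
    For readers new to resource-bounded measure, the underlying object is a martingale: a betting strategy whose capital is averaged over the two possible one-bit extensions of a string. This is the textbook definition; the paper's martingale families refine it:

```latex
% A martingale d : {0,1}^* -> [0, \infty) satisfies the fairness condition
\[
  d(w) = \frac{d(w0) + d(w1)}{2},
\]
% and succeeds on a language A if its capital along A's characteristic
% sequence is unbounded:
\[
  \limsup_{n \to \infty} d\bigl(A {\upharpoonright} n\bigr) = \infty .
\]
```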