
    Patterns hidden from simple algorithms

    Is the number 9021960864034418159813 random? Educated opinions might vary from “No! No single string can be random,” to the more contemptuous “Come on! Those are just the 714th to 733rd digits of π.” Yet, to my limited mind, the string did appear random. Is there a way to use some formal mathematics to justify my naïveté? The modern theory of pseudorandomness indeed manages to explain such phenomena, where strings appear random to simple minds. The key, this theory argues, is that randomness is really in the “eyes of the beholder,” or rather in the computing power of the tester of randomness. More things appear random to simpler, or resource-limited, algorithms than to complex, powerful algorithms.
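
    As a rough illustration of the opening example (my own sketch, not part of the review): one can check mechanically where a digit string sits in the decimal expansion of π, e.g. with the mpmath library. The offsets quoted in the abstract are taken on faith here; the snippet only shows how such a claim would be tested.

```python
# A minimal sketch (assumption: the mpmath library is available) of checking
# whether a digit string occurs in the decimal expansion of pi, and where.
from mpmath import mp, nstr

mp.dps = 1200                                    # working precision, in digits
digits = nstr(mp.pi, 1100).replace("3.", "", 1)  # fractional digits of pi

target = "9021960864034418159813"
pos = digits.find(target)
if pos >= 0:
    # pos is 0-indexed, so the match starts at digit pos + 1 after the point
    print(f"found, starting at digit {pos + 1} of pi's fractional part")
else:
    print("not found within the first ~1100 digits")
```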

    On Oblivious Amplification of Coin-Tossing Protocols

    We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most ε, we aim to obtain a new protocol where the output can be biased by at most ε* < ε. We rule out the existence of a natural type of amplifiers called oblivious amplifiers for every ε* < ε. Such amplifiers ignore the way that the underlying ε-bias protocol works and can only invoke an oracle that provides ε-bias bits. We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof for the Santha-Vazirani impossibility.
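
    The first reduction relies on the classical fact that no deterministic function extracts an almost-unbiased bit from a Santha-Vazirani source. A toy sketch of that fact (my illustration, not the paper's proof): for any f on n bits, an adversary who may set each conditional next-bit probability anywhere in [1/2 - δ, 1/2 + δ] can bias f's output by at least δ, and the adversary's best strategy can be computed by backward induction over prefixes.

```python
# A toy sketch (not from the paper): impossibility of deterministic extraction
# from a Santha-Vazirani source.  For any f: {0,1}^n -> {0,1}, an adversary who
# chooses each next-bit probability within [1/2 - delta, 1/2 + delta] (given the
# prefix seen so far) can push Pr[f = 1] or Pr[f = 0] up to at least 1/2 + delta.
from functools import lru_cache

n, delta = 10, 0.1

def f(bits):                       # example target function: parity of the bits
    return sum(bits) % 2

def extreme_prob(maximize):
    """Best (max or min) achievable Pr[f(X) = 1] over all delta-SV strategies."""
    @lru_cache(maxsize=None)
    def value(prefix):
        if len(prefix) == n:
            return float(f(prefix))
        v1, v0 = value(prefix + (1,)), value(prefix + (0,))
        # the adversary picks the endpoint of [1/2 - delta, 1/2 + delta]
        # that pushes the expectation in the desired direction
        p = 0.5 + delta if (v1 >= v0) == maximize else 0.5 - delta
        return p * v1 + (1 - p) * v0
    return value(())

hi, lo = extreme_prob(True), extreme_prob(False)
print(f"Pr[f = 1] can be pushed up to {hi:.3f} or down to {lo:.3f}")
print(f"so some output value has probability at least {max(hi, 1 - lo):.3f} >= 1/2 + delta")
```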

    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D′ over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D′. As a consequence of the above theorem, we show hardness amplification of problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
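
    To make the "direct product feasible" condition concrete, here is a small sketch (my own illustration, not the paper's construction) using Max-SAT, one of the problems listed above: k formulas are aggregated by renaming their variables to be disjoint and taking the union of the clauses, so the satisfied-clause count splits as a sum over the blocks and an optimal assignment for the large instance restricts to an optimal assignment for each small one.

```python
# A minimal sketch (my illustration, not the paper's scheme) of the
# direct-product-feasibility interface, instantiated with Max-SAT on
# variable-disjoint blocks.
from typing import Dict, List, Tuple

Clause = Tuple[int, ...]     # DIMACS-style literals: +v / -v
Instance = List[Clause]      # a CNF formula as a list of clauses
Assignment = Dict[int, bool] # variable index -> truth value

def aggregate(instances: List[Instance]) -> Tuple[Instance, List[int]]:
    """Rename variables so the k formulas are disjoint, then take the union."""
    big, offsets, offset = [], [], 0
    for inst in instances:
        offsets.append(offset)
        width = max(abs(lit) for cl in inst for lit in cl)
        for cl in inst:
            big.append(tuple(lit + offset if lit > 0 else lit - offset for lit in cl))
        offset += width
    return big, offsets

def split_solution(assignment: Assignment, instances: List[Instance],
                   offsets: List[int]) -> List[Assignment]:
    """Restrict an (optimal) assignment of the big instance to each block.
    Because the blocks share no variables, the number of satisfied clauses is a
    sum over blocks, so an optimal global assignment is optimal on every block."""
    out = []
    for inst, off in zip(instances, offsets):
        width = max(abs(lit) for cl in inst for lit in cl)
        out.append({v: assignment[v + off] for v in range(1, width + 1)})
    return out
```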

    Pseudorandom generators and the BQP vs. PH problem

    It is a longstanding open problem to devise an oracle relative to which BQP does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural conjecture about the capacity of the Nisan-Wigderson pseudorandom generator [NW94] to fool AC_0, with MAJORITY as its hard function. Our conjecture is essentially that the loss due to the hybrid argument (which is a component of the standard proof from [NW94]) can be avoided in this setting. This is a question that has been asked previously in the pseudorandomness literature [BSW03]. We then make three main contributions: (1) We show that our conjecture implies the existence of an oracle relative to which BQP is not in the PH. This entails giving an explicit construction of unitary matrices, realizable by small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a simple framework (generalizing the setting of Aaronson [A10]) in which any efficiently quantumly computable unitary gives rise to a distribution that can be distinguished from the uniform distribution by an efficient quantum algorithm. When applied to the unitaries we construct, this framework yields a problem that can be solved quantumly, and which forms the basis for the desired oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our conjecture; our conjecture is thus formally easier to prove. The GLN conjecture was recently proved false for depth greater than 2 [A10a], but it remains open for depth 2. If true, the depth-2 version of either conjecture would imply an oracle relative to which BQP is not in AM, which is itself an outstanding open problem. Taken together, our results have the following interesting interpretation: they give an instantiation of the Nisan-Wigderson generator that can be broken by quantum computers, but not by the relevant modes of classical computation, if our conjecture is true. Comment: updated in light of the counterexample to the GLN conjecture.
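
    For reference, the Nisan-Wigderson construction the conjecture concerns has a simple shape: fix a family of seed positions S_1, ..., S_m with small pairwise intersections, and output one bit per set by applying the hard function, here MAJORITY, to the seed restricted to that set. The sketch below is the generic textbook form with a toy greedy design, not the paper's specific parameters.

```python
# A minimal sketch (standard textbook form, not the paper's instantiation) of
# the Nisan-Wigderson generator with MAJORITY as the hard function.
from itertools import combinations
from typing import List, Sequence

def majority(bits: Sequence[int]) -> int:
    """Strict majority of the bits; ties go to 0 in this toy version."""
    return int(sum(bits) * 2 > len(bits))

def nw_generator(seed: Sequence[int], design: List[Sequence[int]]) -> List[int]:
    """Map a d-bit seed to len(design) output bits: one MAJORITY per design set."""
    return [majority([seed[i] for i in s]) for s in design]

def toy_design(d: int, set_size: int, max_overlap: int) -> List[Sequence[int]]:
    """Greedily pick subsets of [d] with small pairwise intersections.
    (Real constructions use algebraic designs; this greedy version only makes
    the definition concrete on toy sizes.)"""
    design = []
    for cand in combinations(range(d), set_size):
        if all(len(set(cand) & set(s)) <= max_overlap for s in design):
            design.append(cand)
    return design

# Example: a 16-bit seed stretched to a few output bits.
design = toy_design(d=16, set_size=4, max_overlap=1)
seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(nw_generator(seed, design))
```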

    Simulation Theorems via Pseudorandom Properties

    We generalize the deterministic simulation theorem of Raz and McKenzie [RM99] to any gadget which satisfies a certain hitting property. We prove that inner-product and gap-Hamming satisfy this property, and as a corollary we obtain a deterministic simulation theorem for these gadgets, where the gadget's input-size is logarithmic in the input-size of the outer function. This answers an open question posed by Göös, Pitassi and Watson [GPW15]. Our result also implies the previous results for the Indexing gadget, with better parameters than previously known. A preliminary version of the results obtained in this work appeared in [CKL+17].
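
    As a concrete picture of what is being simulated (my illustration, not the paper's proof): a simulation or lifting theorem studies the two-party composition of an outer function F with a gadget such as inner-product, where each input bit of F is encoded by one gadget block split between the two players, and the block size is logarithmic in F's input size in the result above.

```python
# A minimal sketch (my illustration, not from the paper) of composing an outer
# function F with the inner-product gadget IP(x, y) = <x, y> mod 2: Alice holds
# the x-blocks, Bob the y-blocks, and the lifted problem is F on the decoded bits.
from typing import Callable, List, Sequence

def inner_product_gadget(x_block: Sequence[int], y_block: Sequence[int]) -> int:
    """The gadget on one block: inner product mod 2."""
    return sum(a * b for a, b in zip(x_block, y_block)) % 2

def lifted(F: Callable[[List[int]], int],
           alice: List[Sequence[int]],
           bob: List[Sequence[int]]) -> int:
    """The composed function F o IP^n: decode each block, then apply F."""
    z = [inner_product_gadget(x, y) for x, y in zip(alice, bob)]
    return F(z)

# Example: the outer function is OR on 3 bits, with 4-bit gadget blocks.
F_or = lambda z: int(any(z))
alice = [[1, 0, 1, 0], [0, 0, 1, 1], [1, 1, 1, 0]]
bob   = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(lifted(F_or, alice, bob))   # OR of the three decoded IP bits
```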

    Computational Indistinguishability between Quantum States and Its Cryptographic Application

    We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is "secure" against any polynomial-time quantum adversary. Our problem, QSCDff, is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show that QSCDff has three properties of cryptographic interest: (i) QSCDff has a trapdoor; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard as the graph automorphism problem in the worst case. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme that relies on similar cryptographic properties of QSCDcyc. Comment: 24 pages, 2 figures. We improved the presentation, and added more detailed proofs and follow-up of recent work.

    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
    1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0, 1}^n, and outputs a string z such that with high probability, L′(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
       - f is the OR function, t ≤ m/100, and L′ is the same as L;
       - f is the Majority function, and t ≤ m/100;
       - f is the OR function, t ≤ O(m log n), and the reduction has no error.
    This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions.
    2. Our second result is about the stronger notion of t-compressing f-reductions: reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
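
    A tiny numeric check of the t-lossy definition (my illustration, not one of the paper's reductions): for a deterministic map C, I(X; C(X)) = H(C(X)), so any reduction whose output is a single bit, e.g. the OR of the m input bits, is 1-lossy regardless of m, comfortably below the t ≤ m/100 threshold used above.

```python
# A toy numeric check (my illustration, not from the paper) of the t-lossy
# definition: for a deterministic reduction C, I(X; C(X)) = H(C(X)), so a
# reduction that outputs a single bit -- here the OR of m uniform input bits --
# leaks at most one bit of information about X.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

m = 100
p0 = 2.0 ** (-m)                 # Pr[OR(X) = 0] under uniform X
info = entropy([p0, 1 - p0])     # I(X; OR(X)) = H(OR(X)) for deterministic C
print(f"I(X; OR(X)) = {info:.3e} bits  <=  t = m/100 = {m / 100}")
```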

    On Nonadaptive Security Reductions of Hitting Set Generators

    One of the central open questions in the theory of average-case complexity is to establish the equivalence between the worst-case and average-case complexity of the Polynomial-time Hierarchy (PH). One general approach is to show that there exists a PH-computable hitting set generator whose security is based on some NP-hard problem. We present the limits of such an approach, by showing that there exists no exponential-time-computable hitting set generator whose security can be proved by using a nonadaptive randomized polynomial-time reduction from any problem outside AM ∩ coAM, which significantly improves the previous upper bound BPP^NP of Gutfreund and Vadhan (RANDOM/APPROX 2008 [Gutfreund and Vadhan, 2008]). In particular, any security proof of a hitting set generator based on some NP-hard problem must use either an adaptive or non-black-box reduction (unless the polynomial-time hierarchy collapses). To the best of our knowledge, this is the first result that shows limits of black-box reductions from an NP-hard problem to some form of a distributional problem in DistPH. Based on our results, we argue that the recent worst-case to average-case reduction of Hirahara (FOCS 2018 [Hirahara, 2018]) is inherently non-black-box, without relying on any unproven assumptions. On the other hand, combining the non-black-box reduction with our simulation technique of black-box reductions, we exhibit the existence of a "non-black-box selector" for GapMCSP, i.e., an efficient algorithm that solves GapMCSP given as advice two circuits, one of which is guaranteed to compute GapMCSP.
    • 
