6,077 research outputs found

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the $P \neq NP$ assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
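
    In one common formalization (a sketch of standard definitions, not necessarily the survey's exact ones): a distributional problem is a pair $(L, \mathcal{D})$ with $L \in NP$ and $\mathcal{D} = \{D_n\}$ an ensemble of input distributions; $\mathcal{D}$ is samplable if some probabilistic polynomial-time sampler $S$ satisfies $\Pr[S(1^n) = x] = D_n(x)$ for all $n$ and $x$; and $(L, \mathcal{D})$ is easy on average if for every polynomial $p$ there is a polynomial-time algorithm $A$ with
    $$\Pr_{x \sim D_n}\big[A(x) \neq L(x)\big] \le \frac{1}{p(n)} \quad \text{for all sufficiently large } n.$$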

    Pseudorandom generators and the BQP vs. PH problem

    It is a longstanding open problem to devise an oracle relative to which BQP does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural conjecture about the capacity of the Nisan-Wigderson pseudorandom generator [NW94] to fool AC_0, with MAJORITY as its hard function. Our conjecture is essentially that the loss due to the hybrid argument (which is a component of the standard proof from [NW94]) can be avoided in this setting. This is a question that has been asked previously in the pseudorandomness literature [BSW03]. We then make three main contributions: (1) We show that our conjecture implies the existence of an oracle relative to which BQP is not in the PH. This entails giving an explicit construction of unitary matrices, realizable by small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a simple framework (generalizing the setting of Aaronson [A10]) in which any efficiently quantumly computable unitary gives rise to a distribution that can be distinguished from the uniform distribution by an efficient quantum algorithm. When applied to the unitaries we construct, this framework yields a problem that can be solved quantumly, and which forms the basis for the desired oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our conjecture; our conjecture is thus formally easier to prove. The GLN conjecture was recently proved false for depth greater than 2 [A10a], but it remains open for depth 2. If true, the depth-2 version of either conjecture would imply an oracle relative to which BQP is not in AM, which is itself an outstanding open problem. Taken together, our results have the following interesting interpretation: they give an instantiation of the Nisan-Wigderson generator that can be broken by quantum computers, but not by the relevant modes of classical computation, if our conjecture is true.
    Comment: Updated in light of the counterexample to the GLN conjecture
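
    To make the construction concrete, here is a toy sketch of a Nisan-Wigderson style generator with MAJORITY as the hard function (Python; the greedy "design" below is only an illustrative stand-in for the combinatorial designs of [NW94], and the parameters are not those the conjecture refers to):

    from itertools import combinations
    import random

    def majority(bits):
        """MAJORITY on a tuple of 0/1 values (the hard function in this instantiation)."""
        return int(sum(bits) > len(bits) / 2)

    def toy_design(seed_len, set_size, num_sets, max_overlap):
        """Greedily collect subsets of range(seed_len), each of size set_size, with
        pairwise intersections of size at most max_overlap -- a toy stand-in for a
        Nisan-Wigderson combinatorial design."""
        sets = []
        for cand in combinations(range(seed_len), set_size):
            if all(len(set(cand) & set(s)) <= max_overlap for s in sets):
                sets.append(cand)
                if len(sets) == num_sets:
                    break
        return sets

    def nw_generator(seed, design, hard_fn=majority):
        """The i-th output bit is the hard function applied to the seed restricted
        to the i-th design set."""
        return [hard_fn(tuple(seed[j] for j in s)) for s in design]

    # Toy usage: stretch a 12-bit seed to (up to) 8 output bits.
    design = toy_design(seed_len=12, set_size=5, num_sets=8, max_overlap=2)
    seed = [random.randint(0, 1) for _ in range(12)]
    print(nw_generator(seed, design))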

    Towards Human Computable Passwords

    An interesting challenge for the cryptography community is to design authentication protocols that are so simple that a human can execute them without relying on a fully trusted computer. We propose several candidate authentication protocols for a setting in which the human user can only receive assistance from a semi-trusted computer --- a computer that stores information and performs computations correctly but does not provide confidentiality. Our schemes use a semi-trusted computer to store and display public challenges $C_i \in [n]^k$. The human user memorizes a random secret mapping $\sigma: [n] \rightarrow \mathbb{Z}_d$ and authenticates by computing responses $f(\sigma(C_i))$ to a sequence of public challenges, where $f: \mathbb{Z}_d^k \rightarrow \mathbb{Z}_d$ is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample $m = \tilde{\Omega}(n^{s(f)})$ challenge-response pairs to recover $\sigma$, for a security parameter $s(f)$ that depends on two key properties of $f$. To obtain our results, we apply the general hypercontractivity theorem to lower bound the statistical dimension of the distribution over challenge-response pairs induced by $f$ and $\sigma$. Our lower bounds apply to arbitrary functions $f$ (not just to functions that are easy for a human to evaluate), and generalize recent results of Feldman et al. As an application, we propose a family of human computable password functions $f_{k_1,k_2}$ in which the user needs to perform $2k_1 + 2k_2 + 1$ primitive operations (e.g., adding two digits or remembering $\sigma(i)$), and we show that $s(f) = \min\{k_1 + 1, (k_2 + 1)/2\}$. For these schemes, we prove that forging passwords is equivalent to recovering the secret mapping. Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user log in to many different accounts.
    Comment: Fixed a bug in the definition of $Q^{f,j}$ and modified the proofs accordingly
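
    A minimal sketch of the challenge-response mechanics described above (Python; the response function here is a plain sum modulo $d$, chosen only for illustration and not the paper's $f_{k_1,k_2}$ family, and the parameters are arbitrary):

    import random

    N, K, D = 26, 4, 10  # alphabet size n, challenge length k, digit base d (illustrative)

    def keygen(n=N, d=D):
        """The user's secret: a random mapping sigma from [n] to Z_d."""
        return [random.randrange(d) for _ in range(n)]

    def challenge(n=N, k=K):
        """A public challenge C in [n]^k, stored and shown by the semi-trusted computer."""
        return [random.randrange(n) for _ in range(k)]

    def respond(sigma, c, d=D):
        """The user's response f(sigma(C)); f is a sum mod d here, purely for illustration."""
        return sum(sigma[i] for i in c) % d

    # One authentication round: the verifier, who knows sigma, checks the response.
    sigma = keygen()
    c = challenge()
    print("challenge:", c, "-> expected response:", respond(sigma, c))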

    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kučera-Gács theorem is examined from the perspective of decompression, by showing that every infinite sequence S is Turing reducible to a Martin-Löf random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors that are developed lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.
    Comment: This report was combined with a different conference paper, "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as Section 5 of the new combined paper. The combined paper was accepted to the journal Theory of Computing Systems, as part of a special issue of invited papers from the second conference on Computability in Europe, 2006
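
    In symbols, the decompression statement above says roughly the following (a sketch; the notation and the form of the limit are my own reading, with $\mathrm{dim}(S)$ denoting constructive dimension, which equals the $\liminf$ of the Kolmogorov-complexity rate):
    $$\frac{\#\{\text{bits of } R \text{ queried to compute } S \upharpoonright n\}}{n} \;\longrightarrow\; \mathrm{dim}(S) \quad (n \to \infty), \qquad \mathrm{dim}(S) = \liminf_{n \to \infty} \frac{K(S \upharpoonright n)}{n},$$
    and no Turing reduction from $S$ to a Martin-Löf random sequence achieves a smaller asymptotic ratio.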

    Derandomizing from Random Strings

    In this paper we show that BPP is truth-table reducible to the set of Kolmogorov-random strings R_K. It was previously known that PSPACE, and hence BPP, is Turing-reducible to R_K. The earlier proof relied on the adaptivity of the Turing reduction to find a Kolmogorov-random string of polynomial length using the set R_K as an oracle. Our new non-adaptive result relies on a new fundamental fact about the set R_K, namely that each initial segment of the characteristic sequence of R_K is not compressible by recursive means. As a partial converse to our claim, we show that strings of high Kolmogorov complexity, when used as advice, are not much more useful than randomly chosen strings.
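
    For reference, writing $R_K$ for the set of Kolmogorov-random strings, the reducibility claims above read as follows (a notational sketch; the exact complexity measure and randomness threshold follow the paper's conventions):
    $$R_K = \{\, x : K(x) \ge |x| \,\}, \qquad \mathrm{BPP} \subseteq \mathrm{P}^{R_K}_{tt}, \qquad \mathrm{PSPACE} \subseteq \mathrm{P}^{R_K}.$$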