
    The Value of Help Bits in Randomized and Average-Case Complexity

    "Help bits" are some limited trusted information about an instance or instances of a computational problem that may reduce the computational complexity of solving that instance or instances. In this paper, we study the value of help bits in the settings of randomized and average-case complexity. Amir, Beigel, and Gasarch (1990) show that for constant kk, if kk instances of a decision problem can be efficiently solved using less than kk bits of help, then the problem is in P/poly. We extend this result to the setting of randomized computation: We show that the decision problem is in P/poly if using \ell help bits, kk instances of the problem can be efficiently solved with probability greater than 2k2^{\ell-k}. The same result holds if using less than k(1h(α))k(1 - h(\alpha)) help bits (where h()h(\cdot) is the binary entropy function), we can efficiently solve (1α)(1-\alpha) fraction of the instances correctly with non-vanishing probability. We also extend these two results to non-constant but logarithmic kk. In this case however, instead of showing that the problem is in P/poly we show that it satisfies "kk-membership comparability," a notion known to be related to solving kk instances using less than kk bits of help. Next we consider the setting of average-case complexity: Assume that we can solve kk instances of a decision problem using some help bits whose entropy is less than kk when the kk instances are drawn independently from a particular distribution. Then we can efficiently solve an instance drawn from that distribution with probability better than 1/21/2. Finally, we show that in the case where kk is super-logarithmic, assuming kk-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a "black-box proof.

    A PCP Characterization of AM

    We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a "PCP characterization" of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some "Randomized Optimization Hypotheses" related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size $2^{\Omega(n)}$, then there exist hard-on-average optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
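    As a rough schematic of what a 2-round stochastic CSP value looks like (our own gloss, not the paper's exact definition): the first round is a random move and the second an existential one, mirroring the Arthur-Merlin interaction, so the quantity to approximate has the form

        \mathrm{val}(\Phi) = \mathbb{E}_{r}\Bigl[\max_{y} \frac{1}{m}\sum_{j=1}^{m} \Phi_j(r, y)\Bigr],

    where $r$ plays the role of Arthur's coins, $y$ of Merlin's reply, and $\Phi_1, \dots, \Phi_m$ are the constraints. Deciding whether $\mathrm{val}(\Phi)$ is near 1 or near 0 is then an AM-flavored promise problem.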

    Pseudorandomness for Regular Branching Programs via Fourier Analysis

    We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of $s^{1/2+o(1)}$ for arbitrary branching programs of size $s$). Our techniques also give seed length $n^{1/2+o(1)}$ for general oblivious, read-once branching programs of width $2^{n^{o(1)}}$, which is incomparable to the results of Impagliazzo et al. Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $w$ has Fourier mass at most $(2w^2)^k$ at level $k$, independent of the length of the program.
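    For readers unfamiliar with the terminology, the level-$k$ Fourier mass referred to above is, in the standard matrix-valued formulation (the exact norm and normalization in the paper may differ),

        L_k(B) = \sum_{s \subseteq [n],\ |s| = k} \bigl\lVert \widehat{B}[s] \bigr\rVert, \qquad \widehat{B}[s] = \mathbb{E}_{x}\bigl[B(x)\,\chi_s(x)\bigr], \quad \chi_s(x) = \prod_{i \in s} (-1)^{x_i},

    where $B(x)$ is the product of the program's (matrix-valued) transitions on input $x$. The theorem stated above bounds $L_k(B) \le (2w^2)^k$ for regular width-$w$ programs, independent of $n$.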

    Better Pseudorandom Generators from Milder Pseudorandom Restrictions

    We present an iterative approach to constructing pseudorandom generators, based on the repeated application of mild pseudorandom restrictions. We use this template to construct pseudorandom generators for combinatorial rectangles and read-once CNFs and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed length even in the low-error regime: we get seed length $O(\log(n/\epsilon))$ for error $\epsilon$. Previously, only constructions with seed length $O(\log^{3/2} n)$ or $O(\log^2 n)$ were known for these classes with polynomially small error. The (pseudo)random restrictions we use are milder than those typically used for proving circuit lower bounds, in that we only set a constant fraction of the bits at a time. While such restrictions do not simplify the functions drastically, we show that they can be derandomized using small-bias spaces.
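    The high-level template reads naturally as pseudocode. The sketch below is our own schematic illustration, not the paper's construction: in the real generator, both the choice of which bits to fix and their values come from small-bias distributions expanded from short seeds, whereas the stand-in smallbias_sample here just uses Python's PRNG.

        import random

        def smallbias_sample(n, seed):
            # Hypothetical stand-in for a sample from a small-bias space over
            # {0,1}^n. A real construction expands a short seed into n nearly
            # unbiased bits; here we simply use Python's PRNG for illustration.
            rng = random.Random(seed)
            return [rng.randint(0, 1) for _ in range(n)]

        def iterative_restriction_prg(n, rounds, seeds):
            """Schematic: build an n-bit output by repeatedly fixing a constant
            fraction (roughly half) of the still-unfixed positions per round."""
            output = [None] * n
            for r in range(rounds):
                unfixed = [i for i in range(n) if output[i] is None]
                if not unfixed:
                    break
                # One sample decides which unfixed bits get set this round...
                selector = smallbias_sample(len(unfixed), seeds[2 * r])
                # ...and another decides the values they are set to.
                values = smallbias_sample(len(unfixed), seeds[2 * r + 1])
                for j, i in enumerate(unfixed):
                    if selector[j] == 1:
                        output[i] = values[j]
            # Fill any positions that survived every round.
            leftovers = smallbias_sample(n, seeds[-1])
            return [b if b is not None else leftovers[i] for i, b in enumerate(output)]

        # Example: a 64-bit output from short integer seeds.
        print("".join(map(str, iterative_restriction_prg(64, rounds=6, seeds=list(range(13))))))

    The point of the "mild" restriction is visible in the loop: each round fixes only a constant fraction of the remaining bits, so a function of the unfixed bits is never simplified drastically, yet each round only needs a short small-bias seed, and the total seed length stays near-optimal.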

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
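    As background for the notions the survey discusses (standard definitions from the average-case-complexity literature, paraphrased by us rather than quoted from this survey): a distributional problem is a pair of a language and an ensemble of input distributions, and one common reading of "easy-on-average" is that for every polynomial error bound there is a polynomial-time algorithm meeting it:

        (L, \mathcal{D}), \quad \mathcal{D} = \{D_n\}_{n \ge 1}; \qquad \Pr_{x \sim D_n}\bigl[A(x) \neq L(x)\bigr] \le \frac{1}{p(n)} \ \text{for every polynomial } p,

    where each such $A$ runs in polynomial time. The completeness results mentioned above transfer easiness in this sense from one specific problem under the uniform distribution to all of NP under all samplable ensembles $\mathcal{D}$.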

    Hardness Amplification Proofs Require Majority


    Using Nondeterminism to Amplify Hardness

    We revisit the problem of hardness amplification in NP, as recently studied by O'Donnell (STOC '02). We prove that if NP has a balanced function $f$ such that any circuit of size $s(n)$ fails to compute $f$ on a $1/\mathrm{poly}(n)$ fraction of inputs, then NP has a function $f'$ such that any circuit of size $s'(n) = s(\sqrt{n})^{\Omega(1)}$ fails to compute $f'$ on a $1/2 - 1/s'(n)$ fraction of inputs. In particular: (1) if $s(n) = n^{\omega(1)}$, we amplify to hardness $1/2 - 1/n^{\omega(1)}$; (2) if $s(n) = 2^{n^{\Omega(1)}}$, we amplify to hardness $1/2 - 1/2^{n^{\Omega(1)}}$; (3) if $s(n) = 2^{\Omega(n)}$, we amplify to hardness $1/2 - 1/2^{\Omega(\sqrt{n})}$. These improve the results of O'Donnell, which only amplified to $1/2 - 1/\sqrt{n}$. O'Donnell also proved that no construction of a certain general form could amplify beyond $1/2 - 1/n$. We bypass this barrier by using both derandomization and nondeterminism in the construction of $f'$. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that $f$ is balanced are necessary for "black-box" hardness amplification procedures (such as ours).
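    To see how the third case follows from the general statement (a one-line check, filled in by us):

        s(n) = 2^{\Omega(n)} \;\Longrightarrow\; s'(n) = s(\sqrt{n})^{\Omega(1)} = \bigl(2^{\Omega(\sqrt{n})}\bigr)^{\Omega(1)} = 2^{\Omega(\sqrt{n})},

    so the amplified hardness $1/2 - 1/s'(n)$ becomes $1/2 - 1/2^{\Omega(\sqrt{n})}$, exactly case (3). The $\sqrt{n}$ loss reflects the form $s'(n) = s(\sqrt{n})^{\Omega(1)}$, i.e., $f'$ on length-$n$ inputs is only as hard as $f$ at input length about $\sqrt{n}$.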