Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to "boost" a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the "boosting" theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^(NP) and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
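The hashing techniques mentioned above can be illustrated by a toy sketch (a minimal illustration, not the paper's construction; the brute-force search below stands in for a single NP-oracle query, and all names are ours): pairwise-independent hashing turns approximate counting of witnesses into a small number of existence tests.

    import random
    from itertools import product

    def pairwise_hash(n, k):
        # h(x) = Ax + b over GF(2), a standard pairwise-independent family from n bits to k bits.
        A = [[random.getrandbits(1) for _ in range(n)] for _ in range(k)]
        b = [random.getrandbits(1) for _ in range(k)]
        return lambda x: tuple((sum(a & xi for a, xi in zip(row, x)) + bi) % 2
                               for row, bi in zip(A, b))

    def approx_log_count(witness_check, n, reps=15):
        # Returns k such that the number of accepted n-bit strings is roughly 2^k.
        def has_hashed_witness(k):
            hits = 0
            for _ in range(reps):
                h = pairwise_hash(n, k)
                if any(witness_check(x) and h(x) == (0,) * k
                       for x in product((0, 1), repeat=n)):   # brute force stands in for an NP query
                    hits += 1
            return hits > reps // 2
        k = 0
        while k <= n and has_hashed_witness(k):
            k += 1
        return k

    # Toy usage: strings in {0,1}^8 whose first three bits are all 1; the true count is 2^5.
    print(approx_log_count(lambda x: x[0] == x[1] == x[2] == 1, 8))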
Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels
A stochastic code is a pair of encoding and decoding procedures (Enc, Dec) in which the encoding procedure Enc receives a k-bit message m and a d-bit uniform string S. The code is (p,L)-list-decodable against a class C of "channel functions" from n bits to n bits if, for every message m and every channel C in C that induces at most pn errors, applying Dec to the "received word" C(Enc(m,S)) produces a list of at most L messages that contains m with high probability (over the choice of the uniform S). Note that neither the channel C nor the decoding algorithm Dec receives the random string S. The rate of a code is the ratio between the message length and the encoding length, and a code is explicit if Enc and Dec run in time poly(n).
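The definition above can be transcribed directly as an interface check (a sketch only; Enc, Dec and channel are caller-supplied placeholders, not constructions from this paper):

    import random
    from math import log2

    #   Enc(m, S): k-bit message m, d-bit uniform string S -> n-bit codeword
    #   Dec(w):    n-bit received word -> list of at most L candidate messages
    #   channel(c): n-bit word -> n-bit word differing from c in at most p*n positions (oblivious to S)
    def estimate_list_decoding_success(Enc, Dec, channel, m, d, n, p, L, trials=1000):
        successes = 0
        for _ in range(trials):
            S = tuple(random.getrandbits(1) for _ in range(d))
            codeword = Enc(m, S)
            received = channel(codeword)
            assert sum(a != b for a, b in zip(codeword, received)) <= p * n
            candidates = Dec(received)
            assert len(candidates) <= L
            successes += m in candidates
        return successes / trials          # close to 1 for a (p, L)-list-decodable code

    def rate(k, n):
        return k / n                       # message length over encoding length

    def binary_entropy(p):                 # H(p); the benchmark rate discussed below is 1 - H(p) - epsilon
        return 0.0 if p in (0, 1) else -p * log2(p) - (1 - p) * log2(1 - p)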
Guruswami and Smith (J. ACM, to appear) showed that for all constants 0 < p < 1/2, epsilon > 0 and c > 1 there are Monte-Carlo explicit constructions of stochastic codes with rate R >= 1-H(p)-epsilon that are (p, L=poly(1/epsilon))-list decodable for size n^c channels. Monte-Carlo means that the encoding and decoding need to share a public uniformly chosen poly(n^c) bit string Y, and the constructed stochastic code is (p,L)-list decodable with high probability over the choice of Y.
Guruswami and Smith posed the open problem of giving fully explicit (that is, not Monte-Carlo) codes with the same parameters, under hardness assumptions. In this paper we resolve this open problem, using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97).
Guruswami and Smith also asked for fully explicit unconditional constructions with the same parameters against O(log n)-space online channels. (These are channels that have space O(log n) and are allowed to read the input codeword in one pass.) We resolve this open problem as well.
Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when increasing the complexity of the channel. We give explicit constructions (with rate approaching 1-H(p) for every p <= p_0, for some constant p_0 > 0) for channels that are circuits of size 2^{n^{Omega(1/d)}} and depth d. Here, the running time of encoding and decoding is a fixed polynomial (that does not depend on d).
Our approach builds on the machinery developed by Guruswami and Smith, replacing some probabilistic arguments with explicit constructions. We also present a simplified and general approach that makes the reductions in the proof more efficient, so that we can handle weak classes of channels.
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}.
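For concreteness, the fooling requirement behind this statement can be written as follows (the error bound 1/m is one common convention, chosen here for illustration rather than quoted from the paper):

    \[
      \Big| \Pr_{s \in \{0,1\}^r}\big[C(G(s)) = 1\big] - \Pr_{y \in \{0,1\}^m}\big[C(y) = 1\big] \Big| \le \frac{1}{m}
      \quad \text{for every circuit } C \text{ of size at most } m.
    \]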
Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1))·log m, under qualitatively stronger assumptions.
We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor).
To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G_1:{0,1}^{(1+o(1))·log m} → {0,1}^{r_2 = m^{Ω(1)}} and G_2:{0,1}^{r_2} → {0,1}^m (see the composition sketch following this abstract). The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G_1 from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
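Schematically, the Chen-Tell style composition in the second item amounts to the following (a sketch; G1 and G2 are placeholders with the stated input/output lengths, not code from the paper):

    # Sketch of the two-step composition described above.
    #   G1: {0,1}^{(1+o(1))·log m} -> {0,1}^{r_2},  r_2 = m^{Omega(1)}   (from the hardness assumption)
    #   G2: {0,1}^{r_2}            -> {0,1}^m,      running in time m^{1+o(1)} (from one-way functions)
    def compose_prgs(G1, G2):
        # The composed generator inherits the short seed of G1 and the long output of G2.
        def G(seed_bits):
            intermediate = G1(seed_bits)   # length r_2
            return G2(intermediate)        # length m
        return G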
Mining Circuit Lower Bound Proofs for Meta-algorithms
We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for "easy" Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (with no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^n/n. We get non-trivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which lower bounds for the corresponding circuit class are known. These compression algorithms rely on the structural characterizations of "easy" functions, which are useful both for proving circuit lower bounds and for designing "meta-algorithms" (such as Circuit-SAT). For (de Morgan) formulas, such a structural characterization is provided by the "shrinkage under random restrictions" results [Sub61, Has98], strengthened to the "high-probability" version by [San10, IMZ12, KR13]. We give a new, simple proof of the "high-probability" version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [KR13] of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C; a similar implication is independently proved also by Williams [Wil13]. This complements Williams's result [Wil10] that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
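The shrinkage phenomenon driving the formula results can be illustrated with a small sketch (the representation and parameters are ours, purely for illustration, and not taken from the paper):

    import random

    # A de Morgan formula is ('var', i), ('not', f), or (op, f, g) with op in {'and', 'or'}.
    # A p-random restriction keeps each variable free with probability p and fixes it to a
    # uniform bit otherwise; after simplification, the expected leaf count drops roughly
    # like p^2 times the original size (the shrinkage exponent for de Morgan formulas is 2).

    def random_restriction(n, p):
        return {i: random.randint(0, 1) for i in range(n) if random.random() > p}

    def restrict(f, rho):
        if f[0] == 'var':
            return rho.get(f[1], f)                # fixed bit (0/1) or still a free variable
        if f[0] == 'not':
            g = restrict(f[1], rho)
            return 1 - g if isinstance(g, int) else ('not', g)
        a, b = restrict(f[1], rho), restrict(f[2], rho)
        if f[0] == 'and':
            if a == 0 or b == 0: return 0
            if a == 1: return b
            if b == 1: return a
        else:                                      # 'or'
            if a == 1 or b == 1: return 1
            if a == 0: return b
            if b == 0: return a
        return (f[0], a, b)

    def leaves(f):
        if isinstance(f, int):
            return 0
        if f[0] == 'var':
            return 1
        return leaves(f[1]) if f[0] == 'not' else leaves(f[1]) + leaves(f[2])

    # Usage sketch: f = ('and', ('var', 0), ('or', ('not', ('var', 1)), ('var', 2)))
    #               print(leaves(restrict(f, random_restriction(3, 0.5))))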
A strong direct product theorem for quantum query complexity
We show that quantum query complexity satisfies a strong direct product theorem. This means that computing k copies of a function with less than k times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in k. For a Boolean function f we also show an XOR lemma: computing the parity of k copies of f with less than k times the queries needed for one copy implies that the advantage over random guessing will be exponentially small.
We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity.
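Schematically, and with unspecified constants (the precise bounds are given in the paper, not here), the direct product statement and the XOR lemma above read:

    \[
      T \le \alpha\, k\, Q(f) \;\Longrightarrow\; \Pr\big[\text{all } k \text{ copies computed correctly with } T \text{ queries}\big] \le 2^{-\Omega(k)},
    \]
    \[
      T \le \alpha\, k\, Q(f) \;\Longrightarrow\; \Big|\Pr\big[\text{output} = f(x_1)\oplus\cdots\oplus f(x_k)\big] - \tfrac{1}{2}\Big| \le 2^{-\Omega(k)},
    \]

    where Q(f) denotes the bounded-error quantum query complexity of f, T is the total number of quantum queries used, and \alpha is a small universal constant.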
Weak derandomization of weak algorithms: explicit versions of Yao's lemma
A simple averaging argument shows that given a randomized algorithm A and a function f such that for every input x, Pr[A(x) = f(x)] ≥ 1 − ρ (where the probability is over the coin tosses of A), there exists a nonuniform deterministic algorithm B "of roughly the same complexity" such that Pr[B(x) = f(x)] ≥ 1 − ρ (where the probability is over a uniformly chosen input x). This implication is often referred to as "the easy direction of Yao's lemma" and can be thought of as "weak derandomization" in the sense that B is deterministic but only succeeds on most inputs. The implication follows as there exists a fixed value r' for the random coins of A such that "hardwiring r' into A" produces a deterministic algorithm B. However, this argument does not give a way to explicitly construct B.

In this paper we consider the task of proving uniform versions of the implication above. That is, how to explicitly construct a deterministic algorithm B when given a randomized algorithm A. We prove such derandomization results for several classes of randomized algorithms. These include: randomized communication protocols, randomized decision trees (here we improve a previous result by Zimand), randomized streaming algorithms, and randomized algorithms computed by polynomial-size constant-depth circuits.

Our proof uses an approach suggested by Goldreich and Wigderson and "extracts randomness from the input". We show that specialized (seedless) extractors can produce randomness that is, in some sense, not correlated with the input. Our analysis can be applied to any class of randomized algorithms as long as one can explicitly construct the appropriate extractor. Some of our derandomization results follow by constructing a new notion of seedless extractors that we call "extractors for recognizable distributions", which may be of independent interest.
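The contrast between the non-explicit averaging step and the explicit route described above can be sketched as follows (A, f and Ext are placeholders; this is a schematic, not the paper's construction):

    from itertools import product

    # Non-explicit direction of Yao's lemma: some fixed coin string r' does well on average
    # over inputs; here it is found by exhaustive search over all coin strings, which is
    # exactly the non-uniformity the paper seeks to remove.
    def best_fixed_coins(A, f, n_input_bits, n_coin_bits):
        inputs = list(product((0, 1), repeat=n_input_bits))
        def error(r):
            return sum(A(x, r) != f(x) for x in inputs) / len(inputs)
        return min(product((0, 1), repeat=n_coin_bits), key=error)

    # Explicit weak derandomization in the spirit described above: run A with coins extracted
    # from the input itself, B(x) = A(x, Ext(x)), where Ext is a seedless extractor tailored
    # to the class of algorithms (in the paper, an extractor for recognizable distributions).
    def derandomize_with_input_extractor(A, Ext):
        return lambda x: A(x, Ext(x))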
- …