Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
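In symbols, the boosting step can be summarized as follows (a paraphrase; the notation is ours, not the paper's):
\[
\exists f \in \mathrm{EXP} \text{ with no poly-size nondeterministic circuits}
\;\Longrightarrow\;
\exists g \in \mathrm{EXP} \text{ with no poly-size circuits making non-adaptive NP oracle queries.}
\]
The converse direction is immediate, since a single non-adaptive NP oracle query can simulate a nondeterministic circuit; this is why the assumptions used to derandomize AM collapse to a single equivalent assumption.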
Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas
We give the best known pseudorandom generators for two touchstone classes in unconditional derandomization: an ε-PRG for the class of size-M depth-d AC^0 circuits with seed length log(M)^{d+O(1)} · log(1/ε), and an ε-PRG for the class of S-sparse F_2 polynomials with seed length 2^{O(√(log S))} · log(1/ε). These results bring the state of the art for unconditional derandomization of these classes into sharp alignment with the state of the art for computational hardness for all parameter settings: improving on the seed lengths of either PRG would require breakthrough progress on longstanding and notorious circuit lower bounds.
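For reference, the notion being constructed is the standard one (a textbook definition, not quoted from the paper): G: {0,1}^r → {0,1}^n is an ε-PRG for a class of n-bit circuits if every circuit C in the class satisfies
\[
\Bigl|\Pr_{x \sim \{0,1\}^n}[C(x)=1] \;-\; \Pr_{s \sim \{0,1\}^r}[C(G(s))=1]\Bigr| \;\le\; \varepsilon,
\]
where r is the seed length.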
The key enabling ingredient in our approach is a new pseudorandom multi-switching lemma. We derandomize recently developed multi-switching lemmas, which are powerful generalizations of Håstad's switching lemma that deal with families of depth-two circuits. Our pseudorandom multi-switching lemma (a randomness-efficient algorithm for sampling restrictions that simultaneously simplify all circuits in a family) achieves the parameters obtained by the (full randomness) multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and Håstad [Hås14]. This optimality of our derandomization translates into the optimality (given current circuit lower bounds) of our PRGs for AC^0 circuits and sparse F_2 polynomials.
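As a point of reference, Håstad's classical switching lemma (which the multi-switching lemmas generalize) states roughly the following: if F is a width-w DNF and ρ is a random restriction that keeps each variable alive independently with probability p, then
\[
\Pr_{\rho}\bigl[F|_{\rho} \text{ requires decision-tree depth} \ge t\bigr] \;\le\; (O(pw))^{t}.
\]
The multi-switching variants bound the probability that an entire family of DNFs fails to simplify to a common shallow partial decision tree, and the pseudorandom version above replaces the fully random ρ with one sampled from a short seed.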
Quantified Derandomization of Linear Threshold Circuits
One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for TC^0, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for TC^0. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of TC^0 circuits of depth greater than two.
Our first main result is a quantified derandomization algorithm for TC^0 circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a TC^0 circuit C over n input bits with depth d and n^{1+exp(-d)} wires, runs in almost-polynomial time, and distinguishes between the case that C rejects at most 2^{n^{1-exp(-d)}} inputs and the case that C accepts at most 2^{n^{1-exp(-d)}} inputs. In fact, our algorithm works even when the circuit is a linear threshold circuit, rather than just a TC^0 circuit (i.e., C is a circuit with linear threshold gates, which are stronger than majority gates).
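For context, the quantified derandomization problem (in the sense of Goldreich and Wigderson), for a circuit class and a bound B(n) on the number of exceptional inputs, is the following promise problem (a standard formulation, not quoted from the paper): given a circuit C on n input bits, decide whether
\[
|C^{-1}(0)| \le B(n) \qquad \text{or} \qquad |C^{-1}(1)| \le B(n),
\]
under the promise that one of the two holds. Standard derandomization corresponds to the choice B(n) = 2^n/3.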
Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of TC^0, and would consequently imply that NEXP is not contained in TC^0. Specifically, if there exists a quantified derandomization algorithm that gets as input a TC^0 circuit with depth d and n^{1+O(1/d)} wires (rather than n^{1+exp(-d)} wires), runs in sufficiently small sub-exponential time, and distinguishes between the case that the circuit rejects at most B(n) inputs and the case that the circuit accepts at most B(n) inputs, for an appropriate sub-maximal bound B(n), then there exists a non-trivial (better than brute-force) algorithm for standard derandomization of TC^0.
Pseudorandomness and Average-Case Complexity via Uniform Reductions
Impagliazzo and Wigderson (FOCS 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result.
In this paper:
1. We obtain an optimal worst-case to average-case connection for EXP: if EXP is not a subset of BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t' = t^{Ω(1)}.
2. We exhibit a PSPACE-complete self-correctible and downward self-reducible problem (these notions are recalled below). This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
3. We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via "black-box" uniform reductions.
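The two structural properties in item 2 are standard; roughly (our phrasing, not the paper's):
- Downward self-reducible: there is a polynomial-time oracle algorithm that computes f on inputs of length n using oracle queries to f on inputs of length strictly less than n.
- Self-correctible: there is a probabilistic polynomial-time oracle algorithm that, given oracle access to any function g agreeing with f on at least a 1-δ fraction of length-n inputs (for some fixed δ > 0), computes f(x) correctly with high probability on every length-n input x.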
Derandomization with Minimal Memory Footprint
Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either: (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1.
Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. Central ingredients in all our constructions are hardness amplification reductions in logspace-uniform TC^0 that were not known before.
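As a concrete (and deliberately simplified) illustration of where the space overhead comes from, here is a minimal sketch of the generic approach of enumerating all seeds of a PRG and taking a majority vote; the toy_prg below is a hypothetical placeholder with no pseudorandomness guarantees, and the paper's actual constructions (targeted PRGs plus space-efficient evaluation) are far more involved.

```python
# Minimal sketch: derandomization by enumerating all PRG seeds and taking a
# majority vote.  Not the paper's construction; toy_prg is only a placeholder.

from typing import Callable, List

def toy_prg(seed: int, out_len: int) -> List[int]:
    """Placeholder PRG: deterministically expands an integer seed to out_len bits."""
    bits, state = [], seed
    for _ in range(out_len):
        state = (1103515245 * state + 12345) % (1 << 31)  # simple LCG step
        bits.append((state >> 16) & 1)
    return bits

def derandomize(randomized_alg: Callable[[str, List[int]], bool],
                x: str, seed_len: int, rand_len: int) -> bool:
    """Run randomized_alg on G(s) for every seed s and output the majority answer.

    If G fools the algorithm, the majority vote equals the answer under truly
    random coins.  The space used is the seed counter, plus the space to
    evaluate the PRG, plus the space of the algorithm itself; this sum is the
    source of the constant-factor space overhead discussed above.
    """
    votes = 0
    for seed in range(1 << seed_len):      # enumerate all 2^seed_len seeds
        coins = toy_prg(seed, rand_len)    # pseudorandom coin tosses
        if randomized_alg(x, coins):
            votes += 1
    return 2 * votes > (1 << seed_len)     # strict majority

if __name__ == "__main__":
    # Toy "randomized" test that happens to ignore its coins.
    alg = lambda x, coins: x.count("1") % 2 == 0
    print(derandomize(alg, "1011", seed_len=8, rand_len=16))
```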
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G: {0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}.
Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) constructed "extreme high-end PRGs" with seed length r = (1+o(1))·log m, under qualitatively stronger assumptions.
We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor).
To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G_1: {0,1}^{(1+o(1))·log m} → {0,1}^{r_2}, where r_2 = m^{Ω(1)}, and G_2: {0,1}^{r_2} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G_2 from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
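For concreteness, the two assumptions discussed above can be written side by side (our paraphrase):
\[
\text{high-end: } \exists\, 0<\beta<1<B \text{ and } f \in \mathrm{DTIME}(2^{B \cdot n}) \text{ that has no circuits of size } 2^{\beta \cdot n};
\]
\[
\text{extreme high-end: } \exists\, f \in \mathrm{DTIME}(2^{(1+o(1)) \cdot n}) \text{ that has no circuits of size } 2^{(1-o(1)) \cdot n}.
\]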
- …