Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption that is weaker than the assumption previously known to suffice.
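As a rough illustration of the hashing idea behind these primitives, here is a minimal Python sketch of Stockmeyer-style approximate counting with pairwise-independent hashing. It is our own simplification, not the construction in the paper: the names are ours, and exhaustive search stands in for the NP oracle.

import itertools, random

def approx_count(witness, n, trials=16):
    # Estimate |{x in {0,1}^n : witness(x)}| within a constant factor
    # via pairwise-independent hashing; brute force plays the NP oracle.
    def hash_hits_zero(m):
        # h(x) = Ax + b over GF(2) is a pairwise-independent hash to m bits.
        A = [[random.getrandbits(1) for _ in range(n)] for _ in range(m)]
        b = [random.getrandbits(1) for _ in range(m)]
        return any(
            witness(x) and all(
                (sum(A[i][j] * x[j] for j in range(n)) + b[i]) % 2 == 0
                for i in range(m))
            for x in itertools.product((0, 1), repeat=n))

    for m in range(n + 1):
        # Once 2^m noticeably exceeds the witness count, a random hash
        # usually leaves h^{-1}(0^m) empty of witnesses.
        if sum(hash_hits_zero(m) for _ in range(trials)) < trials // 2:
            return 2 ** m
    return 2 ** n

# Example: x1 XOR x2 = 1 on 6 variables has 32 witnesses; the estimate
# returned is correct up to a small constant factor.
print(approx_count(lambda x: x[0] ^ x[1] == 1, 6))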
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness
Polynomial approximations to boolean functions have led to many positive
results in computer science. In particular, polynomial approximations to the
sign function underlie algorithms for agnostically learning halfspaces, as well
as pseudorandom generators for halfspaces. In this work, we investigate the
limits of these techniques by proving inapproximability results for the sign
function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput.
2008) shows that halfspaces can be learned with respect to log-concave
distributions on ℝ^n in the challenging agnostic learning model. The
power of this algorithm relies on the fact that under log-concave
distributions, halfspaces can be approximated arbitrarily well by low-degree
polynomials. We ask whether this technique can be extended beyond log-concave
distributions, and establish a negative result. We show that polynomials of any
degree cannot approximate the sign function to within arbitrarily low error for
a large class of non-log-concave distributions on the real line, including
those with densities proportional to exp(-|x|^{0.99}).
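This contrast can be eyeballed numerically. The sketch below is our illustration, not the paper's proof technique: it fits low-degree polynomials to sign(x) by weighted least squares (a stand-in for the L1 regression of Kalai et al.) under a Gaussian density and under a heavier-tailed non-log-concave one. Since the grid is truncated, it only shows the slower error decay under heavy tails; the true obstruction lives at infinity.

import numpy as np

def sign_fit_error(density, degree):
    # Best weighted least-squares fit of a degree-`degree` polynomial to
    # sign(x) under `density`, on a truncated grid; returns weighted L2 error.
    grid = np.linspace(-10, 10, 4001)
    w = density(grid)
    w /= w.sum()
    # Chebyshev basis on the rescaled grid keeps the system well conditioned.
    V = np.polynomial.chebyshev.chebvander(grid / 10, degree)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], np.sign(grid) * sw, rcond=None)
    return np.sqrt(w @ (V @ coef - np.sign(grid)) ** 2)

gaussian = lambda x: np.exp(-x ** 2 / 2)        # log-concave
heavy    = lambda x: np.exp(-np.abs(x) ** 0.5)  # not log-concave
for d in (2, 8, 32):
    print(d, sign_fit_error(gaussian, d), sign_fit_error(heavy, d))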
Secondly, we investigate the derandomization of Chernoff-type concentration
inequalities. Chernoff-type tail bounds on sums of independent random variables
have pervasive applications in theoretical computer science. Schmidt et al.
(SIAM J. Discrete Math. 1995) showed that these inequalities can be established
for sums of random variables with only O(log(1/δ))-wise independence, for a
tail probability of δ. We show that their results are tight up to
constant factors.
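For context, the upper bound of Schmidt et al. goes through the k-th moment method; a compressed version of that standard argument (our summary, not text from the paper) reads:

% For k-wise independent X_1, ..., X_n in [0,1], Z = X_1 + ... + X_n,
% and mu = E[Z], Markov's inequality applied to an even moment gives
\[
  \Pr\bigl[\,|Z - \mu| \ge t\,\bigr]
    \;\le\; \frac{\mathbb{E}\bigl[(Z - \mu)^{k}\bigr]}{t^{k}},
  \qquad k \text{ even}.
\]
% Expanding (Z - mu)^k produces products of at most k of the X_i, so the
% expectation is determined by k-wise independence alone; choosing
% k = O(log(1/delta)) recovers a Chernoff-type tail bound of delta.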
These results rely on techniques from weighted approximation theory, which
studies how well functions on the real line can be approximated by polynomials
under various distributions. We believe that these techniques will have further
applications in other areas of computer science.
Pseudorandom generators and the BQP vs. PH problem
It is a longstanding open problem to devise an oracle relative to which BQP
does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural
conjecture about the capacity of the Nisan-Wigderson pseudorandom generator
[NW94] to fool AC^0, with MAJORITY as its hard function. Our conjecture is
essentially that the loss due to the hybrid argument (which is a component of
the standard proof from [NW94]) can be avoided in this setting. This is a
question that has been asked previously in the pseudorandomness literature
[BSW03]. We then make three main contributions: (1) We show that our conjecture
implies the existence of an oracle relative to which BQP is not in the PH. This
entails giving an explicit construction of unitary matrices, realizable by
small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a
simple framework (generalizing the setting of Aaronson [A10]) in which any
efficiently quantumly computable unitary gives rise to a distribution that can
be distinguished from the uniform distribution by an efficient quantum
algorithm. When applied to the unitaries we construct, this framework yields a
problem that can be solved quantumly, and which forms the basis for the desired
oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our
conjecture; our conjecture is thus formally easier to prove. The GLN conjecture
was recently proved false for depth greater than 2 [A10a], but it remains open
for depth 2. If true, the depth-2 version of either conjecture would imply an
oracle relative to which BQP is not in AM, which is itself an outstanding open
problem. Taken together, our results have the following interesting
interpretation: they give an instantiation of the Nisan-Wigderson generator
that can be broken by quantum computers, but not by the relevant modes of
classical computation, if our conjecture is true.
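For readers who want the object in hand, here is a minimal Python sketch of the Nisan-Wigderson construction itself, with a toy design of our own choosing; the conjecture concerns instantiating the hard function with MAJORITY and fooling AC^0.

import itertools

def nw_generator(f, design, seed):
    # Nisan-Wigderson generator: output bit i applies the hard function f
    # to the seed positions in design[i]. A proper design uses equal-size
    # sets with small pairwise intersections; the one below is only a toy.
    return [f([seed[j] for j in S]) for S in design]

majority = lambda bits: int(sum(bits) > len(bits) // 2)

# Toy design: 4-subsets of an 8-bit seed pairwise intersect in at most 3
# positions (real designs need far smaller intersections to get stretch).
design = list(itertools.combinations(range(8), 4))[:8]
seed = [1, 0, 1, 1, 0, 0, 1, 0]
print(nw_generator(majority, design, seed))  # 8 output bits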
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}.
Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1))·log m, under qualitatively stronger assumptions.
We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor).
To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G_1:{0,1}^{(1+o(1))·log m} → {0,1}^{r_2 = m^{Ω(1)}} and G_2:{0,1}^{r_2} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G_2 from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
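The parameter arithmetic behind the Chen-Tell composition is worth spelling out (our paraphrase of the composed seed lengths, not text from the paper):

% G_1 : {0,1}^{(1+o(1)) log m} -> {0,1}^{r_2}, with r_2 = m^{Omega(1)},
% stretches an extreme high-end seed slightly; G_2 : {0,1}^{r_2} -> {0,1}^m
% does the heavy stretching. Composing,
\[
  G_2 \circ G_1 : \{0,1\}^{(1+o(1))\log m} \longrightarrow \{0,1\}^{m},
\]
% so the composition inherits the extreme high-end seed length of G_1, while
% the running time is dominated by G_2, which must run in time m^{1+o(1)}.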
Better Pseudorandom Generators from Milder Pseudorandom Restrictions
We present an iterative approach to constructing pseudorandom generators,
based on the repeated application of mild pseudorandom restrictions. We use
this template to construct pseudorandom generators for combinatorial rectangles
and read-once CNFs and a hitting set generator for width-3 branching programs,
all of which achieve near-optimal seed-length even in the low-error regime: We
get seed-length Õ(log(n/ε)) for error ε. Previously, only
constructions with seed-length O(log^{3/2} n) or O(log^2 n) were known for
these classes with polynomially small error.
The (pseudo)random restrictions we use are milder than those typically used
for proving circuit lower bounds in that we only set a constant fraction of the
bits at a time. While such restrictions do not simplify the functions
drastically, we show that they can be derandomized using small-bias spaces.
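To make the template concrete, here is a toy Python sketch of one round of such a mild restriction. The choices below are truly random; the point of the paper is that both can be derandomized with small-bias spaces.

import random

def mild_restriction(n, p=0.5):
    # Each coordinate stays free with probability p and is otherwise fixed
    # to a random bit; only a constant fraction of the bits is set per round.
    return {i: random.getrandbits(1) for i in range(n) if random.random() >= p}

def restrict(f, n, rho):
    # Return f restricted by rho, as a function of the free coordinates.
    free = [i for i in range(n) if i not in rho]
    def g(ys):
        x = dict(rho)
        x.update(zip(free, ys))
        return f([x[i] for i in range(n)])
    return g, len(free)

# A read-once CNF on 12 variables; iterating the restriction O(log n) times
# leaves O(1) free variables in expectation.
f = lambda x: all(any(x[3 * i: 3 * i + 3]) for i in range(len(x) // 3))
g, m = restrict(f, 12, mild_restriction(12))
print(m, g([1] * m))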
Algorithms and lower bounds for de Morgan formulas of low-communication leaf gates
The class FORMULA[s] ∘ G consists of Boolean functions
computable by size-s de Morgan formulas whose leaves are any Boolean
functions from a class G. We give lower bounds and (SAT, Learning,
and PRG) algorithms for FORMULA[n^{1.99}] ∘ G, for classes G
of functions with low communication complexity. Let
R^{(k)}(G) be the maximum k-party NOF randomized communication
complexity of G. We show:
(1) The Generalized Inner Product function GIP^k_n cannot be computed in
FORMULA[s] ∘ G on more than 1/2 + ε fraction of inputs
for s = o(n^2 / (k · 4^k · R^{(k)}(G) · log(n/ε) · log(1/ε))^2). As a
corollary, we get an average-case lower bound for GIP^k_n
against FORMULA[n^{1.99}] ∘ PTF^{k-1} (GIP is spelled out in the sketch
after this abstract).
(2) There is a PRG of seed length n/2 + O(√s · R^{(2)}(G) · log(s/ε) · log(1/ε))
that ε-fools FORMULA[s] ∘ G. For
FORMULA[s] ∘ LTF, we get the better seed length
O(n^{1/2} · s^{1/4} · log(n) · log(n/ε)). This gives the first
non-trivial PRG (with seed length o(n)) for intersections of n half-spaces
in the regime where ε ≤ 1/n.
(3) There is a randomized 2^{n-t}-time #SAT algorithm for
FORMULA[s] ∘ G, where t = Ω(n / (√s · log(s) · R^{(2)}(G)))^{1/2}.
In particular, this implies a nontrivial
#SAT algorithm for FORMULA[n^{1.99}] ∘ LTF.
(4) The Minimum Circuit Size Problem is not in FORMULA[n^{1.99}] ∘ XOR.
On the algorithmic side, we show that FORMULA[n^{1.99}] ∘ XOR can be
PAC-learned in time 2^{O(n/log n)}.
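As a concrete anchor for the notation in item (1), a minimal Python definition of the Generalized Inner Product function (the standard definition, not code from the paper):

def gip(rows):
    # GIP^k_n on a k x n Boolean matrix (one row per NOF party): XOR over
    # the n columns of the AND of each column's k bits.
    k, n = len(rows), len(rows[0])
    return sum(all(rows[i][j] for i in range(k)) for j in range(n)) % 2

# k = 3 parties, n = 4 columns; columns 0 and 2 are all-ones, so GIP = 0.
print(gip([[1, 0, 1, 1],
           [1, 1, 1, 0],
           [1, 0, 1, 0]]))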
- …