On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}.
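For reference, a standard formalization of "fools" (a textbook definition; the abstract leaves the error parameter ε implicit): G ε-fools circuits of size m if for every circuit C of size m,

    | Pr_{s ← {0,1}^r}[C(G(s)) = 1] − Pr_{x ← {0,1}^m}[C(x) = 1] | ≤ ε,

where ε is typically a small constant such as 1/10.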
Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) constructed "extreme high-end PRGs" with seed length r = (1+o(1))·log m, under qualitatively stronger assumptions.
We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor).
To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and may explain why PRG constructions are difficult to design.
- The construction of Chen and Tell composes two PRGs: G_1:{0,1}^{(1+o(1))·log m} → {0,1}^{r_2 = m^{Ω(1)}} and G_2:{0,1}^{r_2} → {0,1}^m (a seed-length accounting for this composition is sketched below). The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G_1 from the extreme high-end hardness assumption.
The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
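For intuition about the Chen-Tell composition above, here is the seed-length and running-time accounting, spelled out from the parameters in the abstract (the arithmetic is ours):

    G(s) = G_2(G_1(s))  for s ∈ {0,1}^{(1+o(1))·log m},

so the composed generator G:{0,1}^{(1+o(1))·log m} → {0,1}^m inherits the extreme high-end seed length of G_1 and the output length of G_2. Its running time is time(G_1) + time(G_2); since G_2 runs in time m^{1+o(1)} and G_1 only has to stretch (1+o(1))·log m bits to r_2 = m^{Ω(1)} bits rather than all the way to m, the composition stays fast enough for fast derandomization.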
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances (a toy illustration for Max-Clique is sketched below). Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems like Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing Nash equilibria.
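To make "direct product feasible" concrete, here is the toy sketch for Max-Clique promised above (our illustration, not the paper's construction; function names are hypothetical). It aggregates k instances by a graph join: keep each instance's edges and connect every cross-instance pair of vertices. A set is then a clique of the join exactly when its restriction to each instance is a clique there, so a maximum clique of the join decodes into maximum cliques of all k instances.

    from itertools import combinations

    def join_instances(graphs):
        # Aggregate k Max-Clique instances, each given as (#vertices, edge set)
        # with vertices 0..n_i-1, into one instance by a "join": relabel the
        # instances onto disjoint vertex ranges, keep their edges, and connect
        # every pair of vertices lying in different instances.
        vertices, edges, boundaries, offset = [], set(), [], 0
        for n_i, edges_i in graphs:
            vertices.extend(range(offset, offset + n_i))
            edges.update((u + offset, v + offset) for u, v in edges_i)
            boundaries.append((offset, offset + n_i))
            offset += n_i
        # join step: all cross-instance pairs become edges
        for (lo1, hi1), (lo2, hi2) in combinations(boundaries, 2):
            edges.update((u, v) for u in range(lo1, hi1)
                                for v in range(lo2, hi2))
        return vertices, edges, boundaries

    def split_solution(clique, boundaries):
        # Efficient decoding: a maximum clique of the join restricts to a
        # maximum clique of every instance (all cross pairs are adjacent by
        # construction), which is exactly what the definition demands.
        return [[v - lo for v in clique if lo <= v < hi]
                for lo, hi in boundaries]

    # Example: an edge on 3 vertices, plus a triangle on 3 vertices.
    V, E, B = join_instances([(3, {(0, 1)}), (3, {(0, 1), (1, 2), (0, 2)})])
    print(split_solution([0, 1, 3, 4, 5], B))   # -> [[0, 1], [0, 1, 2]]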
A PCP Characterization of AM
We introduce a 2-round stochastic constraint-satisfaction problem, and show
that its approximation version is complete for (the promise version of) the
complexity class AM. This gives a `PCP characterization' of AM analogous to the
PCP Theorem for NP. Similar characterizations have been given for higher levels
of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the
result for AM might be of particular significance for attempts to derandomize
this class.
To test this notion, we pose some `Randomized Optimization Hypotheses'
related to our stochastic CSPs that (in light of our result) would imply
collapse results for AM. Unfortunately, the hypotheses appear over-strong, and
we present evidence against them. In the process we show that, if some language
in NP is hard-on-average against circuits of size 2^{Omega(n)}, then there
exist hard-on-average optimization problems of a particularly elegant form.
All our proofs use a powerful form of PCPs known as Probabilistically
Checkable Proofs of Proximity, and demonstrate their versatility. We also use
known results on randomness-efficient soundness- and hardness-amplification. In
particular, we make essential use of the Impagliazzo-Wigderson generator; our
analysis relies on a recent Chernoff-type theorem for expander walks.
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different "degrees" of average-case
complexity. We discuss some of these "hardness amplification" results.
A Nearly Optimal Lower Bound on the Approximate Degree of AC^0
The approximate degree of a Boolean function f: {-1,1}^n → {-1,1} is the least
degree of a real polynomial that approximates f pointwise to error at most 1/3.
We introduce a generic method for increasing the approximate degree of a given
function, while preserving its computability by constant-depth circuits.
Specifically, we show how to transform any Boolean function f with approximate
degree d into a function F on O(n·polylog(n)) variables with approximate degree
at least D = Ω(n^{1/3}·d^{2/3}). In particular, if d = n^{1-Ω(1)}, then D is
polynomially larger than d. Moreover, if f is computed by a polynomial-size
Boolean circuit of constant depth, then so is F.
By recursively applying our transformation, for any constant δ > 0 we exhibit
an AC^0 function of approximate degree Ω(n^{1-δ}). This improves over the best
previous lower bound of Ω̃(n^{2/3}) due to Aaronson and Shi (J. ACM 2004), and
nearly matches the trivial upper bound of n that holds for any function. Our
lower bounds also apply to (quasipolynomial-size) DNFs of polylogarithmic width.
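For intuition about the recursion (the arithmetic is ours, and it ignores the polylog(n) blow-up in the number of variables that the formal argument must track): one application maps approximate degree n^e to Ω(n^{1/3 + (2/3)e}), so the gap to exponent 1 shrinks geometrically:

    1 − e_{k+1} = (2/3)·(1 − e_k),  hence  1 − e_k = (2/3)^k·(1 − e_0).

Starting from, say, AND, whose approximate degree is Θ(n^{1/2}) (so e_0 = 1/2), after k = O(log(1/δ)) applications the exponent exceeds 1 − δ, which is where the Ω(n^{1-δ}) bound for constant δ comes from.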
We describe several applications of these results. We give:
* For any constant δ > 0, an Ω(n^{1-δ}) lower bound on the quantum
communication complexity of a function in AC^0.
* A Boolean function f with approximate degree at least C(f)^{2-o(1)},
where C(f) is the certificate complexity of f. This separation is optimal
up to the o(1) term in the exponent.
* Improved secret sharing schemes with reconstruction procedures in AC^0.