On Nonadaptive Security Reductions of Hitting Set Generators
One of the central open questions in the theory of average-case complexity is to establish the equivalence between the worst-case and average-case complexity of the Polynomial-time Hierarchy (PH). One general approach is to show that there exists a PH-computable hitting set generator whose security is based on some NP-hard problem. We present the limits of such an approach, by showing that there exists no exponential-time-computable hitting set generator whose security can be proved by using a nonadaptive randomized polynomial-time reduction from any problem outside AM ∩ coAM, which significantly improves the previous upper bound BPP^NP of Gutfreund and Vadhan (RANDOM/APPROX 2008 [Gutfreund and Vadhan, 2008]). In particular, any security proof of a hitting set generator based on some NP-hard problem must use either an adaptive or non-black-box reduction (unless the polynomial-time hierarchy collapses). To the best of our knowledge, this is the first result that shows limits of black-box reductions from an NP-hard problem to some form of a distributional problem in DistPH.
Based on our results, we argue that the recent worst-case to average-case reduction of Hirahara (FOCS 2018 [Hirahara, 2018]) is inherently non-black-box, without relying on any unproven assumptions. On the other hand, combining the non-black-box reduction with our simulation technique of black-box reductions, we exhibit the existence of a "non-black-box selector" for GapMCSP, i.e., an efficient algorithm that solves GapMCSP given, as advice, two circuits one of which is guaranteed to compute GapMCSP.
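For reference, a standard definition of a hitting set generator (notation is ours, not necessarily the paper's): a family of functions G_n : {0,1}^{s(n)} → {0,1}^n with s(n) < n is a hitting set generator against a class 𝒞 of dense sets if its range intersects every such set:

```latex
\[
\forall D \in \mathcal{C}:\quad
\Pr_{x \in \{0,1\}^n}\big[x \in D\big] \;\ge\; \tfrac{1}{2}
\;\Longrightarrow\;
\exists z \in \{0,1\}^{s(n)}:\; G_n(z) \in D .
\]
```

A black-box security reduction turns any adversary violating this property into a decision procedure for the underlying hard problem; the result above says that if such a reduction is nonadaptive, the underlying problem must lie in AM ∩ coAM.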
A Relativization Perspective on Meta-Complexity
Meta-complexity studies the complexity of computational problems about complexity theory, such as the Minimum Circuit Size Problem (MCSP) and its variants. We show that a relativization barrier applies to many important open questions in meta-complexity. We give relativized worlds where:
1) MCSP can be solved in deterministic polynomial time, but the search version of MCSP cannot be solved in deterministic polynomial time, even approximately. In contrast, Carmosino, Impagliazzo, Kabanets, Kolokolova [CCC'16] gave a randomized approximate search-to-decision reduction for MCSP with a relativizing proof.
2) The complexities of MCSP[2^{n/2}] and MCSP[2^{n/4}] are different, in both worst-case and average-case settings. Thus the complexity of MCSP is not "robust" to the choice of the size function.
3) Levin’s time-bounded Kolmogorov complexity Kt(x) can be approximated to a factor (2+ε) in polynomial time, for any ε > 0.
4) Natural proofs do not exist, and neither do auxiliary-input one-way functions. In contrast, Santhanam [ITCS'20] gave a relativizing proof that the non-existence of natural proofs implies the existence of one-way functions under a conjecture about optimal hitting sets.
5) DistNP does not reduce to GapMINKT by a family of "robust" reductions. This presents a technical barrier for solving a question of Hirahara [FOCS'20].
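Levin's time-bounded Kolmogorov complexity Kt, referenced in item 3 above, is standardly defined as (notation ours, for a fixed universal machine U):

```latex
\[
\mathrm{Kt}(x) \;=\; \min\big\{\, |p| + \log t \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\big\}.
\]
```

The log t term charges programs for their running time, so Kt is computable by exhaustive search, unlike plain Kolmogorov complexity.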
Kolmogorov Complexity Characterizes Statistical Zero Knowledge
We show that a decidable promise problem has a non-interactive statistical zero-knowledge proof system if and only if it is randomly reducible via an honest polynomial-time reduction to a promise problem for Kolmogorov-random strings, with a superlogarithmic additive approximation term. This extends recent work by Saks and Santhanam (CCC 2022). We build on this to give new characterizations of Statistical Zero Knowledge (SZK), as well as the related classes NISZK_L and SZK_L.
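The underlying notions are standard (a sketch in our notation, which may differ from the paper's): plain Kolmogorov complexity, and randomness up to an additive slack term s,

```latex
\[
K(x) \;=\; \min\{\, |p| : U(p) = x \,\},
\qquad
x \text{ is } s\text{-random} \;\iff\; K(x) \ge |x| - s,
\]
```

with the characterization above taking the slack s to be superlogarithmic in |x|.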
Errorless Versus Error-Prone Average-Case Complexity
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions.
First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH.
Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P.
Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators.
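The two hardness notions being compared are, in standard form (our notation): an errorless heuristic may answer "don't know" (⊥) on a small fraction of inputs but never errs, while an error-prone heuristic may simply be wrong on a small fraction:

```latex
\[
\text{errorless:}\quad A(x) \in \{L(x), \bot\} \ \text{for all } x,
\qquad \Pr_{x \sim D_n}\big[A(x) = \bot\big] \le \delta(n);
\]
\[
\text{error-prone:}\quad \Pr_{x \sim D_n}\big[A(x) \ne L(x)\big] \le \delta(n).
\]
```

Every errorless heuristic is trivially error-prone (answer arbitrarily on ⊥); the open question is the converse direction.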
Excluding PH Pessiland
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string.
In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudo-random generators do not exist. We unconditionally rule out PH-Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators.
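The pseudorandom generators in question satisfy the standard indistinguishability condition (our notation), with the generator itself computable in PH:

```latex
\[
\Big|\, \Pr_{z \in \{0,1\}^{s(n)}}\big[D(G(z)) = 1\big]
\;-\; \Pr_{x \in \{0,1\}^{n}}\big[D(x) = 1\big] \,\Big|
\;\le\; \epsilon(n)
\quad \text{for every efficient distinguisher } D.
\]
```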
Hardness of KT Characterizes Parallel Cryptography
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
- We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e. NC⁰). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC⁰-computable one-way functions exist if and only if logspace-computable one-way functions exist.
- Inspired by the above result, we present randomized average-case reductions among the NC¹-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity.
- We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT. We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
- We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution.
- Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
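The Perebor ("brute-force search") hypotheses assert, roughly, that for suitable problems nothing beats exhaustively scanning all 2^n candidates. A minimal toy sketch of the search pattern (the function toy_f and all names here are ours, purely illustrative and in no way a secure one-way function):

```python
def brute_force_invert(f, y, n):
    """Perebor: exhaustively scan all 2**n inputs to invert f at y.

    Returns some preimage x with f(x) == y, or None if there is none.
    Perebor-style hypotheses say that for a suitable f this ~2**n scan
    is essentially unavoidable, which is exactly the hardness a
    (weak) one-way function needs.
    """
    for x in range(2 ** n):
        if f(x) == y:
            return x
    return None


P = 251  # small fixed prime, toy parameter


def toy_f(x):
    # Illustrative candidate only; easy to invert algebraically.
    return (x * (x + 1)) % P


n = 8
y = toy_f(123)
x = brute_force_invert(toy_f, y, n)
assert x is not None and toy_f(x) == y
```

The near-optimal hardness 2^{n-o(n)} in the abstract means: no inverter does substantially better than this loop.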
Average-Case Hardness of Proving Tautologies and Theorems
We consolidate two widely believed conjectures about tautologies -- no optimal proof system exists, and most require superpolynomial size proofs in any system -- into a p-isomorphism-invariant condition satisfied by all paddable coNP-complete languages or none. The condition is: for any Turing machine (TM) M accepting the language, P-uniform input families requiring superpolynomial time by M exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in AvgP) for a distribution applying non-negligible weight to the hard families.
The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding "string x is Kolmogorov random" are true but unprovable with positive density in a finitely axiomatized theory (Calude and Jürgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of P-uniform families of tautologies encoding "there is no proof of size at most t showing that string x is Kolmogorov random". This implies the above condition.
The conjecture suggests that there is no optimal proof system because
undecidable theories help prove tautologies and do so more efficiently as
axioms are added, and that constructing hard tautologies seems difficult
because it is impossible to construct Kolmogorov random strings. Similar
conjectures that computational blind spots are manifestations of
noncomputability would resolve other open problems.
On the existence of strong proof complexity generators
Cook and Reckhow (1979) pointed out that NP is not closed under complementation iff there is no propositional proof system that admits polynomial size proofs of all tautologies. The theory of proof complexity generators aims at constructing sets of tautologies hard for strong and possibly for all proof systems. We focus on a conjecture from K.2004 in foundations of the theory that there is a proof complexity generator hard for all proof systems. This can be equivalently formulated (for p-time generators) without reference to proof complexity notions as follows:
* There exists a p-time function g stretching each input by one bit such that its range intersects all infinite NP sets.
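A toy illustration of the counting fact behind the conjecture (the stretching function g below, and its name, are ours and purely illustrative): any function from n bits to n+1 bits covers at most half of its codomain, and each string outside the range yields a true statement "y is not in the range of g", the source of candidate hard tautologies.

```python
from itertools import product


def g(bits):
    """Toy stretching function {0,1}^n -> {0,1}^(n+1): append the parity bit.

    Purely illustrative; the conjecture concerns arbitrary p-time functions.
    """
    return bits + (sum(bits) % 2,)


n = 6
inputs = list(product((0, 1), repeat=n))
rng = {g(x) for x in inputs}

# Counting argument: |range(g)| <= 2^n, so at least half of {0,1}^(n+1)
# lies outside the range; those strings y make "y not in rng(g)" true.
outside = [y for y in product((0, 1), repeat=n + 1) if y not in rng]
assert len(rng) <= 2 ** n
assert len(outside) >= 2 ** n
```

The hardness question is whether any proof system can certify membership in `outside` by polynomial size proofs for infinitely many y.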
We consider several facets of this conjecture, including its links to bounded arithmetic (witnessing and independence results), to time-bounded Kolmogorov complexity, to the feasible disjunction property of propositional proof systems, and to the complexity of proof search. We argue that a specific gadget generator from K.2009 is a good candidate for such a generator. We define a new hardness property of generators, the ⋁-hardness, and show that one specific gadget generator is the ⋁-hardest (w.r.t. any sufficiently strong proof system). We define the class of feasibly infinite NP sets and show, assuming a hypothesis from circuit complexity, that the conjecture holds for all feasibly infinite NP sets.
Comment: preliminary version August 2022, revised July 202