A New Approximate Min-Max Theorem with Applications in Cryptography
We propose a novel proof technique that can be applied to attack a broad
class of problems in computational complexity, when switching the order of
universal and existential quantifiers is helpful. Our approach combines the
standard min-max theorem and convex approximation techniques, offering
quantitative improvements over the standard way of using min-max theorems as
well as more concise and elegant proofs.
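The quantifier-switching phenomenon the abstract refers to can be seen in a tiny zero-sum game. The sketch below (the game matrix and grid resolution are illustrative choices, not from the paper) shows that with pure strategies the order of max and min matters, while von Neumann's theorem restores equality once mixed strategies are allowed.

```python
from fractions import Fraction

# Payoff matrix for matching pennies (row player maximizes).
A = [[1, -1],
     [-1, 1]]

# Pure-strategy values: the order of quantifiers matters.
maxmin_pure = max(min(row) for row in A)                                 # max_i min_j
minmax_pure = min(max(A[i][j] for i in range(2)) for j in range(2))      # min_j max_i

def payoff(p, q):
    # Expected payoff when row plays mixed strategy p, column plays q.
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

# Mixed-strategy max-min by brute force over a probability grid:
# with mixing, max_p min_q equals min_q max_p (von Neumann).
grid = [Fraction(k, 100) for k in range(101)]
maxmin_mixed = max(min(payoff((p, 1 - p), (q, 1 - q)) for q in grid) for p in grid)

print(maxmin_pure, minmax_pure, maxmin_mixed)  # -1 1 0
```

With pure strategies the two values differ (-1 vs. 1); allowing mixed strategies closes the gap at the game value 0, which is exactly the regime where min-max theorems apply.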
A Uniform Min-Max Theorem with Applications in Cryptography
We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum games: specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best-responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99), with the advantage that it runs in poly(n) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over {0, 1}^n. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
We describe several applications, including a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95); showing the impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting); and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)).
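The best-response dynamic described above can be sketched with a standard multiplicative-weights loop in the style of Freund and Schapire. This is only an illustration of the generic technique, not the paper's actual algorithm; the game, step size, and iteration count are arbitrary choices.

```python
import math

# Row player (maximizer) runs multiplicative weights; each round the
# column player best-responds. Averaging those best responses yields a
# near-optimal mixed strategy for the column player.
A = [[0, 1, -1],   # a rock-paper-scissors-like zero-sum game, value 0
     [-1, 0, 1],
     [1, -1, 0]]
n, m = len(A), len(A[0])
eps = 0.1                      # step size (illustrative)
T = 2000                       # number of rounds (illustrative)
w = [1.0] * n                  # row player's weights
responses = [0.0] * m          # tally of the column player's best responses

for _ in range(T):
    total = sum(w)
    p = [wi / total for wi in w]
    # Column player's best response: minimize the row player's payoff.
    col_payoffs = [sum(p[i] * A[i][j] for i in range(n)) for j in range(m)]
    j_star = min(range(m), key=lambda j: col_payoffs[j])
    responses[j_star] += 1.0
    # Multiplicative update for the row player against that response.
    for i in range(n):
        w[i] *= math.exp(eps * A[i][j_star])

q = [r / T for r in responses]  # averaged best responses: near-optimal for player 2
value = max(sum(q[j] * A[i][j] for j in range(m)) for i in range(n))
print(value)  # near the game value 0
```

The standard regret bound guarantees `value` is within roughly eps + ln(n)/(eps*T) of the true game value, which is the sense in which the averaged strategy is near-optimal.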
Characterizing Pseudoentropy and Simplifying Pseudorandom Generator Constructions
We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).
Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has "next-bit pseudoentropy" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al. yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support "local list-decoding" (as in the Goldreich--Levin hardcore predicate, STOC '89).
With an additional idea, we also show how to improve the seed length of the pseudorandom generator to Õ(n^3), compared to O(n^4) in the construction of Haitner et al.
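For intuition about the "extractor-grade" hash functions the abstract says suffice, here is a minimal sketch of a classic 2-universal family; the parameters are illustrative, and this is not the paper's construction.

```python
import random

# The classic 2-universal family h_{a,b}(x) = ((a*x + b) mod p) mod m.
# 2-universality (pairwise collision probability about 1/m) is what makes
# such families usable as randomness extractors via the Leftover Hash Lemma.
p = 10007          # a prime larger than the input universe (illustrative)
m = 16             # output range (illustrative)

def sample_hash():
    a = random.randrange(1, p)
    b = random.randrange(p)
    return lambda x: ((a * x + b) % p) % m

# Empirically check universality: Pr_h[h(x) == h(y)] is about 1/m for x != y.
random.seed(0)
x, y = 42, 1337
trials = 20000
collisions = 0
for _ in range(trials):
    h = sample_hash()
    if h(x) == h(y):
        collisions += 1
print(collisions / trials)  # roughly 1/16 = 0.0625
```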
Immunity and Pseudorandomness of Context-Free Languages
We discuss the computational complexity of context-free languages,
concentrating on two well-known structural properties---immunity and
pseudorandomness. An infinite language is REG-immune (resp., CFL-immune) if it
contains no infinite subset that is a regular (resp., context-free) language.
We prove that (i) there is a context-free REG-immune language outside REG/n and
(ii) there is a REG-bi-immune language that can be computed deterministically
using logarithmic space. We also show that (iii) there is a CFL-simple set,
where a CFL-simple language is an infinite context-free language whose
complement is CFL-immune. Analogously to REG-immunity, a REG-primeimmune
language has no polynomially dense subsets that are also regular. We further
prove that (iv) there is a context-free language that is REG/n-bi-primeimmune.
Concerning pseudorandomness of context-free languages, we show that (v) CFL
contains REG/n-pseudorandom languages. Finally, we prove that (vi) against
REG/n, there exists an almost 1-1 pseudorandom generator computable in
nondeterministic pushdown automata equipped with a write-only output tape and
(vii) against REG, there is no almost 1-1 weakly pseudorandom generator
computable deterministically in linear time by a single-tape Turing machine.
Comment: A4, 23 pages, 10 pt. A complete revision of the initial version that was posted in February 200
Hardness of KT Characterizes Parallel Cryptography
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
- We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e. NC^0). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC^0-computable one-way functions exist if and only if logspace-computable one-way functions exist.
- Inspired by the above result, we present randomized average-case reductions among the NC^1-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity.
- We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT. We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
- We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution.
- Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
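For intuition, "Perebor" (Russian for exhaustive search) is just the brute-force baseline that the Perebor Hypotheses conjecture cannot be substantially beaten. The sketch below inverts a function by trying all 2^n inputs; sha256 is merely a stand-in for an arbitrary hard-to-invert function and is not tied to the paper's constructions.

```python
from hashlib import sha256
from itertools import product

def f(bits: str) -> str:
    # Stand-in for some fixed function we want to invert.
    return sha256(bits.encode()).hexdigest()[:8]

def perebor_invert(target: str, n: int):
    # Exhaustive search: worst case 2^n evaluations of f. The Strong
    # Perebor Hypothesis (for the relevant measure) asserts this cost
    # is essentially unavoidable, i.e. hardness near 2^{n-o(n)}.
    for cand in product("01", repeat=n):
        s = "".join(cand)
        if f(s) == target:
            return s
    return None

n = 12
secret = "011010011100"
preimage = perebor_invert(f(secret), n)
print(preimage is not None and f(preimage) == f(secret))  # True
```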
A Uniform Min-Max Theorem and Characterizations of Computational Randomness
This thesis develops several tools and techniques using ideas from information theory, optimization, and online learning, and applies them to a number of highly related fundamental problems in complexity theory, pseudorandomness theory, and cryptography.
Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)
Computational pseudorandomness studies the extent to which a random variable
looks like the uniform distribution according to a class of tests.
Computational entropy generalizes computational pseudorandomness by
studying the extent to which a random variable looks like a \emph{high entropy}
distribution. There are different formal definitions of computational entropy
with different advantages for different applications. Because of this, it is of
interest to understand when these definitions are equivalent.
We consider three notions of computational entropy which are known to be
equivalent when the test class is closed under taking majorities.
This equivalence constitutes (essentially) the so-called \emph{dense model
theorem} of Green and Tao (and later made explicit by Tao-Ziegler, Reingold et
al., and Gowers). The dense model theorem plays a key role in Green and Tao's
proof that the primes contain arbitrarily long arithmetic progressions and has
since been connected to a surprisingly wide range of topics in mathematics and
computer science, including cryptography, computational complexity,
combinatorics and machine learning. We show that, in different situations where
the test class is \emph{not} closed under majority, this equivalence fails. This in
turn provides examples where the dense model theorem is \emph{false}.
Comment: 19 pages; to appear in ITCS 202
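To see why closure under majority is such a strong property, here is a toy experiment (the distributions and parameters are illustrative): each single-bit test distinguishes a slightly biased string from a uniform one with only a small advantage, but the majority vote over many such tests distinguishes far better.

```python
import random
from statistics import mean

random.seed(1)
k, delta, samples = 101, 0.1, 2000

def biased():
    # k independent bits, each 1 with probability 1/2 + delta.
    return [1 if random.random() < 0.5 + delta else 0 for _ in range(k)]

def uniform():
    return [random.randrange(2) for _ in range(k)]

def maj_test(x):
    # Majority vote over the k single-bit tests x -> x[i].
    return sum(x) > k // 2

# Advantage = acceptance probability on biased minus on uniform samples.
adv_single = mean(biased()[0] for _ in range(samples)) - \
             mean(uniform()[0] for _ in range(samples))
adv_maj = mean(maj_test(biased()) for _ in range(samples)) - \
          mean(maj_test(uniform()) for _ in range(samples))
print(adv_single < adv_maj)  # True
```

Here the single-bit advantage stays near delta = 0.1 while the majority test's advantage is several times larger, which is the amplification that a test class closed under majority gets for free.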
Pseudorandom generators and the BQP vs. PH problem
It is a longstanding open problem to devise an oracle relative to which BQP
does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural
conjecture about the capacity of the Nisan-Wigderson pseudorandom generator
[NW94] to fool AC^0, with MAJORITY as its hard function. Our conjecture is
essentially that the loss due to the hybrid argument (which is a component of
the standard proof from [NW94]) can be avoided in this setting. This is a
question that has been asked previously in the pseudorandomness literature
[BSW03]. We then make three main contributions: (1) We show that our conjecture
implies the existence of an oracle relative to which BQP is not in the PH. This
entails giving an explicit construction of unitary matrices, realizable by
small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a
simple framework (generalizing the setting of Aaronson [A10]) in which any
efficiently quantumly computable unitary gives rise to a distribution that can
be distinguished from the uniform distribution by an efficient quantum
algorithm. When applied to the unitaries we construct, this framework yields a
problem that can be solved quantumly, and which forms the basis for the desired
oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our
conjecture; our conjecture is thus formally easier to prove. The GLN conjecture
was recently proved false for depth greater than 2 [A10a], but it remains open
for depth 2. If true, the depth-2 version of either conjecture would imply an
oracle relative to which BQP is not in AM, which is itself an outstanding open
problem. Taken together, our results have the following interesting
interpretation: they give an instantiation of the Nisan-Wigderson generator
that can be broken by quantum computers, but not by the relevant modes of
classical computation, if our conjecture is true.
Comment: Updated in light of a counterexample to the GLN conjecture
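For readers unfamiliar with the construction, here is a toy sketch of a Nisan-Wigderson-style generator instantiated with MAJORITY as the hard function, as in the abstract. It uses the standard polynomial-based combinatorial design, so the supports of any two output bits are "nearly disjoint"; all sizes are illustrative.

```python
import random

q = 5                       # a prime, so arithmetic mod q is a field
d = q * q                   # seed length: one bit per point of the grid GF(q)^2

def design():
    # Each set is the graph of a polynomial of degree <= 2 over GF(q):
    # two distinct such graphs meet in at most 2 points, so the q^3 sets
    # of size q are pairwise nearly disjoint.
    sets = []
    for c0 in range(q):
        for c1 in range(q):
            for c2 in range(q):
                graph = [a * q + (c0 + c1 * a + c2 * a * a) % q for a in range(q)]
                sets.append(graph)
    return sets

def nw(seed, sets):
    # i-th output bit = MAJORITY of the seed bits indexed by the i-th set.
    return [int(sum(seed[j] for j in s) > len(s) // 2) for s in sets]

sets = design()
random.seed(0)
seed = [random.randrange(2) for _ in range(d)]
out = nw(seed, sets)
print(len(seed), len(out))  # 25 125
```

The generator stretches 25 seed bits to 125 output bits; the hybrid-argument loss the conjecture hopes to avoid arises when one tries to argue that each output bit individually looks random to AC^0 tests.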