PPP-Completeness with Connections to Cryptography
The Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with profound connections to the complexity of two fundamental cryptographic primitives: collision-resistant hash functions and one-way permutations.
contrast to most of the other subclasses of TFNP, no complete problem is known
for PPP. Our work identifies the first PPP-complete problem without any circuit
or Turing Machine given explicitly in the input, and thus we answer a
longstanding open question from [Papadimitriou1994]. Specifically, we show that
constrained-SIS (cSIS), a generalized version of the well-known Short Integer
Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
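For context, the standard SIS problem (a textbook formulation; the constrained variant cSIS is defined in the paper itself) asks:

\[
\mathrm{SIS}_{n,m,q,\beta}:\ \text{given a uniformly random } A \in \mathbb{Z}_q^{n \times m},\ \text{find } z \in \mathbb{Z}^m \setminus \{0\} \text{ such that } Az \equiv 0 \pmod{q} \text{ and } \|z\| \le \beta.
\]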
In order to give intuition behind our reduction for constrained-SIS, we
identify another PPP-complete problem with a circuit in the input but closely
related to lattice problems. We call this problem BLICHFELDT and it is the
computational problem associated with Blichfeldt's fundamental theorem in the
theory of lattices.
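For reference, Blichfeldt's theorem (a textbook statement, included here for context) says:

\[
\text{If } \mathcal{L} \subseteq \mathbb{R}^n \text{ is a full-rank lattice and } S \subseteq \mathbb{R}^n \text{ is measurable with } \mathrm{vol}(S) > \det(\mathcal{L}), \text{ then there exist distinct } x, y \in S \text{ with } x - y \in \mathcal{L}.
\]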
Building on the inherent connection of PPP with collision-resistant hash
functions, we use our completeness result to construct the first natural hash
function family that captures the hardness of all collision-resistant hash
functions in a worst-case sense, i.e. it is natural and universal in the
worst-case. The close resemblance of our hash function family to SIS leads us to the first candidate collision-resistant hash function that is both natural and universal in an average-case sense.
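As background, the classical SIS-based (Ajtai-style) hash family that this candidate resembles can be sketched as follows; this is a minimal illustration with arbitrary toy parameters, not the paper's construction:

    import numpy as np

    def keygen(n=32, m=512, q=3329, seed=None):
        # The key is a uniformly random matrix A in Z_q^{n x m}.
        rng = np.random.default_rng(seed)
        return rng.integers(0, q, size=(n, m)), q

    def hash_sis(A, q, x):
        # Compress a binary input x in {0,1}^m down to Z_q^n via h_A(x) = A x mod q.
        # Any collision x != x' gives a short nonzero z = x - x' with A z = 0 (mod q),
        # i.e. a SIS solution, which is the classical collision-resistance argument.
        return tuple(int(v) for v in (A @ np.asarray(x)) % q)

    A, q = keygen(seed=0)
    x = np.random.default_rng(1).integers(0, 2, size=A.shape[1])
    print(hash_sis(A, q, x))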
Finally, our results enrich our understanding of the connections between PPP,
lattice problems and other concrete cryptographic assumptions, such as the
discrete logarithm problem over general groups.
On the Cryptographic Hardness of Local Search
We show new hardness results for the class of Polynomial Local Search problems (PLS):
- Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), and the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
- Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions, and in particular is unconditionally secure. The construction also demonstrates the hardness of parallelizing local search.
The core observation behind the results is that the unique-proofs property of incrementally-verifiable computations, previously used to demonstrate hardness in PLS, can be traded for a simple incremental completeness property.
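For context, PLS is standardly captured (following Johnson, Papadimitriou, and Yannakakis) by the total search problem LOCAL-SEARCH:

\[
\text{given circuits } S : \{0,1\}^n \to \{0,1\}^n \text{ (successor) and } V : \{0,1\}^n \to \mathbb{N} \text{ (value), find } x \text{ with } V(S(x)) \le V(x),
\]

i.e. a point that the proposed improvement step fails to improve; such a local optimum always exists, which places the problem in TFNP.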
Public-Key Cryptography in the Fine-Grained Setting
Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if P = NP, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland.
A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity.
Ball, Rosen, Sabin, and Vasudevan [BRSV'17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function.
Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems.
The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC).
Our main contribution is a novel construction of a cryptographic key exchange,
together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-k-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions.
Where previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in O(n) time secure against O(n^2) adversaries (see [Merkle'78] and [BGI'08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest party and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
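For context, the Zero-k-Clique problem that the assumption refers to can be sketched as follows; the brute-force search below is purely illustrative (the assumption is that no algorithm does substantially better on random instances), and all parameter choices are toy values:

    from itertools import combinations
    import random

    def find_zero_k_clique(n, k, weight, modulus):
        # weight[(u, v)] is the weight of edge {u, v}, u < v, in a complete graph.
        # A zero k-clique is a set of k vertices whose k*(k-1)/2 edge weights
        # sum to 0 modulo the given modulus.
        for clique in combinations(range(n), k):
            if sum(weight[(u, v)] for u, v in combinations(clique, 2)) % modulus == 0:
                return clique
        return None

    # Toy average-case instance: i.i.d. uniform edge weights, as in the random
    # distributions studied in fine-grained cryptography.
    random.seed(0)
    n, k, modulus = 12, 3, 97
    weight = {(u, v): random.randrange(modulus)
              for u, v in combinations(range(n), 2)}
    print(find_zero_k_clique(n, k, weight, modulus))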
A New View on Worst-Case to Average-Case Reductions for NP Problems
We study the result by Bogdanov and Trevisan (FOCS, 2003), who show that
under reasonable assumptions, there is no non-adaptive worst-case to
average-case reduction that bases the average-case hardness of an NP-problem on
the worst-case complexity of an NP-complete problem. We replace the hiding and
the heavy samples protocol in [BT03] by employing the histogram verification
protocol of Haitner, Mahmoody and Xiao (CCC, 2010), which proves to be very
useful in this context. Once the histogram is verified, our hiding protocol is
directly public-coin, whereas the intuition behind the original protocol
inherently relies on private coins.
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
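One standard notion of "easy-on-average" from this theory is Levin's average polynomial time (stated here for context): a distributional problem (L, {D_n}) is in AvgP if some algorithm A with running time t_A satisfies, for some ε > 0,

\[
\mathbb{E}_{x \sim D_n}\!\left[ t_A(x)^{\varepsilon} \right] = O(n),
\]

a definition that is robust to the encoding quirks that break the naive "expected polynomial time" notion.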
A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
From average case complexity to improper learning complexity
The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from NP-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
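To fix terminology (standard definitions, not specific to this paper): a PAC learner for a hypothesis class H receives i.i.d. examples (x, h*(x)) with x ~ D for an unknown distribution D and unknown target h* ∈ H, and must output, with probability at least 1 − δ, a hypothesis ĥ with

\[
\Pr_{x \sim D}\left[ \hat{h}(x) \neq h^{*}(x) \right] \le \varepsilon.
\]

A proper learner must output ĥ ∈ H; an improper (representation-independent) learner may output any efficiently evaluable ĥ, which is what makes lower bounds so much harder to prove.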
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning DNF's is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of ω(1) halfspaces is hard.
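For context, "refutation" here follows the standard usage (a background definition; the paper's assumption generalizes Feige's): a refutation algorithm A for random 3SAT at clause density Δ must satisfy

\[
\Pr_{\phi \sim F(n,\, \Delta n)}\left[\, A(\phi) = \text{unsat} \,\right] \ge \tfrac{1}{2}, \qquad A(\phi) = \text{unsat} \implies \phi \text{ is unsatisfiable},
\]

where F(n, Δn) is the uniform distribution over 3CNF formulas with n variables and Δn clauses. Feige's hypothesis asserts that for every constant Δ, no polynomial-time algorithm achieves this.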